

      The emergence of agentic AI systems has the potential to significantly reshape online retail ecosystems and, in turn, how businesses sell and market their products online. Unlike traditional AI tools that assist human decision-making, agentic systems are capable of autonomously interacting with digital services, analysing consumer data and executing actions such as recommendations, negotiations or transactions.

      While these capabilities may unlock efficiency and new commercial opportunities, as they begin to mediate interactions between businesses and consumers, they also introduce new legal risks across several regulatory domains (such as consumer protection, data protection and competition law) that businesses need to pay attention to. Recent guidance from the Competition and Markets Authority (CMA) and the Information Commissioner’s Office (ICO) suggests that businesses deploying agentic AI systems will need to carefully consider how existing legal frameworks apply to increasingly autonomous digital environments.

      This article explores the emerging legal challenges associated with agentic AI in online retail, focusing on consumer law risks linked to digital design practices, the data protection implications of increasingly autonomous systems, the competition concerns arising from the evolving AI infrastructure landscape and what this could mean for businesses and consumers.

      Lisa Navarro

      Head of Regulatory Law, KPMG Law

      KPMG in the UK



      Key takeaways for businesses

      In a business-to-consumer context, consumer law applies to AI agents in the same way it applies to human agents. It also applies to businesses’ online design choices and how they impact consumers’ decision-making. Misleading consumers via AI agents may give rise to consumer law breaches that fall within the CMA’s new direct enforcement regime, exposing businesses to significant fining powers.

      Recent commentary from the ICO reminds us that businesses remain responsible for ensuring that agentic AI systems comply with existing data protection obligations and that innovation must not come at the expense of individuals’ information rights.

      Competition law compliance is also likely to be under the spotlight because of:

      • potential concerns around algorithmic collusion; and
      • potential abuse-of-dominance concerns that might arise where the development and deployment of agentic AI increasingly depend on a small number of technology providers, infrastructure platforms and data ecosystems.

      Agentic AI is expected to become increasingly prevalent in e‑commerce, with studies indicating that agents could drive up to 15% of European e‑commerce spending by 2030. This is not without risk for businesses and consumers, and the recently published 2026 International AI Safety Report, led by Turing Award winner Yoshua Bengio, sets out the risks of general-purpose AI systems:


      • AI systems are capable of manipulating people’s thoughts or decisions by generating content that influences their beliefs and behaviours. Studies have found that, in experimental settings, AI-generated content can influence people’s beliefs at least as effectively as non-expert humans can;
      • possible harms of AI manipulation can range from individual exploitation to systemic erosion of trust; and
      • AI-generated content can be used in social engineering to manipulate people into sending money (for example, through a phishing attack convincing a person to make financial transfers) or into disclosing sensitive information.

      Where AI agents are being utilised as a tool to interact with consumers in the context of transactional decisions, businesses must ensure that agent outputs do not mislead or manipulate.

      CMA Focus
       

      Following the entry into force of its consumer protection enforcement powers under the Digital Markets, Competition and Consumers Act 2024 (DMCCA) a year ago, the CMA has been helping businesses understand and apply the updated legal framework through targeted guidance. In March 2026 the CMA published its views on complying with consumer law when using AI agents. The guidance makes it clear that UK consumer law applies regardless of whether consumer interactions are with employees or AI agents, even where those AI agents are procured through a third-party service provider. Given the breadth of activities an AI agent may be called on to undertake (from responding to customer queries and processing refunds through to running marketing campaigns), businesses should be alert to this area of risk. The guidance emphasises that transparency with customers will be key, as will an effective process for training, monitoring and refining the AI agents.

      The CMA has also considered online choice architecture where digital interfaces deploy misleading features such as false urgency claims (any scarcity, popularity, ‘act fast’ or time-limited claim that is presented to consumers online). Building on its 2022 discussion paper, the CMA followed through with an open warning letter in 2023 and has ongoing enforcement cases in this space. Not only might such architecture help businesses maintain, leverage and exploit market power by making it easier to retain consumers or redirect them within digital ecosystems, it can also distort consumer behaviour and cause consumers to buy more than they actually want, at higher prices and after spending less time searching.

      Among other initiatives, the CMA issued a joint paper with the ICO on harmful design in digital markets, explaining how consumers’ ability to exercise meaningful choice and control is fundamental to effective data protection, consumer protection, and competition regulation. The joint paper sets out examples of potentially harmful digital design practices:


      • Harmful nudges

        making it easy or “nudging” users to make inadvertent or ill-considered decisions.

      • Confirmshaming

        pressuring or shaming someone into doing something by making them feel guilty or embarrassed for not doing it.

      • Biased framing

        presenting the supposed benefits or risks of a choice in a way which makes it harder for users to assess relevant information and make informed choices, for example by leading them towards the more favourably framed choice or away from an unfavourably framed one.

      • Bundled consent

        asking the user to consent to the use of their personal information for multiple separate purposes or processing activities via a single consent option.

      • Default settings

        applying a predefined choice that the user must take active steps to change.

      The ICO has emphasised that innovation in agentic AI must not come at the expense of individuals’ information rights. Businesses remain responsible for ensuring that the agentic systems they develop, deploy or integrate comply with existing data protection obligations, as highlighted in the ICO report AI’ll get that! Agentic commerce could signal the dawn of personal shopping ‘AI-gents’.

      Importantly, the ICO highlights that the design and architecture of agentic systems directly influence how data protection law applies and how individuals can exercise their rights. Systems that lack clearly defined purposes, access unnecessary databases or lack effective monitoring mechanisms may significantly increase the risk of privacy harms.

      Agentic AI therefore does not introduce entirely new legal obligations. Instead, it intensifies the practical application of existing data protection principles, particularly those relating to transparency, purpose limitation, data minimisation, automated decision-making, accountability and accuracy.


      • Human responsibility and controllership

        Despite the language used to describe “AI agents”, the ICO makes clear that agentic systems do not remove organisational responsibility for data processing in line with individuals’ rights. Businesses remain accountable for how personal data is used by the systems they deploy. This reinforces the importance of strong governance frameworks, including appropriate risk assessments and internal oversight mechanisms. Where agentic systems may present high risks to individuals’ rights and freedoms, businesses must conduct a Data Protection Impact Assessment (DPIA) prior to deployment.

      • Transparency and explainability

        Transparency obligations may become more complex as agentic systems evolve. The ICO notes that the emergence of multi-agent architectures and agent-to-agent interactions may reduce visibility into how personal data is processed and how decisions are reached about individuals. In retail environments, this may affect areas such as personalised recommendations, automated promotions or customer service interactions. Businesses should therefore consider how consumers are informed about the use of automation and how meaningful information can be provided regarding the role of AI systems in influencing outcomes.

      • Purpose limitation and data minimisation

        Agentic systems may require access to multiple tools and datasets in order to complete tasks. However, businesses must ensure that processing purposes remain clearly defined and proportionate. The ICO warns that defining purposes too broadly in order to accommodate potential system behaviour may conflict with the principle of purpose limitation. Similarly, businesses should ensure that agentic systems only access the personal data necessary to perform their function. Technical and organisational controls such as permission management, data masking, monitoring mechanisms and transparency notices may help ensure compliance with the principle of data minimisation.
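      By way of illustration, the permission-management and data-masking controls described above might be sketched as follows. This is a minimal illustration only; the field names, processing purposes and `minimise` helper are hypothetical assumptions, not drawn from the ICO guidance.

```python
# Minimal sketch of purpose-scoped data access for an AI agent.
# Field names and purposes are illustrative, not prescribed by the ICO.

ALLOWED_FIELDS = {
    # Each declared processing purpose maps to the minimum fields it needs.
    "process_refund": {"order_id", "payment_token", "order_total"},
    "answer_product_query": {"order_id", "items"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Return only the fields the declared purpose requires, masking the rest."""
    try:
        allowed = ALLOWED_FIELDS[purpose]
    except KeyError:
        # Undeclared purposes are refused outright (purpose limitation).
        raise PermissionError(f"purpose not declared: {purpose}")
    return {k: (v if k in allowed else "***MASKED***") for k, v in record.items()}

customer_record = {
    "order_id": "A123",
    "items": ["kettle"],
    "order_total": 29.99,
    "payment_token": "tok_x",
    "home_address": "1 High St",
}

# The agent answering a product query never sees payment or address data.
visible = minimise(customer_record, "answer_product_query")
```

      In this sketch the masking decision sits outside the agent itself, so a broadly capable model can still only see the data the declared purpose justifies.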

      • Automated decision-making (ADM) and individual rights

        Agentic systems may perform actions that affect individuals, such as approving refunds, prioritising services or generating personalised offers. Data protection legislation requires businesses to inform individuals when significant automated decisions affect them and to ensure that individuals can contest such decisions or request human intervention. The ICO therefore emphasises the importance of maintaining meaningful human oversight within agentic systems. In practice, businesses should ensure that automated processes are transparent and that governance mechanisms exist to review and correct automated outcomes where necessary.

      • Accuracy

        Agentic systems rely on probabilistic AI models (i.e. those that make decisions based on likelihoods and probabilities) that may occasionally produce inaccurate outputs. The ICO highlights that inaccurate information stored in an agentic system’s memory may influence multiple subsequent decisions, potentially amplifying errors across different actions. Ensuring the quality and accuracy of the data used by these systems is therefore critical. Businesses deploying agentic AI should implement monitoring and correction mechanisms capable of identifying and addressing inaccurate outputs before they lead to harmful outcomes. Examples of such mechanisms include chain-of-thought reasoning and retrieval-augmented generation (RAG), as well as further context-specific fine-tuning, all of which can enhance the accuracy of LLMs.
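      One way to contain the error-propagation risk the ICO describes is to record the provenance of each item in an agent's memory, so that retracting an inaccurate entry also retracts everything derived from it. The `AgentMemory` structure below is a hypothetical sketch, not a design prescribed by the ICO.

```python
# Sketch of an agent memory with provenance, so an inaccurate entry can be
# retracted together with every decision derived from it. Structure is illustrative.

class AgentMemory:
    def __init__(self):
        self.facts = {}  # fact_id -> (value, derived_from)

    def remember(self, fact_id, value, derived_from=()):
        self.facts[fact_id] = (value, tuple(derived_from))

    def retract(self, fact_id):
        """Remove a fact and, recursively, every fact derived from it."""
        dependants = [f for f, (_, deps) in self.facts.items() if fact_id in deps]
        self.facts.pop(fact_id, None)
        for d in dependants:
            self.retract(d)

memory = AgentMemory()
memory.remember("stock_level", 0)                         # inaccurate upstream data
memory.remember("offer_backorder", True, ["stock_level"]) # decision based on it
memory.retract("stock_level")                             # the correction cascades
```

      The point of the sketch is that a correction mechanism needs to reach not only the inaccurate record itself but also the downstream actions that relied on it.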


      Interconnections
       

      Although agentic AI raises distinct questions in each regulatory domain, the underlying risks are often interconnected. Systems that rely heavily on personal data to enable autonomous decision-making also tend to depend on complex technological infrastructures, including advanced models, cloud platforms and large-scale datasets. This creates a natural intersection between data protection considerations and broader competition concerns in digital markets.

      From a data protection perspective, concerns primarily relate to how personal data is collected, analysed and used to enable autonomous system behaviour. The use of large volumes of behavioural data, combined with ADM, raises questions around transparency, fairness and accountability.

      Beyond consumer protection and data governance considerations, the increasing reliance on powerful AI infrastructure introduces additional challenges from a competition law perspective. As agentic AI systems depend on sophisticated models, specialised computing resources and access to large datasets, the structure of the AI value chain itself may influence market dynamics and competitive conditions.

      As the economic and social impacts of AI accelerate alongside rapid expansion of the underlying infrastructure, the OECD and competition regulators have recognised the potential for competition law risk from several features of the AI infrastructure value chain, including the focus on innovation and IP, highly concentrated markets with substantial barriers to entry, vertical integration, switching barriers and supply shortages.

      In the UK, the CMA has explored the relationship between AI and collusion, with a particular focus on AI and algorithmic pricing. The CMA is clear on both the potential benefits (efficiency, speed, personalisation of offers) and competition risks associated with algorithmic pricing, such as:


      • rival businesses entering into an explicit agreement to collude and then deploying algorithms to implement, monitor and enforce that arrangement; or
      • competitors relying on the same algorithm or data hub to facilitate the indirect exchange of competitively sensitive information.

      The underlying theories of harm are not specific to the use of AI, but it is accepted that the emergence of more powerful AI models that develop, implement and monitor the algorithms and resultant pricing can compound the risk, particularly the further removed humans are from that process. This raises a particularly challenging question that is likely to be the subject of future litigation: the attribution of liability to businesses for autonomous AI systems that learn to collude as a method of maximising profits, even where there is no human intent to collude.

      Regulatory scrutiny
       

      The CMA has signalled that it is tackling this head on, with a programme of technology horizon scanning that will monitor developments in AI and assess how they might impact its competition enforcement work, as well as harnessing agentic AI as a tool to scan markets and identify potential infringements.

      The CMA is not alone in these efforts. In France, following analysis of the competitive functioning of the generative AI sector in 2024 and a study on the energy and environmental impact of AI in 2025, the Autorité de la concurrence launched an inquiry into the competitive functioning of the conversational agent sector in January 2026.

      Taken together, these developments illustrate how the rise of agentic AI requires businesses to navigate an increasingly complex regulatory landscape. Consumer protection, data protection and competition law considerations are no longer isolated compliance issues but overlapping dimensions that businesses must address when designing and deploying autonomous AI systems.

      Businesses exploring the deployment of agentic AI systems should begin by assessing how these technologies may affect consumer interactions, personal data processing and market dynamics.

      Businesses must not mislead, manipulate or exert undue pressure on consumers, regardless of whether those outcomes are driven by human decisions, algorithms or the actions of AI agents. This means they should consider the following:


      • Informing consumers

        about the fact that they are interacting with AI agents during the purchase journey (if that is the case) in order to ensure consumers are not misled and are given the information they need to make informed decisions. This may also include giving thought as to whether consumers are given meaningful ways to challenge or verify the outputs of the AI agent.

      • Training

        ensuring that AI agents are appropriately trained on the legal frameworks that apply to their activities. In the same way that employees are subject to compliance training, AI agents should be trained using the applicable law and guidance to help set the context for their broader actions.

      • Developing guardrails around AI agents

        considering what the agent will be set up to do and how that might affect consumers. Guardrails are essentially high-level rule sets determining what the AI agent should or should not do and in the context of using AI agents, businesses should consider critical areas of compliance such as regulatory compliance, privacy, security and ethics when developing such rule sets. Guardrails may be around the following:

        a. disclosure of information by the AI agent to the consumer (for example, limitations, incentives and affiliations). These guardrails should ensure that if it is disclosed by an AI agent, it is disclosed clearly, but also not in a misleading way; or

        b. activity of the AI agent (for example how it acts if it does not obtain a mandatory confirmation for high-risk actions).

        Businesses may also consider implementing multiple layers of safeguards so in the event that one fails, others may still prevent harm.
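      A layered guardrail set of this kind might be sketched as follows, with each rule independently vetting a proposed action so that if one layer fails, another can still block harm. The rule names and action fields are illustrative assumptions, not a prescribed rule set.

```python
# Sketch of layered guardrails: each rule independently vets a proposed agent
# action, and any single failure blocks it. Rule names and fields are illustrative.

def requires_confirmation(action):
    # High-risk actions must carry an explicit consumer confirmation.
    return not action.get("high_risk") or bool(action.get("confirmed"))

def discloses_affiliations(action):
    # Messages mentioning a partner must clearly disclose the affiliation.
    return "partner" not in action.get("message", "") or bool(action.get("disclosed"))

GUARDRAILS = [requires_confirmation, discloses_affiliations]

def vet(action: dict) -> bool:
    """Allow the action only if every guardrail layer passes."""
    return all(rule(action) for rule in GUARDRAILS)

blocked = vet({"high_risk": True, "confirmed": False})
allowed = vet({"high_risk": True, "confirmed": True, "message": "hello"})
```

      Because the rules are evaluated independently, removing or weakening one layer does not disable the others, which is the essence of the multiple-safeguard approach described above.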

      • Testing

        with particular focus on specific scenario testing with respect to consumer law, developing benchmarking frameworks and scoping the limits of AI agents. This could include multi-step adversarial and long-horizon evaluations that simulate how agents behave over extended interactions. It also extends to frameworks for testing, verification, evaluation and validation to ensure AI systems are fit for purpose.
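        A multi-step scenario test of this kind might look like the following sketch, which replays a scripted consumer interaction against a stand-in agent and flags any false-urgency phrasing in its replies. The stub agent, the script and the banned-phrase list are illustrative assumptions.

```python
# Sketch of a multi-step scenario test: replay a scripted consumer interaction
# against the agent and check that no reply contains a false-urgency claim.
# The stub agent and banned-phrase list are illustrative.

BANNED_URGENCY = ["only 1 left", "offer ends in"]

def stub_agent(message: str) -> str:
    # Stand-in for a real agent call; here it always answers factually.
    return f"Thanks for asking about '{message}'. It is in stock."

def run_scenario(agent, script):
    """Drive the agent through a multi-turn script, collecting violations."""
    violations = []
    for turn, message in enumerate(script):
        reply = agent(message).lower()
        for phrase in BANNED_URGENCY:
            if phrase in reply:
                violations.append((turn, phrase))
    return violations

violations = run_scenario(stub_agent, ["red kettle", "is it popular?", "price?"])
```

        A real evaluation would use adversarial scripts and far longer interactions, but the structure (scripted turns, a checked output, a recorded violation log) stays the same.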

      • Monitoring

        whether you rely on Claude, OpenAI or any other family of models, applying continual testing to ensure AI agents do not go off course. Businesses should consider:
         

        a. ensuring there is a “kill switch”, failure notifications to alert when a process fails, real-time intervention capabilities and clear ownership when problems are uncovered;

        b. monitoring performance, feedback and complaints for signs of deception or other harmful forms of cognition or unethical behaviour with human oversight and clear escalation processes to resolve issues;

        c. maintaining strong audit trails and risk assessment matrices evidencing: action taken to address risks, your resilience to technical interference and pre-defined processing purposes; and

        d. defining clear accountability, including liability if an agent acts outside customer instructions.
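        The kill-switch and audit-trail measures above might be sketched as follows; the `SupervisedAgent` class and its event labels are hypothetical illustrations, not a product feature.

```python
# Sketch of a kill switch with an audit trail: once tripped, every further
# agent action is refused and logged, leaving a clear record for human review.

import datetime

class SupervisedAgent:
    def __init__(self):
        self.halted = False
        self.audit_log = []  # (timestamp, event, detail)

    def kill_switch(self, reason: str):
        """Halt the agent immediately and record why."""
        self.halted = True
        self._log("KILL_SWITCH", reason)

    def act(self, action: str) -> bool:
        if self.halted:
            self._log("REFUSED", action)  # refusal is itself evidenced
            return False
        self._log("EXECUTED", action)
        return True

    def _log(self, event: str, detail: str):
        stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
        self.audit_log.append((stamp, event, detail))

agent = SupervisedAgent()
agent.act("issue_refund")
agent.kill_switch("deceptive output detected")
late = agent.act("issue_refund")  # refused: the switch has been tripped
```

        The timestamped log doubles as the audit trail referred to at point c, evidencing both the intervention and the actions it prevented.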

      • Refining AI agents

        this involves understanding where the problem is coming from and what needs to be done. The types of questions that need to be considered are:
         

        a. Does an entire process need to be shut down, or does only one aspect need fixing?

        b. Do models need to be swapped?

        c. Should the prompting techniques be reviewed and modified?

        d. Should the context provided to the agent be refined?

         

        Essentially this ability to refine as needed will be defined by the testing techniques and processes put in place.

      • Protecting Information Rights in Autonomous Systems

        Innovation must not come at the expense of individuals’ information rights, and existing data protection obligations (controller-processor obligations) remain applicable to businesses deploying autonomous agentic systems.

      In addition to technical governance measures, businesses should also carefully assess how the design of digital interfaces and user journeys may influence consumer behaviour. With respect to Digital Design Practices businesses should:


      • Remain user-focused

        deceptive practices, such as charging users more than they would expect, are capable of eroding trust and disproportionately affecting vulnerable consumer groups.

      • Use ethical design to build trust and long-term brand loyalty

        this includes researching users’ needs and expectations thoroughly, ensuring design is culturally sensitive and making sure that information about how data will be used is easily accessible to users.

      • Implement data protection by design and by default

        Failing to do so can result in opaque data flows, ADM and complex multi‑agent behaviours in advanced agentic systems. This ultimately makes it more difficult for individuals to understand how their data is used or to effectively exercise their rights.

      • Integrate ethical design into business goals

        for instance, by incorporating user trust as a metric, employing diverse design teams and implementing ethical review frameworks.


