EU AI Act 2025 in E-Commerce: What Online Shops Must Do Now


Introduction: AI Has Long Been Everyday Reality in E-Commerce

Hardly any online retailer is aware that the AI Act has already triggered the first obligations as of February 2025. Anyone who still believes that AI only concerns large platforms like Amazon, eBay or tech companies like OpenAI risks fines and warning letters – often without even operating an AI system themselves.

Artificial intelligence has quietly but fundamentally transformed online trade. Whether product recommendations, dynamic pricing, chatbots in customer service, fraud prevention, personalised advertising or content creation – AI systems now control almost every central process of a webshop. Many retailers perceive these functions as mere “tools” without realising that they are now part of a strictly regulated legal framework.

With Regulation (EU) 2024/1689 on Artificial Intelligence (AI Act), the EU has for the first time created a binding, horizontal regime for the use of AI. It does not only apply to developers but also to those who use AI – the so-called “deployers”. Online retailers therefore fall within its scope even if they only use external software or SaaS solutions.

The consequence: e-commerce faces a new layer of compliance. In addition to data protection, competition and consumer protection law, autonomous, EU-wide obligations now apply.

The AI Act has been in force since 1 August 2024, but its provisions apply gradually – and retailers have had to act since early 2025.

The Legal Framework: What Does the AI Act Regulate?

The AI Act follows a risk-based approach. What matters is not the technology as such, but the risk that comes with its use. The AI Act distinguishes four levels:

  • Prohibited AI practices (Art. 5): Systems that manipulate people, exploit vulnerabilities or significantly distort behaviour are banned.
  • High-risk systems (Art. 6 in conjunction with Annex III): This includes applications for creditworthiness checks, scoring models or biometric identification. These are subject to strict obligations: risk management, technical documentation, data quality, transparency, conformity assessment and ongoing monitoring.
  • Transparency obligations (Art. 50): Even non-risk systems must disclose when users interact with AI or when content has been generated automatically.
  • Foundation models / General-purpose AI (Art. 53 et seq.): Large text and image generators are subject to additional documentation and disclosure duties – relevant for retailers that integrate such services.

The timeline illustrates the urgency:

  • February 2025: AI-literacy training obligations for employees working with AI, plus the ban on prohibited AI practices (Art. 5).
  • August 2025: Obligations for general-purpose AI models and the penalty regime.
  • August 2026: Transparency obligations, e.g. labelling of AI-generated content, and most high-risk requirements under Annex III.
  • August 2027: Full applicability of the high-risk regime, including AI embedded in regulated products (Annex I).

Practical relevance: early obligations – in particular training and contractual issues – are already in force in e-commerce. Waiting it out is not an option.

Why E-Commerce Is Particularly Affected

Many retailers believe they only use “standard software”. In reality, AI in e-commerce is deeply embedded in core trading processes – and it is precisely there that the AI Act bites.

Chatbots and Customer Communication

Many shops use AI-powered chatbots to handle customer enquiries automatically. Under Art. 50(1) AI Act, users must be informed, at the latest from August 2026, that they are interacting with an AI system.

Example: A chatbot answers delivery queries without the customer knowing they are talking to AI. From 2026 onwards, this would constitute a breach of the AI Act – and at the same time a possible misleading commercial practice under § 5 of the German Unfair Competition Act (UWG), as it creates the impression of human support.

What the labelling must look like in practice is still open. Possible approaches include visual labels (“AI assistant”), notices in the chat window or automatic introductory sentences. Guidance from the Commission (Art. 96 AI Act) and supervisory authorities will be decisive here.

Retailers should take stock of their chatbots now. Early labelling and documentation help avoid pressure to act at a later stage. Several tools, such as the EU AI Act Compliance Checker, are available for a first assessment.
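One of the labelling approaches mentioned above, the automatic introductory sentence, can be illustrated in code. This is a minimal sketch only; the session structure, function names and the wording of the disclosure are assumptions, not requirements of the AI Act:

```python
# Minimal sketch: a chat session that always opens with a visible
# AI disclosure, so the customer knows they are talking to a machine.
# Wording and data structure are illustrative assumptions.

AI_DISCLOSURE = (
    "Hi! I am an AI assistant. I answer automatically; "
    "you can request a human agent at any time."
)

def start_chat_session(session_id: str) -> dict:
    """Open a chat session that begins with the AI disclosure notice."""
    return {
        "id": session_id,
        "messages": [{"role": "system-notice", "text": AI_DISCLOSURE}],
    }

def add_user_message(session: dict, text: str) -> dict:
    """Append a customer message to the session."""
    session["messages"].append({"role": "user", "text": text})
    return session

if __name__ == "__main__":
    session = start_chat_session("demo-1")
    print(session["messages"][0]["text"])
```

Placing the disclosure in the session-creation step, rather than relying on individual chat flows, ensures no conversation can start without it.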

Product Recommendations and Personalised Advertising

Recommendation systems and personalised advertising are core functions of many online shops. They analyse purchasing behaviour and display suitable suggestions. Openly disclosed, interest-based personalisation (e.g. based on previous purchases) remains permissible.

It becomes problematic where algorithms are specifically designed to influence behaviour. Under Art. 5(1)(a) AI Act, it is prohibited to use AI systems that employ subliminal, intentionally manipulative or deceptive techniques in order to significantly change a person’s behaviour.

The link to the Digital Services Act (DSA) is obvious: “dark patterns” are already banned for online platforms. Now the topic also moves into focus for online shops as potentially prohibited manipulative AI practice within the meaning of Art. 5 AI Act.

Examples: A system identifies gambling addiction from behavioural patterns and targets the user with gambling offers; or an algorithm generates decision pressure (“only 5 items left in stock”) because it has identified a high purchase propensity.

These would be classic cases of prohibited manipulative AI practices. By contrast, transparent, interest-based recommendations based on previous purchases or interest filters remain permitted.

In e-commerce, the line between personalisation and manipulation is fluid. Misjudgements and warning letters are a real risk. In practice, systems must therefore be reviewed for pressure or deception mechanisms, users must be clearly informed, and a “red flag” catalogue for risky design patterns should be created.
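The “red flag” catalogue suggested above could be operationalised as a simple screening step over marketing copy. The flagged categories, trigger phrases and the naive substring check below are assumptions for demonstration; a real review would also need to consider whether the claim is true and how it was generated:

```python
# Illustrative "red flag" catalogue for risky design patterns.
# Categories and trigger phrases are assumptions, not an official list.
RED_FLAGS = {
    "false scarcity": ["only", "left in stock"],
    "countdown pressure": ["offer ends in"],
}

def flags_in(copy: str) -> list[str]:
    """Return the red-flag categories whose trigger phrases all appear in the copy."""
    text = copy.lower()
    return [name for name, phrases in RED_FLAGS.items()
            if all(p in text for p in phrases)]

print(flags_in("Hurry! Only 5 left in stock."))
```

A hit does not prove a prohibited practice; it marks the copy for human review, e.g. to check whether the scarcity claim reflects actual stock.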

Fraud Prevention and Scoring

Many shops use AI for fraud prevention, for example for credit checks or detecting unusual payment patterns. Under Annex III No. 5 AI Act, these systems fall into the high-risk category.

Example: A shop automatically calculates the risk of payment default and excludes certain customer groups from purchase on account. In that case, the high-risk regime applies, with extensive compliance obligations: risk management (Art. 9), data quality (Art. 10), technical documentation (Art. 11) and ongoing monitoring are mandatory.

The interface with the GDPR is particularly sensitive. The Court of Justice of the European Union (judgment of 7 December 2023, C-634/21, SCHUFA) confirmed that automated scoring can constitute automated individual decision-making within the meaning of Art. 22 GDPR. The AI Act and the GDPR thus apply in parallel – with extensive transparency obligations and data subject rights.

For online retail this means: fraud prevention must not remain a “black box”. Retailers must document in a comprehensible manner which criteria are used for decisions, how data is processed and who bears responsibility. Only those who understand and document the logic of their AI-driven fraud systems can manage liability risks.

AI-Generated Content

More and more retailers have product descriptions, images or advertising content generated automatically. Under Art. 50(2) AI Act, such content must in future be clearly labelled as AI-generated in order to ensure consumer transparency.

Example: An online shop publishes automatically generated product reviews without labelling them as AI-written. As of August 2026, this would not only breach Art. 50(2) AI Act, but could also constitute misleading conduct under § 5 UWG, as it suggests genuine customer opinion.

Copyright and personality rights also come into focus: if AI-generated images are based on copyrighted material or realistically depict individuals, third-party rights may be infringed. Retailers should document which tools they use, on which data the models are based and what post-editing takes place.

For online shops, an internal procedure for labelling and reviewing AI-generated content is advisable. This should include random checks of generated texts and images as well as clear contractual rules with agencies and service providers on the permissible use of generative AI. This helps prevent automation from unintentionally leading to misleading practices or copyright infringements.
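The internal labelling and documentation procedure suggested above can be sketched as a small record type: each piece of generated content carries a visible notice plus the metadata (tool, review status, date) needed for documentation. Field names and the label wording are assumptions; the AI Act does not prescribe a specific format:

```python
# Sketch of an internal record for AI-generated product texts:
# visible labelling plus documentation of tool and review status.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class GeneratedContent:
    text: str
    tool: str                 # which generator produced the text
    reviewed: bool = False    # set True after the random human check
    created: date = field(default_factory=date.today)

    def labelled(self) -> str:
        """Return the text with a visible AI-generation notice appended."""
        return (f"{self.text}\n\n"
                f"[Note: this description was generated with AI ({self.tool}).]")

desc = GeneratedContent(text="Soft cotton t-shirt, machine washable.",
                        tool="ExampleGen")
print(desc.labelled())
```

Keeping the label inside the content object, rather than adding it manually in templates, makes it harder for unlabelled text to slip into the shop.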

Risks for Retailers: More Than Just Fines

Arts. 99 et seq. AI Act provide for fines of up to EUR 35 million or 7% of global annual turnover. For many e-commerce businesses, such sums are existential.

In practice, the combination with competition law is particularly relevant: breaches of transparency obligations can also be actionable under unfair competition law. This leads to double enforcement pressure – from authorities and competitors.

Practical Recommendations for Retailers

For online retailers the message is clear: waiting is not an option. Implementation of the AI Act is not a future issue but an ongoing compliance task.

Five steps that should be taken now:

  1. Take stock: Which systems contain AI components – knowingly or as part of external tools?
  2. Integrate lawfully: Review contracts with SaaS providers and service providers – who is liable in case of breaches?
  3. Build compliance structures: Appoint responsible persons, define policies, introduce documentation.
  4. Train staff: Explicitly required from February 2025 – and crucial to raise awareness and control liability risks.
  5. Implement transparency: Customers must clearly see where AI is used.

Complement these steps with an internal risk matrix in which each AI tool is assessed according to risk, data source and responsibility. This makes it easier to set priorities for compliance measures.
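The risk matrix described above can be kept as a simple structured inventory. The tool entries, risk categories and ordering below are illustrative assumptions; the point is that every AI tool gets a risk level, a data source and a named owner, and that compliance work starts with the riskiest systems:

```python
# Minimal sketch of an internal AI risk matrix: each tool is recorded
# with its AI Act risk level, data source and responsible owner.
# Entries and category names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AiTool:
    name: str
    risk: str          # "prohibited", "high", "transparency", "minimal"
    data_source: str
    owner: str         # responsible person or team

RISK_ORDER = {"prohibited": 0, "high": 1, "transparency": 2, "minimal": 3}

def prioritised(tools: list[AiTool]) -> list[AiTool]:
    """Order tools so the riskiest systems are reviewed first."""
    return sorted(tools, key=lambda t: RISK_ORDER[t.risk])

inventory = [
    AiTool("Recommendation engine", "transparency", "order history", "Marketing"),
    AiTool("Payment-default scoring", "high", "customer + payment data", "Finance"),
    AiTool("Image upscaler", "minimal", "product photos", "Content"),
]

for tool in prioritised(inventory):
    print(f"{tool.risk:>12}: {tool.name} ({tool.owner})")
```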

More Than Just the AI Act: Further Legal Issues Around AI

The use of artificial intelligence is not purely a technical matter. It touches on a wide range of legal areas which together define the framework for “AI compliance”.

Data Protection Law – Where Do the Data Come From?

AI systems only work with training and usage data. Retailers must clarify:

  • Are data collected lawfully (Art. 6 GDPR)?
  • Can they be used for this purpose, or is there a new purpose of processing?
  • Do data subjects need to be informed that their data are fed into an AI system?

When using external AI tools, retailers should check whether data processing is transparent and GDPR-compliant. Otherwise, fines and reputational damage loom.

Copyright and Personality Rights – What Is in the AI Input?

Many AI models are trained on copyrighted texts, images or music. This raises the following questions:

  • May the model be trained on this data at all? (copyright, licensing law)
  • May the generated content be used commercially?

Personality rights are also relevant: where real faces, voices or names are used, the right to one’s own image or voice may be affected.

Retailers should therefore insist on evidence of data provenance – particularly from external providers of generative AI.

Contractual Safeguards – What Do Service Providers Actually Deliver?

Anyone buying AI-driven software or marketing services should clearly define in contracts:

  • Where do the training or input data come from?
  • Who is liable if it later emerges that copyrighted material or personal data were used unlawfully?
  • Which obligations does the provider assume with regard to transparency, labelling or compliance with the AI Act?

Contracts should also stipulate how generated content may be used and whether it is subject to labelling duties.

Internal Policies – How May Employees Use AI?

The use of AI must be regulated internally. Companies should introduce a binding AI policy specifying:

  • which tools are permitted,
  • which data (e.g. customer data, contracts) must not be entered into public AI services,
  • how to deal with AI-generated content (e.g. draft texts, translations, images).

Employees must not experiment independently with sensitive data. This may violate data protection or confidentiality obligations. Training and access rules ensure legally compliant use.
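Parts of such an AI policy can even be enforced technically, for example with a gate that blocks prompts to non-permitted tools or prompts containing obvious personal data. The permitted-tool list and the deliberately naive pattern checks below are assumptions for illustration; real detection of personal data is considerably harder:

```python
# Hypothetical policy gate: allow a prompt only for tools cleared by
# the AI policy and only if no obvious personal data is detected.
# Tool names and patterns are illustrative assumptions.
import re

PERMITTED_TOOLS = {"internal-llm"}                # cleared by the AI policy
SENSITIVE_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.\w+\b"),      # e-mail addresses
    re.compile(r"\bDE\d{20}\b"),                  # German IBANs
]

def may_submit(tool: str, prompt: str) -> bool:
    """Return True only for permitted tools and prompts without obvious personal data."""
    if tool not in PERMITTED_TOOLS:
        return False
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

print(may_submit("public-chatbot", "Summarise this contract"))        # tool not permitted
print(may_submit("internal-llm", "Customer mail: jane@example.com"))  # personal data
print(may_submit("internal-llm", "Draft a product description"))
```

Such a gate supplements, but does not replace, training: it catches obvious cases, while the policy governs everything the patterns cannot see.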

Competition Law and Misleading Practices

If AI-generated content or chatbots create the impression of human communication, this may infringe § 5 UWG (misleading commercial practices).

Unlabelled AI-generated product reviews or advertising may also be unfair.

Before publication, content should therefore be checked to see whether it gives a false impression of human authorship or authenticity.

Liability and Responsibility

Who is liable if an AI system produces incorrect or unlawful results?

The AI Act always ties obligations to an identifiable natural or legal person – typically the provider or deployer. For companies this means: clear internal responsibilities, written documentation and ongoing system control.

Conclusion: Early Preparation Is Mandatory

The AI Act marks a milestone in European digital law. For e-commerce, it entails a tangible expansion of regulatory obligations – even for retailers who merely use external SaaS tools. Particularly relevant in practice are labelling requirements for chatbots and AI-generated content, the distinction between permissible personalisation and manipulation, and the high-risk classification of scoring systems.

Guidance from the Commission and supervisory authorities is still lacking, but retailers should now start organisational measures – stocktaking, training, documentation, contract review – in order to comply with the phased obligations.

AI compliance goes far beyond the AI Act. It encompasses data protection, copyright and contract law, labour law, competition law and liability. Companies should establish clear contract clauses, internal policies and review processes at an early stage to ensure that the use of AI is safeguarded legally, technically and organisationally.

As a law firm specialising in e-commerce, contract and competition law, we support you in identifying and managing the legal risks of AI use in online retail – from contract review and risk analyses to the development of tailored compliance structures. Our advice combines legal precision with commercial understanding – so that AI creates opportunities rather than risks for your business.
