The Good, the Braver and the Curious.
As we navigate through 2025, the European legal landscape is undergoing a significant transformation, particularly in the realms of artificial intelligence (AI) regulation and data sovereignty. These changes are reshaping how legal departments and, more specifically, eDiscovery professionals operate, compelling them to adapt to new compliance requirements and technological advancements.
Following on from our blog post on Navigating eDisclosure in the UK and Practice Direction 57AD, we now explore AI regulation across the wider European spectrum, closing with a contrasting glance at the UK and the US.
AI Regulation: The EU AI Act’s Impact
The European Union’s Artificial Intelligence Act (AI Act), which came into effect in August 2024, has introduced a comprehensive regulatory framework for AI systems.
Before digging any further, it is important to acknowledge the scope and inherent challenge of any EU Act: it must always strike a balance between a unified approach and national variations. The EU AI Act is no exception. It establishes a unified regulatory framework for artificial intelligence across the European Union, whilst allowing for national variations in implementation. Consider how different EU jurisdictions may interpret or apply the Act differently:
- National Competent Authorities – Each EU member state will have its own national AI regulator, responsible for enforcing the Act within its jurisdiction. This means compliance approaches may vary depending on local enforcement priorities.
- Sector-Specific Regulations – Some countries may layer additional AI rules on top of the EU AI Act, particularly in industries like finance, healthcare, and defense, where national laws already impose strict oversight.
- Risk-Based Approach – The Act categorizes AI systems into prohibited, high-risk, and low-risk categories. However, member states may interpret risk differently, leading to stricter or more lenient enforcement in certain regions.
- Data Protection & Privacy Laws – While the EU AI Act aligns with GDPR, some countries—like Germany—have stronger national privacy laws, which could lead to stricter AI governance in those jurisdictions.
- Innovation vs. Regulation Balance – Some EU countries, such as France and the Netherlands, may adopt a more innovation-friendly approach, focusing on AI development incentives, while others may prioritize consumer protection and ethical AI.
This legislation takes a sensible and comprehensive approach that applies to all legal entities. It categorizes AI applications by risk level (unacceptable, high, limited, and minimal) and imposes corresponding obligations on providers and users. High-risk AI systems, such as those used in legal decision-making or biometric identification, are subject to stringent requirements, including conformity assessments and transparency obligations.
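As a loose illustration of this risk-tier logic, the mapping from use case to obligation could be sketched as follows. The tier assignments for specific use cases below are simplifying assumptions for illustration only, not legal classifications under the Act.

```python
# Illustrative sketch: the EU AI Act's four risk tiers paired with
# example obligations. Not legal advice.
RISK_TIERS = {
    "unacceptable": "prohibited from the EU market",
    "high": "conformity assessment, transparency, human oversight",
    "limited": "transparency notices to users",
    "minimal": "no mandatory obligations; voluntary codes of conduct",
}

# Hypothetical tier assignments, chosen for illustration only.
USE_CASE_TIER = {
    "social_scoring": "unacceptable",
    "biometric_identification": "high",
    "legal_decision_support": "high",
    "customer_chatbot": "limited",
    "spam_filter": "minimal",
}

def obligations_for(use_case: str) -> str:
    """Return the illustrative tier and obligation for a use case."""
    tier = USE_CASE_TIER.get(use_case)
    if tier is None:
        return f"{use_case}: needs case-by-case legal assessment"
    return f"{use_case}: {tier} risk -> {RISK_TIERS[tier]}"

print(obligations_for("legal_decision_support"))
```

The point of the sketch is simply that the obligation follows from the tier, so classifying each system correctly is the first compliance step.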
eDiscovery – New horizons and what the EU AI Act means
Bringing it closer to home, on a practical level for eDiscovery professionals this means that AI tools employed in document review and data analysis must comply with the EU AI Act’s provisions. Ensuring that these tools meet the necessary standards is crucial to avoid potential fines, which can reach up to €35 million or 7% of the company’s global annual turnover, whichever is higher. Much of this, however, depends on the jurisdiction in which the data sits and where the review takes place.
Digging deeper, document review and data analysis must comply with the EU AI Act, particularly when they involve high-risk AI systems. The Act sets strict requirements for data governance, transparency, and bias mitigation in AI-driven processes.
For document review, AI systems used in legal, financial, or regulatory settings must ensure data accuracy, fairness, and human oversight. The Act mandates quality standards for the data AI models are trained on, requiring datasets to be as error-free, representative, and unbiased as possible.
For data analysis, compliance depends on the risk level of the AI system. If the analysis involves automated decision-making in areas like employment, law enforcement, or healthcare, it falls under high-risk AI and must meet strict documentation and monitoring requirements.
Location, location, location
The location of the data and the location of the document review both matter under the EU AI Act, particularly for high-risk AI systems.
Data Location & Governance
- The Act requires AI systems to be trained on high-quality, representative, and unbiased datasets.
- If the data is stored outside the EU, it must comply with EU data protection laws, including GDPR.
- AI providers must ensure data governance practices that align with EU standards, regardless of where the data is processed.
Document Review Location
- If AI-assisted document review occurs outside the EU, it may still be subject to the Act if the AI system is used within the EU.
- Companies conducting cross-border document review must ensure compliance with EU transparency and accountability requirements.
- AI systems used for legal or regulatory document analysis must meet strict documentation and monitoring standards.
One would then ask: How do we ensure compliance with the EU AI Act across multiple jurisdictions?
The answer is multi-faceted and requires careful planning, especially for businesses operating internationally, either as end clients or eDiscovery service providers. Strategy must cover:
Understanding of Extra-Territorial Scope
- The EU AI Act applies beyond the EU if AI systems are used in the EU, even if developed or deployed elsewhere.
- Companies outside the EU must ensure their AI models comply if their outputs are intended for EU users.
Alignment with GDPR & Data Governance
- Since the EU AI Act aligns with GDPR, businesses must ensure data protection, transparency, and accountability.
- AI systems processing personal data must follow GDPR principles, including data minimization and user consent.
Classification of AI Systems by Risk Level
- Businesses must categorize AI systems under the Act’s risk-based framework.
- High-risk AI systems (e.g., those used in legal, financial, or healthcare sectors) require strict documentation and monitoring.
Establishing Cross-Border Compliance Teams
- Companies should create AI compliance teams to ensure consistent implementation across different jurisdictions.
- Legal, IT, and compliance experts should collaborate to align AI governance with EU and non-EU regulations.
Monitoring of Regulatory Updates
- The EU AI Act is evolving, and businesses must stay updated on new enforcement timelines.
- Some provisions may be delayed or adjusted, affecting compliance strategies.
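The multi-faceted strategy above can be loosely sketched as a gap report over a system profile. Every field name and rule here is a simplifying assumption for illustration, not a term or test defined by the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    # Illustrative fields, not terminology defined by the EU AI Act.
    name: str
    used_in_eu: bool              # extra-territorial scope: outputs used in the EU
    risk_tier: str                # e.g. "high", "limited", "minimal"
    gdpr_aligned: bool            # data minimization, consent, transparency
    documentation_complete: bool  # high-risk technical documentation
    monitoring_in_place: bool     # ongoing monitoring of the system

def compliance_gaps(profile: AISystemProfile) -> list[str]:
    """Return a non-exhaustive, illustrative list of open compliance items."""
    gaps = []
    if not profile.used_in_eu:
        return gaps  # sketch assumes the Act is out of scope in that case
    if not profile.gdpr_aligned:
        gaps.append("align data processing with GDPR principles")
    if profile.risk_tier == "high":
        if not profile.documentation_complete:
            gaps.append("complete high-risk technical documentation")
        if not profile.monitoring_in_place:
            gaps.append("establish ongoing monitoring")
    return gaps

review_tool = AISystemProfile(
    name="document-review-assistant",  # hypothetical system
    used_in_eu=True,
    risk_tier="high",
    gdpr_aligned=True,
    documentation_complete=False,
    monitoring_in_place=False,
)
print(compliance_gaps(review_tool))
```

In practice a cross-border compliance team would maintain a profile like this per AI system and per jurisdiction, revisiting it as enforcement timelines evolve.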
Data Sovereignty and Cross-Border Transfers with the US
Data sovereignty has become a focal point in the EU, especially following the invalidation of the EU-US Privacy Shield framework by the Court of Justice of the European Union in the Schrems II decision. Organizations transferring personal data outside the EU must now rely on mechanisms such as Standard Contractual Clauses (SCCs) and conduct Transfer Impact Assessments (TIAs) to ensure the adequate protection of data subjects’ rights.
Legal departments must be vigilant in mapping data flows and assessing third-country laws to maintain compliance. This is particularly pertinent for eDiscovery processes that involve cross-border data transfers, necessitating a thorough understanding of both EU regulations and the legal frameworks of recipient countries.
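By way of a rough sketch, the data-flow mapping described above can be reduced to a rule of thumb: identify where the data originates and lands, then derive the transfer mechanism. The country sets below are truncated, illustrative subsets only, and any real transfer requires a case-by-case legal assessment.

```python
# Rough sketch of mapping a cross-border transfer to a mechanism under
# GDPR's transfer rules. Country sets are illustrative subsets only.
EEA = {"DE", "FR", "NL", "IE", "ES"}   # illustrative subset of EEA states
ADEQUACY = {"UK", "JP", "CH"}          # illustrative subset of adequacy decisions

def transfer_mechanism(origin: str, destination: str) -> str:
    """Suggest (illustratively) which transfer mechanism applies."""
    if origin not in EEA:
        return "origin outside EEA: GDPR transfer rules may not apply"
    if destination in EEA:
        return "intra-EEA transfer: no additional mechanism needed"
    if destination in ADEQUACY:
        return "adequacy decision: transfer permitted without further safeguards"
    return "no adequacy decision: SCCs plus a Transfer Impact Assessment (TIA)"

print(transfer_mechanism("DE", "US"))
```

Even this toy version shows why mapping data flows matters: the same review workload can trigger very different obligations depending on where the data starts and ends up.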
GDPR Enforcement Intensifies
The General Data Protection Regulation (GDPR) continues to be rigorously enforced across the EU. In 2025, regulators have intensified their scrutiny, with significant fines imposed for non-compliance. Notably, the cumulative total of GDPR fines has reached approximately €5.88 billion, highlighting the importance of robust data protection measures.
Legal professionals must ensure that their eDiscovery practices align with GDPR requirements, including data minimization, purpose limitation, and the rights of data subjects. Failure to do so can result in substantial financial and reputational damage.
Strategic Adaptation for Legal Departments
To navigate this evolving landscape, legal departments should adopt a proactive approach:
- Audit AI Tools: Evaluate AI systems used in legal processes to ensure they meet the AI Act’s compliance standards.
- Enhance Data Governance: Implement comprehensive data governance frameworks that address data sovereignty concerns and facilitate compliance with cross-border transfer requirements.
- Strengthen GDPR Compliance: Regularly review and update data protection policies and procedures to align with GDPR obligations and mitigate enforcement risks.
By embracing these strategies, legal departments can not only achieve compliance but also leverage technological advancements to enhance efficiency and effectiveness in their operations.
Beyond the strict EU lens
The UK, EU, and US have taken distinct approaches to AI regulation, reflecting their priorities and governance styles. The outline below captures the key objectives and key differences between these three regulatory forces as they bear on eDiscovery.
United Kingdom (UK)
- The UK has adopted a pro-innovation approach, avoiding strict AI-specific laws. Instead, existing regulators (e.g., the Competition and Markets Authority and Financial Conduct Authority) oversee AI within their sectors.
- A Private Members’ Bill (Artificial Intelligence Regulation Bill) was reintroduced in March 2025, proposing a central AI authority, which would bring the UK closer to the EU’s approach.
- The UK government has emphasized flexibility, allowing businesses to innovate while ensuring AI aligns with ethical and safety standards.
European Union (EU)
- The EU AI Act (adopted in May 2024) is a comprehensive, risk-based framework that categorizes AI systems into prohibited, high-risk, and low-risk groups.
- The Act establishes national AI regulators in each member state and a central European AI Board to ensure compliance.
- The EU prioritizes human rights, systemic risks, and transparency, making its approach more prescriptive than the UK’s.
United States (US)
- The US has taken a decentralized approach, relying on sector-specific regulations rather than a single AI law.
- The Biden Administration’s Executive Order on AI focuses on AI safety, national security, and fairness, but enforcement varies across agencies.
- The US initially pursued AI regulation, but recent shifts have leaned toward deregulation, emphasizing innovation and competition.
Key Differences
- The EU AI Act is strict and centralized, while the UK favors flexibility and sector-led oversight.
- The US has no single AI law, relying on agency-specific rules and market-driven governance.
- The UK may align more closely with the EU if its proposed AI authority is indeed established.
For further information on the EU AI Act and its implications, refer to the official documentation here: EU AI Act – EUR-Lex. If you are interested in more in-depth discussions, contact Maribel, Melina, or the ACEDS UK Chapter.