
The EU’s Legal Tech Tipping Point – AI Regulation, Data Sovereignty, and eDiscovery in 2025


The Good, the Braver and the Curious. 

As we navigate through 2025, the European legal landscape is undergoing a significant transformation, particularly in the realms of artificial intelligence (AI) regulation and data sovereignty. These changes are reshaping how legal departments, and more specifically eDiscovery professionals, operate, compelling them to adapt to new compliance requirements and technological advancements.

Following on from our blog post on Navigating eDisclosure in the UK and Practice Direction 57AD, we now turn to AI regulation across the wider European spectrum, closing the post with a contrasting glance at the UK and the US.

AI Regulation: The EU AI Act’s Impact 

The European Union’s Artificial Intelligence Act (AI Act), which came into effect in August 2024, has introduced a comprehensive regulatory framework for AI systems.  

Before digging any further, it is important to acknowledge the challenging scope of any EU Act: it must always strike a balance between a unified approach and national variations. The EU AI Act is no exception. It establishes a unified regulatory framework for artificial intelligence across the European Union while also allowing for national variations in implementation. Consider how different EU jurisdictions may interpret or apply the Act differently:

  1. National Competent Authorities – Each EU member state will have its own national AI regulator, responsible for enforcing the Act within its jurisdiction. This means compliance approaches may vary depending on local enforcement priorities.
  2. Sector-Specific Regulations – Some countries may layer additional AI rules on top of the EU AI Act, particularly in industries like finance, healthcare, and defense, where national laws already impose strict oversight.
  3. Risk-Based Approach – The Act categorizes AI systems into prohibited, high-risk, and low-risk categories. However, member states may interpret risk differently, leading to stricter or more lenient enforcement in certain regions.
  4. Data Protection & Privacy Laws – While the EU AI Act aligns with GDPR, some countries, such as Germany, have stronger national privacy laws, which could lead to stricter AI governance in those jurisdictions.
  5. Innovation vs. Regulation Balance – Some EU countries, such as France and the Netherlands, may adopt a more innovation-friendly approach, focusing on AI development incentives, while others may prioritize consumer protection and ethical AI.

The legislation takes a sensible and comprehensive approach that reaches all legal entities. It categorizes AI applications by risk level (unacceptable, high, limited, and minimal) and imposes corresponding obligations on providers and users. High-risk AI systems, such as those used in legal decision-making or biometric identification, are subject to stringent requirements, including conformity assessments and transparency obligations.
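
To make the tiering concrete for an internal tool inventory, here is a minimal Python sketch. The tier names mirror the Act's categories described above; the obligation strings are illustrative placeholders (only the high-risk entries echo requirements mentioned in this post) rather than a statement of the Act's full requirements.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # e.g. legal decision-making, biometric identification
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative obligations per tier; replace with your own compliance mapping.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: ["conformity assessment", "transparency obligations", "human oversight"],
    RiskTier.LIMITED: ["transparency notices"],
    RiskTier.MINIMAL: ["no additional obligations"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the compliance checklist attached to a given risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```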

eDiscovery – New Horizons and What the EU AI Act Means

Bringing it closer to home, on a practical level for eDiscovery professionals this means that AI tools employed in document review and data analysis must comply with the EU AI Act's provisions. Ensuring that these tools meet the necessary standards is crucial to avoid potential fines, which can reach up to €35 million or 7% of the company's global annual turnover, whichever is higher. Much of this, however, turns on where the data sits and where the work is performed, a point we return to below.
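
As a back-of-the-envelope illustration of that ceiling, the maximum exposure is simply the larger of the two figures. The sketch below assumes a hypothetical turnover value; it illustrates the arithmetic only and says nothing about the fine a regulator would actually impose.

```python
def max_potential_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound described above: EUR 35 million or 7% of global annual
    turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Hypothetical company with EUR 2 billion in global annual turnover.
print(f"EUR {max_potential_fine(2_000_000_000):,.0f}")  # EUR 140,000,000
```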

Digging deeper, document review and data analysis must comply with the EU AI Act, particularly when they involve high-risk AI systems. The Act sets strict requirements for data governance, transparency, and bias mitigation in AI-driven processes. 

For document review, AI systems used in legal, financial, or regulatory settings must ensure data accuracy, fairness, and human oversight. The Act mandates quality standards for the data on which AI models are trained, including datasets that are error-free, representative, and unbiased.

For data analysis, compliance depends on the risk level of the AI system. If the analysis involves automated decision-making in areas like employment, law enforcement, or healthcare, it falls under high-risk AI and must meet strict documentation and monitoring requirements. 

Location, location, location

The location of the data and the location of the document review both matter under the EU AI Act, particularly for high-risk AI systems, as the sketch after the two lists below illustrates.

Data Location & Governance 

  • The Act requires AI systems to be trained on high-quality, representative, and unbiased datasets. 
  • If the data is stored outside the EU, it must comply with EU data protection laws, including GDPR. 
  • AI providers must ensure data governance practices that align with EU standards, regardless of where the data is processed. 

Document Review Location 

  • If AI-assisted document review occurs outside the EU, it may still be subject to the Act if the AI system is used within the EU.
  • Companies conducting cross-border document review must ensure compliance with EU transparency and accountability requirements. 
  • AI systems used for legal or regulatory document analysis must meet strict documentation and monitoring standards. 
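
Pulling the two lists together, the sketch below takes where the AI system is used, where the data is stored, and where the review is performed, and returns the checks flagged above. It is a rough triage aid, not legal advice, and the wording of each check is ours.

```python
def applicable_checks(ai_used_in_eu: bool,
                      data_stored_in_eu: bool,
                      review_in_eu: bool) -> list[str]:
    """Rough triage of the points listed above; a starting checklist only."""
    checks: list[str] = []
    if ai_used_in_eu:
        # The Act can reach the workflow even if the review itself happens outside the EU.
        checks.append("EU AI Act in scope: documentation and monitoring standards")
        checks.append("Data governance practices aligned with EU standards")
        if not review_in_eu:
            checks.append("EU transparency and accountability requirements for cross-border review")
    if not data_stored_in_eu:
        checks.append("GDPR compliance for data stored outside the EU")
    return checks

print(applicable_checks(ai_used_in_eu=True, data_stored_in_eu=False, review_in_eu=False))
```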

One would then ask: How do we ensure compliance with the EU AI Act across multiple jurisdictions? 

The answer is multi-faceted and requires careful planning, especially for businesses operating internationally, whether as end clients or as eDiscovery service providers. Any strategy must cover:

Understanding of Extra-Territorial Scope 

  • The EU AI Act applies beyond the EU if AI systems are used in the EU, even if developed or deployed elsewhere. 
  • Companies outside the EU must ensure their AI models comply if their outputs are intended for EU users.

Alignment with GDPR & Data Governance 

  • Since the EU AI Act aligns with GDPR, businesses must ensure data protection, transparency, and accountability.
  • AI systems processing personal data must follow GDPR principles, including data minimization and user consent.

Classification of AI Systems by Risk Level 

  • Businesses must categorize AI systems under the Act's risk-based framework.
  • High-risk AI systems (e.g., those used in legal, financial, or healthcare sectors) require strict documentation and monitoring.

Establishing Cross-Border Compliance Teams 

  • Companies should create AI compliance teams to ensure consistent implementation across different jurisdictions. 
  • Legal, IT, and compliance experts should collaborate to align AI governance with EU and non-EU regulations.

Monitoring of Regulatory Updates 

  • The EU AI Act is evolving, and businesses must stay updated on new enforcement timelines.
  • Some provisions may be delayed or adjusted, affecting compliance strategies. 

Data Sovereignty and Cross-Border Transfers with the US 

Data sovereignty has become a focal point in the EU, especially following the invalidation of the EU-US Privacy Shield framework by the Court of Justice of the European Union in the Schrems II decision. Organizations transferring personal data outside the EU must now rely on mechanisms such as Standard Contractual Clauses (SCCs) and conduct Transfer Impact Assessments (TIAs) to ensure the adequate protection of data subjects’ rights. 

Legal departments must be vigilant in mapping data flows and assessing third-country laws to maintain compliance. This is particularly pertinent for eDiscovery processes that involve cross-border data transfers, necessitating a thorough understanding of both EU regulations and the legal frameworks of recipient countries. 
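
One simple way to operationalise that data-flow mapping is to check each destination country against the European Commission's current list of adequacy decisions and, where no adequacy decision applies, fall back to SCCs plus a TIA. The placeholder country codes below are illustrative only and must be verified against the Commission's up-to-date list.

```python
# Placeholder set of destinations assumed to benefit from an EU adequacy decision.
# Illustrative only; verify against the European Commission's current list.
ADEQUACY_DECISIONS = {"GB", "JP", "CH"}

def transfer_safeguards(destination_country: str) -> list[str]:
    """Return the transfer safeguards suggested by the discussion above."""
    if destination_country in ADEQUACY_DECISIONS:
        return ["Transfer permitted under an adequacy decision (confirm it is still in force)"]
    return [
        "Standard Contractual Clauses (SCCs)",
        "Transfer Impact Assessment (TIA) of the recipient country's laws",
    ]

print(transfer_safeguards("US"))  # not in the placeholder set, so SCCs + TIA
```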

GDPR Enforcement Intensifies 

The General Data Protection Regulation (GDPR) continues to be rigorously enforced across the EU. In 2025, regulators have intensified their scrutiny, with significant fines imposed for non-compliance. Notably, the cumulative total of GDPR fines has reached approximately €5.88 billion, highlighting the importance of robust data protection measures. 

Legal professionals must ensure that their eDiscovery practices align with GDPR requirements, including data minimization, purpose limitation, and the rights of data subjects. Failure to do so can result in substantial financial and reputational damage. 

Strategic Adaptation for Legal Departments 

To navigate this evolving landscape, legal departments should adopt a proactive approach: 

  • Audit AI Tools: Evaluate AI systems used in legal processes to ensure they meet the AI Act's compliance standards (a minimal gap-analysis sketch follows this list).
  • Enhance Data Governance: Implement comprehensive data governance frameworks that address data sovereignty concerns and facilitate compliance with cross-border transfer requirements. 
  • Strengthen GDPR Compliance: Regularly review and update data protection policies and procedures to align with GDPR obligations and mitigate enforcement risks. 
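
Building on the tier-to-obligation mapping sketched earlier, the audit point can be made concrete as a simple gap analysis over a tool inventory. Every tool name, tier, and control string below is invented for illustration.

```python
# Hypothetical inventory: tool name -> (risk tier, controls already evidenced).
INVENTORY = {
    "ReviewAssistAI": ("high", {"transparency obligations"}),
    "DedupeBot": ("minimal", set()),
}

# Illustrative required controls per tier (high-risk entries echo this post).
REQUIRED = {
    "high": {"conformity assessment", "transparency obligations", "human oversight"},
    "limited": {"transparency notices"},
    "minimal": set(),
}

def audit_gaps(inventory: dict) -> dict:
    """Return, per tool, the obligations not yet evidenced."""
    return {
        name: REQUIRED[tier] - controls
        for name, (tier, controls) in inventory.items()
        if REQUIRED[tier] - controls
    }

print(audit_gaps(INVENTORY))
```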

By embracing these strategies, legal departments can not only achieve compliance but also leverage technological advancements to enhance efficiency and effectiveness in their operations. 

Beyond the strict EU lens 

The UK, EU, and US have taken distinct approaches to AI regulation, reflecting their priorities and governance styles. The outline below attempts to capture the key objectives of, and key differences between, the three governing forces in eDiscovery.

United Kingdom (UK) 

  • The UK has adopted a pro-innovation approach, avoiding strict AI-specific laws. Instead, existing regulators (e.g., the Competition and Markets Authority and Financial Conduct Authority) oversee AI within their sectors. 
  • A Private Members’ Bill (Artificial Intelligence Regulation Bill) was reintroduced in March 2025, proposing a central AI authority, which would bring the UK closer to the EU’s approach. 
  • The UK government has emphasized flexibility, allowing businesses to innovate while ensuring AI aligns with ethical and safety standards.

European Union (EU) 

  • The EU AI Act (adopted in May 2024) is a comprehensive, risk-based framework that categorizes AI systems into prohibited, high-risk, and low-risk groups. 
  • The Act establishes national AI regulators in each member state and a central European AI Board to ensure compliance. 
  • The EU prioritizes human rights, systemic risks, and transparency, making its approach more prescriptive than the UK’s. 

United States (US) 

  • The US has taken a decentralized approach, relying on sector-specific regulations rather than a single AI law. 
  • The Biden Administration’s Executive Order on AI focuses on AI safety, national security, and fairness, but enforcement varies across agencies. 
  • The US initially pursued AI regulation, but recent shifts have leaned toward deregulation, emphasizing innovation and competition.

Key Differences 

  • The EU AI Act is strict and centralized, while the UK favors flexibility and sector-led oversight.
  • The US has no single AI law, relying on agency-specific rules and market-driven governance.
  • The UK may align more closely with the EU if its proposed AI authority is indeed established. 

For further information on the EU AI Act and its implications, refer to the official documentation here: EU AI Act – EUR-Lex. If you are interested in more in-depth discussions, contact Maribel, Melina, or the ACEDS UK Chapter.

Maribel Rivera
VP, Strategy and Client Engagement at ACEDS
As Vice President of Strategy and Client Engagement at ACEDS, Maribel is responsible for local chapter, membership, event management, and strategic partner engagement. A seasoned professional who has helped brands and businesses connect with their audiences and achieve their goals, her breadth of experience, strategic and creative abilities unlock innovation and bring business ideas to life. Prior to ACEDS, she consulted for a variety of private clients in technology, education, and recruiting, crafting and leading marketing and operations solutions for small and mid-sized companies. She also worked as director of sales operations for Fronteo USA Inc. An active member of Women in eDiscovery and ARMA Metro NYC, she also devotes time to charitable work. She speaks regularly on marketing and diversity and inclusion. When she isn’t working, Maribel enjoys traveling, reading, education and working out. Reach her at [email protected].
Melina Efstathiou
LDI Architect - Disputes & Investigations at Legal Data Intelligence
Melina Efstathiou served as the Acting Head of Litigation Technology at Eversheds Sutherland for six years, with jurisdiction over EMEA and APAC. Among a vast array of legal cases, she led complex investigations, cross-jurisdictional disclosure exercises, public inquiries, arbitration cases, and compliance reviews. With over 15 years of experience in legal technology, Melina specializes in Artificial Intelligence Strategy, Data and Information Governance, Data Privacy, and Forensic Technology, covering the entire EDRM and everything west and east of it. She is an expert in navigating high-stakes legal matters with precision and also has a wealth of experience in legal project management. Before transitioning into legal tech, Melina was a criminal defense solicitor, specializing in financial crime and complex fraud investigations. Passionate about innovation, she leverages cutting-edge technology to enhance legal processes and efficiency.
