Olga V. Mack: Mapping Progress: How Attorneys Can Prepare To Advise Clients On AI Compliance

Extract from Olga V. Mack’s article “Mapping Progress: How Attorneys Can Prepare To Advise Clients On AI Compliance”

2023 is seen by some as the year when the Wild West of the AI frontier reigned. If that’s the case, then we can look to 2024 and beyond as the time when a new sheriff came to town: the EU AI Act. The excitement and frenzy that came with OpenAI’s ChatGPT was soon tempered by governing bodies considering what frameworks might be needed to ensure the safe and ethical use of these AI systems. Efforts to finalize and pass the EU AI Act were well under way before the hype around generative AI took off, but that same hype likely proved pivotal in passing such a wide-sweeping piece of legislation.

Kassi Burns, senior attorney at King & Spalding, author of several publications on AI (including some co-authored with me), and independent producer of her podcast on AI and emerging technologies (Kassi &), joins me to dive into what attorneys need to know about AI governance so they don’t get lost in the dust.

Olga V. Mack: What are key elements that every attorney here in the U.S. should know about the EU AI Act?

Kassi Burns: First, the EU AI Act focuses on protecting the health, safety, and fundamental rights of individuals against the harmful effects of AI systems. It aims to accomplish this through a risk classification pyramid for AI systems, with varying degrees of requirements depending on the level of risk. Those levels are: unacceptable risk, high risk, transparency risk, and permitted/no risk.

