Brittany Roush, Relativity: The White House Addresses Responsible AI: Impacts on the Public Sector


Extract from Brittany Roush’s article “The White House Addresses Responsible AI: Impacts on the Public Sector”

When we first started writing about the “Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence” and its impact on the e-discovery industry, we saved the federal government perspective for last. Things were moving rapidly, and we knew the new year would bring new developments.

This article covers those developments in the months since the EO was signed. Let’s dig in.

Reporting Requirements Come Due

In the EO, the Commerce Department was given a deadline of January 28, 2024, to devise a plan requiring companies to report specific details about advanced AI models under development to US officials. The order specified that these details should encompass the computing power employed, the ownership of the data provided to the model, and safety testing information.

Without getting too technical, the Executive Order set a threshold for reporting based on the computing power that goes into training a large language model. While Google and OpenAI have not yet disclosed the computing power used to train their large language models, it is widely thought to be just under the current threshold. To many, these thresholds seem arbitrary and meaningless, especially as they aren’t benchmarked against real models—but Aron Ahmadia, head of applied science at Relativity, takes a slightly different view: “Another interpretation of the EO is not that the White House has set meaningless thresholds, but that they specifically intend to set the threshold at Google’s current system and their next model.”
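To make the threshold mechanics concrete, here is a minimal sketch of how such a compute-based reporting trigger works. The 10^26-operation figure is the initial threshold set in Section 4.2 of the Executive Order for models trained on general data; the function and variable names below are hypothetical, not part of any official tooling.

```python
# Illustrative sketch of the EO's compute-based reporting trigger.
# EO 14110, Sec. 4.2, sets an initial reporting threshold of 10^26
# integer or floating-point operations used in training.
# Names here are hypothetical, chosen for clarity.

EO_REPORTING_THRESHOLD_OPS = 1e26


def must_report(training_ops: float) -> bool:
    """Return True if a model's training compute exceeds the EO threshold."""
    return training_ops > EO_REPORTING_THRESHOLD_OPS


# A model trained with ~9e25 operations ("just under the threshold")
# would not trigger the reporting requirement, while 2e26 would.
print(must_report(9e25))  # False
print(must_report(2e26))  # True
```

This is why the exact placement of the threshold matters: a frontier model trained just below 10^26 operations falls entirely outside the reporting requirement, while the next scaled-up model would fall inside it.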

Read more here