Doug Austin: OpenAI is Dissolving Team Focused on Long-Term AI Risks


Extract from Doug Austin’s article “OpenAI is Dissolving Team Focused on Long-Term AI Risks”

Following OpenAI co-founder Ilya Sutskever's resignation last week, CNBC is reporting that OpenAI is dissolving the team he co-led, which focused on long-term AI risks.

In the report (OpenAI dissolves team focused on long-term AI risks, less than one year after announcing it, written by Hayden Field and available here), CNBC states that OpenAI has disbanded its Superalignment team, which focused on the long-term risks of artificial intelligence, just one year after the company announced the group. According to a person familiar with the situation, who spoke on condition of anonymity, some of the team members are being reassigned to multiple other teams within the company.

OpenAI's Superalignment team, announced last year, focused on "scientific and technical breakthroughs to steer and control AI systems much smarter than us." At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years.

The news comes days after both team leaders, Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup. Leike on Friday wrote that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

Read more here