Employee criticisms & resignations prompt OpenAI to announce safety committee


OpenAI confirmed on Tuesday it has established a safety and security committee after employees hit back at the company for dissolving an AI safety team.

In a statement shared on its blog, OpenAI confirmed its new “Safety and Security Committee” would be led by CEO Sam Altman and three other board members.

The past month has seen the resignation of two high-profile executives, including Jan Leike, a safety researcher at OpenAI.

Leike quit his role after criticizing the company for underinvesting in AI safety work, claiming “safety culture and processes have taken a backseat to shiny products” and that tensions with OpenAI’s leadership had “reached a breaking point.”

OpenAI earlier dismantled its “superalignment” team, which was tasked with ensuring OpenAI’s products and technologies meet human needs and priorities.

The company argued disbanding the team and distributing its members across the company would better achieve the team’s purpose.

However, employees on the team did not appear to share the sentiment. Ilya Sutskever, another high-profile executive who worked on the superalignment team, also quit his role earlier in May.

OpenAI, apparently keen to curb criticism from employees and ensure safety concerns are addressed, has also appointed technical and policy experts to the committee, including Aleksander Madry, Head of Preparedness; Lilian Weng, Head of Safety Systems; John Schulman, Head of Alignment Science; Matt Knight, Head of Security; and Jakub Pachocki, Chief Scientist.

In a highly competitive market for AI talent, the loss of two leading employees appears to have prompted the company to take more decisive action on the safety issues staff are concerned about.

In the blog post, OpenAI said the committee’s first task will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days, before sharing recommendations with the full OpenAI Board.

Following that review, OpenAI will share public updates on the recommendations it adopts.

The tech company also says it will “retain and consult with other safety, security, and technical experts to support this work, including former cybersecurity officials.”
