OpenAI staff warn against culture of risk and retaliation

June 5, 2024

A group of current and former OpenAI employees issued a public letter warning that the company and its competitors are developing artificial intelligence with unnecessary risk, without adequate oversight, and while silencing employees who might witness irresponsible activity.

Risks of AI development

The letter highlights several risks associated with the current development of artificial intelligence, ranging from the further entrenchment of existing inequalities, to manipulation and disinformation, to the loss of control over autonomous AI systems, which could ultimately result in human extinction. “As long as there is no effective government oversight of these companies, current and former employees are among the few people who can hold them accountable,” the letter reads.

Call for whistleblower protection

The letter calls on not just OpenAI but all AI companies to commit to not punishing employees who speak out about their activities. It also calls on companies to create “verifiable” ways for employees to provide anonymous feedback about those activities. “Ordinary whistleblower protections are insufficient because they focus on illegal activities, while many of the risks we are concerned about are not yet regulated,” the letter says.

Criticism of OpenAI's policies

OpenAI came under criticism last month after a Vox article revealed that the company had threatened to claw back employees' vested equity if they did not sign nondisclosure agreements prohibiting them from criticizing the company or even mentioning the existence of such an agreement. OpenAI's CEO, Sam Altman, subsequently said on X that he had not been aware of such provisions and that the company had never clawed back anyone's vested equity. Altman also said the clause would be removed, leaving employees free to speak out.

Changes to safety management

OpenAI has also recently changed its approach to safety management. Last month, the OpenAI research group responsible for assessing and countering the long-term risks posed by the company's more powerful AI models was effectively disbanded after several prominent figures departed and the team's remaining members were absorbed into other groups. A few weeks later, the company announced that it had created a Safety and Security Committee, led by Altman and other board members.

Reactions from stakeholders

The signatories to the letter include people who worked on safety and governance at OpenAI, current employees who signed anonymously, and researchers who now work at competing AI companies. The letter was also endorsed by several prominent AI researchers, including Geoffrey Hinton and Yoshua Bengio, who both won the Turing Award for their pioneering AI research, and Stuart Russell, a leading expert on AI safety.

Former employees who signed the letter include William Saunders, Carroll Wainwright, and Daniel Ziegler, all of whom worked on AI safety at OpenAI. “The general public is currently underestimating the speed at which this technology is evolving,” says Jacob Hilton, a researcher who previously worked on reinforcement learning at OpenAI and left the company more than a year ago to pursue a new research opportunity. Hilton says that while companies like OpenAI commit to building AI safely, there is little oversight to ensure that actually happens.

“The protection we're asking for is intended to apply to all leading AI companies, not just OpenAI,” he says.

“I left because I lost faith that OpenAI would act responsibly,” says Daniel Kokotajlo, a researcher who previously worked on AI governance at OpenAI. “Things happened that I think should have been made public,” he adds, without giving specific details. Kokotajlo says the letter's proposal would provide greater transparency, and he believes there is a good chance that OpenAI and others will reform their policies given the negative response to the news about nondisclosure agreements. He also says that AI is evolving at a worrying rate. “The stakes will be much, much, much higher in the coming years,” he says, “or so I believe.”