OpenAI says board can overrule Sam Altman’s decisions on new AI
OpenAI’s board of directors can overrule CEO Sam Altman on the release of new artificial intelligence models, the latest sign the company is prioritizing risk mitigation in the wake of the recent drama surrounding his firing.
The Silicon Valley company released a new set of guidelines on Monday detailing how it plans to minimize the potential dangers posed by its newest AI models.
“The study of frontier AI risks has fallen far short of what is possible and where we need to be,” the company said in a statement titled “Preparedness.”
OpenAI said its new “preparedness framework” would include processes that “track, evaluate, forecast, and protect against catastrophic risks posed by increasingly powerful models.”
The preparedness team will be tasked with gauging the risks that frontier AI models pose in cybersecurity as well as chemical, nuclear and biological threats.
Aleksander Madry, an MIT professor who is heading the preparedness group, told Bloomberg News that his team will create monthly reports that will be sent to an internal safety advisory group.
The advisory group will consider the findings of Madry’s team and then send its own recommendations to Altman and the board.
Last month, Altman, 38, was fired by the company’s board of directors — only to be reinstated as CEO days later in the face of a widespread employee revolt.
Before his firing, Altman was reportedly at odds with the board over the rapid pace at which OpenAI was commercializing its products, chief among them ChatGPT.
Members of the board reportedly grew concerned over whether Altman was sufficiently mindful of the risks posed by AI, including potential threats to human jobs and the spread of misinformation.
Senior employees at OpenAI complained to the company’s former board of directors that Altman was “psychologically abusive,” The Washington Post reported, citing two people with knowledge of the board’s thinking.
Altman also was accused of “pitting employees against each other in unhealthy ways.”
Aside from the staff complaints, the sources said OpenAI’s board felt that Altman lied to them as part of an effort to oust Helen Toner, a board member and academic focused on AI safety.
Altman reportedly soured on Toner after she contributed to a paper that criticized OpenAI over the release of ChatGPT, arguing the launch accelerated the AI race at the expense of safety.
The potential dangers posed by AI have been a topic of concern among Silicon Valley observers and tech entrepreneurs.
Earlier this year, Elon Musk, a co-founder of OpenAI who left the firm after losing a power struggle, and scores of other tech executives signed an open letter urging a pause in research and development of AI due to fears over “profound risks to society and humanity.”