It’s the news of the weekend. And it’s no wonder. We’re talking about the most influential person in the tech world of the past year, Sam Altman, and the most famous artificial intelligence tool on the planet, ChatGPT. Few would have imagined that the CEO of such a successful company could be fired overnight.
The sudden and mysterious removal of Sam Altman, CEO of OpenAI, by the company’s board of directors, shocked the world of technology and triggered a frenzied guessing game about the reasons behind the downfall of one of the industry’s biggest stars, at a time when everything seemed to be going in his favor.
As The New York Times notes, no one knows for sure what happened. In a blog post published on Friday, the company stated that Altman “was not consistently candid in his communications” with the board but did not provide further details.
Lack of transparency and working in the shadows
A company-wide meeting held on Friday afternoon at OpenAI didn’t reveal much more. According to internal sources, Ilya Sutskever, the company’s chief scientist and a board member, defended the dismissal.
Sutskever went further, arguing that the dismissal was necessary to protect OpenAI’s mission of making artificial intelligence beneficial for humanity.
Mr. Altman did not comment on the exact circumstances of his departure on Friday. However, Greg Brockman (co-founder and president of OpenAI, who resigned on Friday in solidarity with the ousted CEO) issued a statement saying that both were “shocked and saddened by what the board did today.”
There will be a lot of palace intrigue in the coming days as the whole story unfolds. But some things are already clear.
What we do know about the dismissal
Above all, the dismissal was made possible by OpenAI’s unusual corporate governance structure. OpenAI originated in 2015 as a nonprofit organization and in 2019 established a capped-profit subsidiary, a novel arrangement in which investor returns are limited to a certain multiple of their initial investment.
However, the company retained the nonprofit’s mission and granted the nonprofit’s board the power to govern the activities of the capped-profit entity, including the dismissal of the CEO.
Unlike other tech founders who maintain control through dual-class stock structures, Altman doesn’t directly own any shares of OpenAI.
OpenAI’s board has several distinctive features. It is small (six members before Friday) and includes AI experts who hold no company shares.
Its directors are not tasked with maximizing shareholder value, as most corporate boards are; instead, they have a fiduciary duty to create safe AGI (artificial general intelligence) “that is broadly beneficial.”
At least two board members, Tasha McCauley and Helen Toner, are associated with the Effective Altruism movement, a utilitarian-inspired group that has propelled research on AI safety and raised concerns that powerful AI systems could potentially lead to human extinction.
Another board member, Adam D’Angelo, serves as the CEO of Quora, a question-and-answer website.
Some of Altman’s friends and allies accused these board members of executing a “coup” on Friday. However, it’s still unclear which board members voted to oust Altman and what motivated their actions.
What we also know about Altman’s dismissal is its potential to shake up the entire tech industry. Altman was one of Silicon Valley’s most well-connected executives, thanks to his years leading the startup accelerator Y Combinator. His connections allowed OpenAI to forge strong ties with other tech companies.
Microsoft, in particular, heavily invested in OpenAI, injecting over $10 billion into the company and providing much of the technical infrastructure relied upon by products like ChatGPT.
The future of AI is the future of humanity
It’s almost certain that Altman’s ousting will fuel the culture war within the artificial intelligence industry, between those who believe AI should progress faster and those who think it should slow down to prevent potentially catastrophic harm.
This debate, sometimes framed as “accelerationists” versus “catastrophists,” has intensified in recent months as regulators have begun to scrutinize the AI industry and the technology has grown more powerful.
Some prominent accelerationists have accused industry safety advocates of inflating the risks of AI in order to entrench their own positions.
Safety advocates, on the other hand, have sounded alarms that OpenAI and other companies are advancing too quickly in building powerful AI systems and are ignoring voices of caution.
And some skeptics have accused these companies of stealing copyrighted works from artists, writers, and others to train their models.
Sam Altman always strove to strike a balance between optimism and concern, making clear that he believed artificial intelligence would ultimately benefit humanity while also agreeing that it needed guardrails and thoughtful design to keep it safe.
None of this is necessarily related to why Mr. Altman was ousted from his company. But it’s certainly a sign of an impending battle.