Image description: A series of featureless figures hovering over a densely populated conference. End ID

Microsoft Scores Temporary Coup at OpenAI

The CEO of tech’s $80 billion startup is pushed out by his own board. A misguided narrative of “accelerationism vs. altruism” delivers its experts to Microsoft on a silver platter.

Nov 29, 2023

Sam Altman, the CEO and co-founder of OpenAI, the organizational brains behind the ChatGPT we know and love, was abruptly pushed out of his own company by a nonprofit board on Nov. 17. After a “deliberative review process,” the board cryptically determined that Altman was “not consistently candid” in his communications with the board and the broader OpenAI team.
The phrase begs for broader context and a better explanation. It says nothing and everything: something went down, but we don’t want to talk about it. Effective immediately, Chief Technology Officer Mira Murati was appointed interim CEO. OpenAI exists in an awkward, precarious state: it is nominally a nonprofit, but it also has responsibilities to earn profits for investors.
OpenAI has been a breakout success up to this point; ChatGPT is now a household name amongst students and professors. Last month, the company was in talks to become the most valuable startup in San Francisco, at $80 billion or more, in a deal with Thrive Capital. Today, an overwhelming majority of the 770 OpenAI employees threatened to follow Altman to Microsoft, leaving the startup itself as a shell, should the board that fired Altman not resign.
Ultimately, amidst these stakes, Sam Altman is set to return as CEO, an end to the rollercoaster. But the roots of this saga are worth exploring further. Many members of OpenAI’s board are Effective Altruists, part of a movement focused on the consequences of “artificial general intelligence.” EA proponents typically support a “longtermist” approach to technological development that sees the advancement of AI as a potential existential risk.
Debates over AI safety within the board likely contributed to OpenAI’s shakeup. The consequences of a rogue AI cannot be taken lightly, and OpenAI has long balanced a dual mandate of profit and “keeping powerful AI safe and broadly beneficial.” But legitimate concerns about the rapid development of AI have been overshadowed by doomerist thinking from figures like Eliezer Yudkowsky, who see rogue AI-fueled extinction as the likely outcome.
This is not the first time Effective Altruism has come under intense scrutiny; see Sam Bankman-Fried. But the episode highlights the increasingly inseparable destructive elements of the EA movement, whose worthier tenets are hardly novel. EA leaders may espouse “earning to give,” but they draw no distinction when that earning comes from defrauding investors, as in the case of FTX.
Weakening OpenAI may have served the board members’ myopic vision of the “common good,” but it only stands to tar all AI safety research with the same brush. “E/acc,” or effective accelerationism, has risen as an innovation-focused retort to EA practitioners’ concerns. Blind spots have been met with more blind spots, with these “accelerationists” using ad hominem rhetoric to paint AI safety supporters as weak and effeminate.
It appeared, temporarily, that the majority of OpenAI’s employees were set to join Altman in heading a new AI division at Microsoft. Microsoft had previously invested $13 billion in the startup for an effective 49% stake. The wholesale absorption of OpenAI and its staff would have raised that stake to an effective 100%, without Microsoft handling a single acquisition package or antitrust lawsuit.
How this will continue to shape the future of AI is unclear, though it is certain that EA is far from a flawless movement. Silicon Valley’s moral compass faces further turbulent tests. Will AI development follow an ever more commercialized, profit-centric approach, or will it uphold the balance set out in OpenAI’s founding charter? The infighting caused by well-intentioned but dogmatic practitioners of a seemingly flawless movement will help steer that direction. OpenAI’s unfolding drama reflects a balance between innovation, ethics, and corporate influence that is coming under fire.
Ethan Fulton is Editor-in-Chief. Email them at feedback@thegazelle.org.