AI expert Gary Marcus has been following the turmoil at OpenAI with interest this weekend. And, as he wrote Sunday, he “feels sick to his stomach.”
On Friday, OpenAI’s board shocked investors and employees alike by firing CEO Sam Altman. But it now appears likely that Altman will return to his post, board members will be pushed out, and the board’s decision will be undone, according to Bloomberg.
Marcus wrote about the situation on his Substack, sharing an analysis written by Fortune’s Jeremy Kahn earlier in the day. Whatever reasons the board had (its stated reasons were vague), it’s not a good sign if the board is easily overpowered, believes Marcus, an emeritus professor of psychology and neural science at New York University and host of the Humans vs. Machines podcast.
OpenAI began as a nonprofit in 2015. Four years later, shortly after becoming CEO, Altman created a commercial arm, which was governed by the nonprofit parent. Altman, unusually, held no equity in the company. That lessened his influence with the board, which, as he frequently noted, had the power to fire him.
“No one person should be trusted here,” he told Bloomberg this summer. “The board can fire me. I think that’s important.”
In OpenAI’s unusual structure, a board “with no financial interest was supposed to look out for humanity,” Marcus wrote. “The spirit of the original arrangement was that everything that the for-profit did was supposed to be in the service of the non-profit.”
Indeed, the board was meant to have control over the capped-profit company, with an eye on the broader mission: to ensure that safe artificial general intelligence (AGI) “is developed and benefits all of humanity.” AGI refers to a system that can match humans when faced with an unfamiliar task.
So even if it’s Microsoft’s big money and computing resources that keep OpenAI going—the software giant has committed at least $13 billion to OpenAI but has so far delivered only part of it—the nonprofit board ostensibly was still in control.
But as Kahn wrote, “the structure was basically a time bomb. By turning to a single corporate entity, Microsoft, for the majority of the cash and computing power OpenAI needed to achieve its mission, it was essentially handing control to Microsoft, even if that control wasn’t codified in any formal governance mechanism.”
When faced with the potential financial repercussions of Altman’s removal, “the nominally subordinate for-profit (both employees and investors) quickly set to work to push out the board and to undo its decisions,” Marcus wrote. “All signs are that those financially-interested stakeholders will quickly emerge victorious.”
Bloomberg and others have reported that investors are working to reinstate Altman, and that his return could spell changes to the board that fired him. Altman has told investors that if he does return to OpenAI, he wants a new board and governance structure, according to the Wall Street Journal.
“The tail thus appears to have wagged the dog—potentially imperiling the original mission, if there was any substance at all to the Board’s concerns,” wrote Marcus. “If you think that OpenAI has a shot, eventually, at AGI, none of this bodes particularly well.”