It’s not every day that the most talked-about company in the world sets itself on fire. Yet that seems to be what happened Friday, when OpenAI’s board announced that it had terminated its chief executive, Sam Altman, because he had not been “consistently candid in his communications with the board.” In corporate-speak, those are fighting words about as barbed as they come: They insinuated that Altman had been lying.
The sacking set in motion a dizzying sequence of events that kept the tech industry glued to its social feeds all weekend: First, it wiped $48 billion off the valuation of Microsoft, OpenAI’s biggest partner. Speculation about malfeasance swirled, but employees, Silicon Valley stalwarts and investors rallied around Altman, and the next day talks were being held to bring him back. Instead of some fiery scandal, reporting indicated that this was at its core a dispute over whether Altman was building and selling AI responsibly. By Monday, talks had failed, a majority of OpenAI employees were threatening to resign, and Altman announced he was joining Microsoft.
All the while, something else went up in flames: the fiction that anything other than the profit motive is going to govern how AI gets developed and deployed. Concerns about “AI safety” are going to be steamrolled by the tech giants itching to tap into a new revenue stream every time.
It’s hard to overstate how wild this whole saga is. In a year when artificial intelligence has towered over the business world, OpenAI, with its ubiquitous ChatGPT and DALL-E products, has been the center of the universe. And Altman was its world-beating spokesman. In fact, he’s been the most prominent spokesperson for AI, period.
For a high-flying company’s own board to dump a CEO of such stature on a random Friday, with no warning or previous sign that anything serious was amiss — Altman had just taken center stage to announce the launch of OpenAI’s app store at a much-watched conference — is almost unheard of. (Many have compared the events to Apple’s famous 1985 canning of Steve Jobs, but even that was after the Lisa and the Macintosh failed to live up to sales expectations, not, like, during the peak success of the Apple II.)
So what on earth is going on?
Well, the first thing that’s important to know is that OpenAI’s board is, by design, constituted differently from that of most corporations: It is the board of a nonprofit organization structured to safeguard the development of AI rather than to maximize profitability. Most boards are tasked with ensuring their CEOs are best serving the financial interests of the company; OpenAI’s board is tasked with ensuring its CEO is not being reckless with the development of artificial intelligence and is acting in the best interests of “humanity.” This nonprofit board controls the for-profit company OpenAI.
Got it?
As Jeremy Kahn put it at Fortune, “OpenAI’s structure was designed to enable OpenAI to raise the tens or even hundreds of billions of dollars it would need to succeed in its mission of building artificial general intelligence (AGI) … while at the same time preventing capitalist forces, and in particular a single tech giant, from controlling AGI.” And yet, Kahn notes, as soon as Altman inked a $1-billion deal with Microsoft in 2019, “the structure was basically a time bomb.” The ticking got louder when Microsoft sank $10 billion more into OpenAI in January of this year.
We still don’t know what exactly the board meant by saying Altman wasn’t “consistently candid in his communications.” But the reporting has focused on the growing schism between the science arm of the company, led by co-founder, chief scientist and board member Ilya Sutskever, and the commercial arm, led by Altman.
We do know that Altman has been in expansion mode lately: seeking billions in new investment from Middle Eastern sovereign wealth funds to start a chip company to rival AI chipmaker Nvidia, and a billion more from SoftBank for a venture with former Apple design chief Jony Ive to develop AI-focused hardware. And that’s on top of launching the aforementioned OpenAI app store, which would allow third-party developers to build custom AIs and sell them on the company’s marketplace.
The working narrative now seems to be that Altman’s expansionist mind-set and his drive to commercialize AI — and perhaps there’s more we don’t know yet on this score — clashed with the faction led by Sutskever, who had grown concerned that the company he co-founded was moving too fast. At least two of the board’s members are aligned with the so-called effective altruism movement, which sees AI as a potentially catastrophic force that could destroy humanity.
The board decided that Altman’s behavior violated its mandate. But its members also (somehow, wildly) seem to have failed to anticipate how much blowback they would get for firing him. And that blowback has come at gale-force strength; OpenAI employees and Silicon Valley power players such as Airbnb’s Brian Chesky and Eric Schmidt spent the weekend “I am Spartacus”-ing Altman.
It’s not hard to see why. OpenAI had been in talks to sell shares to investors at an $86-billion valuation. Microsoft, which has invested over $11 billion in OpenAI and now uses OpenAI’s tech on its platforms, was apparently informed of the board’s decision to fire Altman five minutes before the wider world. Its leadership was furious and seemingly led the effort to have Altman reinstated.
But beyond all that lurked the question of whether there should really be any safeguards on the AI development model favored by Silicon Valley’s prime movers; whether a board should be able to remove a founder it believes is not acting in the interest of humanity — which, again, is its stated mission — or whether the company should seek relentless expansion and scale.
See, even though the OpenAI board has quickly become the de facto villain in this story, as the venture capital analyst Eric Newcomer pointed out, we should maybe take its decision seriously. Firing Altman was likely not a call the board made lightly, and the fact that its members are scrambling now, because the call turned out to be an existential financial threat to the company, does not mean their concerns were baseless. Far from it.
In fact, however this plays out, it has already succeeded in underlining how aggressively Altman has been pursuing business interests. For most tech titans, this would be a “well, duh” situation, but Altman has fastidiously cultivated the aura of a burdened guru warning the world of great disruptive changes. Recall those sheepdog eyes at the congressional hearings a few months back, when he begged for the industry to be regulated lest it become too powerful? Altman’s whole shtick is that he’s a weary messenger seeking to prepare the ground for responsible uses of AI that benefit humanity — yet he’s circling the globe lining up investors wherever he can, doing all he seemingly can to capitalize on this moment of intense AI interest.
To those who’ve been watching closely, this has always been something of an act — weeks after those hearings, after all, Altman fought real-world regulations that the European Union was seeking to impose on AI deployment. And we forget that OpenAI was originally founded as a nonprofit that claimed to be bent on operating with the utmost transparency — before Altman steered it into a for-profit company that keeps its models secret.
Now, I don’t believe for a second that AI is on the cusp of becoming powerful enough to destroy mankind — I think that’s some in Silicon Valley (including OpenAI’s new interim CEO, Emmett Shear) getting carried away with a science-fictional sense of self-importance, and a uniquely canny marketing tactic — but I do think there is a litany of harms and dangers that AI can cause in the shorter term. And AI safety concerns getting so thoroughly rolled at the snap of the Valley’s fingers is not something to cheer.
You’d like to believe that executives at AI-building companies who think there’s significant risk of global catastrophe here couldn’t be sidelined simply because Microsoft lost some stock value. But that’s where we are.
Sam Altman is first and foremost a pitchman for the year’s biggest tech products. No one’s quite sure how useful or interesting most of those products will be in the long run, and they’re not making a lot of money at the moment — so most of the value is bound up in the pitchman himself. Investors, OpenAI employees and partners such as Microsoft need Altman traveling the world telling everyone how AI is going to eclipse human intelligence any day now much more than they need, say, a high-functioning chatbot.
Which is why, more than anything, this winds up being a coup for Microsoft. Now it has Altman in-house, where he can cheerlead for AI and make deals to his heart’s content. It still has OpenAI’s tech licensed, and OpenAI will need Microsoft more than ever.
Now, it may yet turn out that this was nothing but a power struggle among board members, a coup that went wrong. But if it turns out that the board had real worries and articulated them to Altman to no avail, then no matter how you feel about the AI safety issue, we should be concerned about this outcome: a further consolidation of power in one of the biggest tech companies and less accountability for the product than ever.
If anyone still believes a company can steward the development of a product like AI without taking marching orders from Big Tech, I hope they’re disabused of this fiction by the Altman debacle. The reality is, no matter what other input may be offered to the company behind ChatGPT, the output will be the same: Money talks.