Why all the coverage of OpenAI's management shenanigans?

So I’m pretty bemused by the intense coverage of the goings-on at OpenAI, both in the technology press and the mainstream press (I was getting push notifications about it on the BBC app, FFS!).

I’ve been working in tech for 25 years, at huge multinationals and startups, and this kind of thing seems completely routine to me. I can’t tell you how many times I’ve heard our CEO give a rousing speech at an all-hands about how we are going to move forward together, cos we are a great team, with a wonderful culture, yada, yada, yada. Only to jump ship or be fired out of the blue shortly afterwards. Why am I hearing about this like it was a coup d’état in a major country and not a routine (or at least not completely extraordinary) management shake-up at a medium-sized company?

It’s not like OpenAI are a particularly large company; we aren’t talking Apple or Google here. I get that they are the “tip of the spear” of the AI zeitgeist with ChatGPT, but still, they made about a quarter of the revenue of the company I work for, and the BBC isn’t sending me push notifications with a blow-by-blow of my company’s management reshuffles.

The press is in a “ChatGPT is Skynet and will kill us all” panic.

OpenAI has a valuation of $90 billion and has 770 employees. Its product had the fastest adoption of any tech release in history. It’s no longer a small startup (not that it ever was - it was started with $100 million from Elon Musk).

Furthermore, there is a very important factor in all of this: the realization that the AI genie is out of the bottle, and people will be powerless to stop it. The board tried to fire Sam Altman because, in their opinion, he was moving too fast and too recklessly and not keeping them in the loop, or something. The result was that the people who instigated the firing are gone, and Sam Altman is in an even stronger position than before. And if the board had ‘won’, Sam was going to take just about the whole company over to Microsoft and continue what he’s been doing.

So this turned out to be a big deal, and quite newsworthy.

Well, if the rumor about one factor in the firing being a letter about an AI “breakthrough” that “could threaten humanity” is true, that’s a pretty good reason to pay attention, I’d say…

ChatGPT is likely, by far, the best known AI tool among the general public. It got a great deal of press in the weeks and months following its launch at the end of November, 2022 (i.e., just under one year ago), and, as @Sam_Stone says, it saw incredibly rapid growth – as per Wikipedia:

Even before the events of the past week, AI continues to get a ton of press coverage (and not just in the business or tech press), and ChatGPT is very frequently mentioned in those stories.


Even if this kind of AI panic hype is true, that still doesn’t make the comings and goings at the management level newsworthy. If there is actually something concerning about ChatGPT’s upcoming tech, then report on that.

But this is the real world, not a ’60s horror movie. Sam Altman and his cronies are not evil geniuses coming up with monstrous devices no one else in the world could conceive of. Sacking or not sacking Sam Altman will make zero difference to the path AI technology takes. In the real world, I would be amazed if OpenAI are as much as a year ahead of their competitors.

If there is something untoward coming down the pipe in AI technology (other than the stuff we know about), then that is what needs to be reported on, not the personality clashes in the upper echelons of OpenAI. I really could not care less about that.

Yellowstone has been on hiatus for nearly a year, the Israel/Gaza conflict is too polarizing to casually dish on, and the Billionaire Bro Row have all settled into their respective post-divorce/proto-fascist roles. So this stupid nerd soap opera is the new hot-off-the-press item until Donald Trump makes an offensive comment about Kate Bolduan or a Mediterranean cruise ship gets hijacked by the Princely Liechtenstein Security Corps Elite Kommandokorps in a daring mid-day raid.

Stranger

Sorry, but I have never seen a story like this before, probably because no other company was set up with such a weird structure.

Show me another company anywhere near this size that had the following events happen. I think this is the rough timeline:

  • Founder and CEO fired by a ‘safety’ board. OpenAI started out as a non-profit, then the non-profit spun off a for-profit, with the non-profit board remaining for ‘safety’ and a for-profit board with a fiduciary responsibility to make a profit. The two boards had competing goals, leading to this crash.

  • Microsoft’s Satya Nadella parachutes in, upbraiding the board and demanding that Sam be reinstated. Sam demands that the board be terminated before he’ll come back. The board agrees, and it looks like crisis solved.

  • Sam comes back to work, and they give him a ‘guest’ badge and tell him that he’s not coming back after all, and the board is going nowhere.

  • Microsoft announces that Sam and ‘other workers’ from OpenAI will be going to Microsoft to start their own AI division.

  • Those ‘other workers’ turn out to be just about the entire company. 710 of 770 employees sign a letter saying that if the board does not resign and reinstate Sam, they will all quit en masse. The letter was incredibly scathing.

  • Microsoft announces that any workers at OpenAI who want to come to Microsoft will retain their full pay and benefits.

At this point it looked like a $90 billion company might just fold completely and essentially be recreated inside Microsoft. Microsoft would get a $90 billion company for the $13 billion they had already invested. Amazing.

But then the saga took another twist. The board hires a new CEO, the head of Twitch. But the new CEO takes a look at what happened and tells the board, “Give me a good explanation for the firing of Sam Altman, or I walk.” They don’t; he does.

Two members of the board resign. One turns out to have been writing articles critical of OpenAI and favorable to its competitors. Strange. She also wrote that she didn’t care if she destroyed OpenAI if that would fulfill the board’s goal of ‘safety’. The shareholders did not take kindly to that.

Anyway, Ilya Sutskever, a co-founder and head scientist who was on the safety board and voted to nuke Sam, suddenly recants and says he made a horrible mistake. Larry Summers is added to the board, and it votes to retain Sam and Brockman and the rest.

This was quite the soap opera, and it was newsworthy as hell with an AI backdrop. I’ve seen startups fail for lots of goofy reasons, and I’ve had my own startup software company. I have never seen anything this crazy.

Yes. This.

This management tussle is the soap-opera version of “Is AI our savior or a frankincense monster that will destroy us all?” The warring boards lurch left, then right, then back left.

The ultimate human interest story, as all of us have an interest in what’s about to unfold.

I get that, but while I’ve never seen that exact series of events before it really doesn’t seem completely off the wall and crazy, either. I’ve seen plenty of flip flopping in the higher echelons of the board and C-suite over who is getting fired and why. I’ve also seen some pretty soap opera moments in companies I’ve worked at, including:

  • A board member (and investor) trying to get the CEO fired and change company direction because of “economic conditions”, where it transpired the “economic conditions” in question were that his mistress had convinced him to invest all his money in a South American racecourse that did not in fact exist.
  • A head of studio at a major international games company telling the CEO she could “suck his dick”.

And, again, this is a medium-sized company with fairly insignificant revenue. Yeah, those are some quite newsworthy things, but that doesn’t mean I want to be included on their HR emails.

Sure, personality conflicts happen all the time, and the Venn diagram of founders and people with issues has a large overlap. But come on… OpenAI has been in the news constantly for a year, and AI is poised to change our lives. Have you ever seen personality conflicts actually come close to destroying a 90B company in a day? Or a conflict in which one half of the company says their own products are a threat to humanity, and the other demands full steam ahead?

This story was a lot bigger than, “Sam Altman and a board member get in a spat.” It has ramifications for the entire industry, and for that matter, mankind. There is a huge debate over when to trade growth and innovation for safety and alignment in the AI space, and this story is at the heart of that as well.

Yeah, why all the attention for Taylor Swift? She’s just a girl singer. I’ve heard lots of other girl singers.

Here’s the thing, though: that’s actually a good analogy, because OpenAI are not the Taylor Swift of companies. This would be like the BBC front page reporting on who Billie Eilish is dating right now.

Were your companies getting tons of coverage in the press before? Did your CEO testify before Congress? Altman was the face of OpenAI, and when he was fired by the “who the fuck are they?” board for no obvious reason, it was big news. And things just got more interesting after that.
And if you think revenue is an important measure of importance of startups, you don’t understand Silicon Valley very well.

That reporters could use ChatGPT, and are worried that their jobs might go away because of it, probably had something to do with it also.

I’m incensed by this auto-correction.

If it was intentional, I demyrrh to your comic genius.

Gold? Am I doing this right?

Over at Bloomberg, Matt Levine has a solid and entertaining take on the OpenAI saga. He’s a funny writer, who routinely backpedals in his extensive footnotes.

Gift links only active for 7 days (I get 5 per month):
https://www.bloomberg.com/opinion/articles/2023-11-20/who-controls-openai?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTcwMDcxMjkxMSwiZXhwIjoxNzAxMzE3NzExLCJhcnRpY2xlSWQiOiJTNEZNNjREV1gyUFMwMSIsImJjb25uZWN0SWQiOiI3RUU0QUE0NTMyMEM0Mjk0QTBDQTNBODJERUQyQ0YyOSJ9.ejoEN7zaASSQRpEeCWZ3tuAHO2k8JpcH_lXeyGyNXIE

https://www.bloomberg.com/opinion/articles/2023-11-21/openai-is-a-strange-nonprofit?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTcwMDcxMjk2OSwiZXhwIjoxNzAxMzE3NzY5LCJhcnRpY2xlSWQiOiJTNEhKMUFEV1gyUFMwMSIsImJjb25uZWN0SWQiOiI3RUU0QUE0NTMyMEM0Mjk0QTBDQTNBODJERUQyQ0YyOSJ9.ZP8ooWczaqwMlx0LJgaC69hLmk5sdW9BH634T5Fifq8

When you click through, you’ll have to hold down your mouse to prove you’re not a robot.

OpenAI is a very strange nonprofit! Its stated mission is “building safe and beneficial artificial general intelligence for the benefit of humanity,” but in the unavoidably sci-fi world of artificial intelligence developers, that mission has a bit of a flavor of “building artificial intelligence very very carefully and being ready to shut it down at any point if it looks likely to go rogue and kill all of humanity.” The mission is “build AI, but not too much of it, or too quickly, or too commercially.”

From the staff’s perspective, the board is a bunch of outsiders whose main features are (1) they are worried about AI safety and (2) they don’t work at OpenAI. (Well, three of them do, but three — a majority of those who voted to oust Altman — don’t.) They have no idea! They are meddling in stuff — AI research but also intra-company dynamics — that they don’t really understand, driven by an abstract sense of mission. Which kind of is the job of a nonprofit board, but which will reasonably annoy the staff.

Also, of course, the material conditions of the OpenAI staff are pretty unusual for a nonprofit: They can get paid millions of dollars a year and they own equity in the for-profit subsidiary, equity that they were about to be able to sell at an $86 billion valuation. When the board is like “no, the mission requires us to zero your equity and cut off our own future funding,” I mean, maybe that is very noble and mission-driven of the board. But, just economically, it is rough on the staff.

I don’t mean to say that the board is right! The board really are outside kibbitzers! Between OpenAI’s staff, who know what they’re talking about but also kinda like building AI, and OpenAI’s board, who lean more to being AI-skeptical outsiders, I guess I’d bet on the staff being right.[2]

[2] Obviously if the staff is wrong this is, like, the highest-stakes bet in the history of human civilization? AI stuff is so weird man.

In the other article Levine ponders a few sci-fi scenarios.

From a revenue standpoint, certainly not. From a level of “overall breathless media coverage” standpoint, over the past year, they kinda are. The only tech company I can think of that has gotten more coverage in the past year would be Twitter/X.

Microsoft’s market cap was up to $2,700 billion. Taylor Swift’s billionaire net worth is barely a rounding error in that.

Their combined market cap now stands at $11.8 trillion, according to Refinitiv – almost three times the size of Germany’s economy.

$11,800 billion to 1. So why shouldn’t they be getting all the headlines all the time?

AI. AI. AI. AI.

Should be global warming, but both are existential threats to humanity. Taylor Swift is a mild momentary diversion.

IIRC Microsoft owns 49% of the company and was not consulted about this at all, which pissed them off.

Overall, it seems that while what the board did was legal, the firing was not done in the usual manner/process and came as a surprise to many. I think all but one board member are leaving/being fired over this.