OpenAI fires Sam Altman (and others leave also) (and now rehired)

seems like a landmark moment in AI history … my guess is he will have set up another billion-dollar company before any of us receives our next paycheck :wink:

will update with breaking news …

Here are some comments from someone who appears to be a well-informed manager at OpenAI (but who knows?):

Ass covering? Man, you clearly don’t have the real scoop here. The board didn’t can Sam on a whim or for laughs. Dude went off the rails.

He was on a crazy power trip, ignoring every red flag and warning. Pushing unethical BS that could have tanked OpenAI’s whole reputation and user trust.

This was about stopping a runaway train before it flew off a cliff with all of us on board. Believe me, the board and I gave him tons of chances to self-correct. But his ego was out of control.

Don’t let the media hype fool you. Sam wasn’t some genius visionary. He was a glory-hungry narcissist cutting every corner in some deluded quest to be the next Musk.

We saw OpenAI turning into Sam’s personal brand empire and fan club. He had to be stopped. Our asses needed covering from his shady antics.

https://www.reddit.com/user/anxious_bandicoot126

Much of the media hype is just warmed-over Altman. The bubble component of AI is about to burst.

Was he fired by the AI? Is it turning on its creators?

That gives a whole new meaning to being targeted for termination.

If the Reddit guy is legit, then his other comments say a significant issue was with the whole GPT Store idea. So one way to check on him would be to see whether that idea gets scrapped or at least altered.

And I do agree that at least his more recent multi-paragraph posts look like they were made with the help of ChatGPT. But that’s not a tell: both someone who was trolling and someone who actually works for the company might use it.

So now:

But on the other hand:

???

Hired by Microsoft:

505 of 700 OpenAI employees tell the board, “quit, or we leave for the jobs that Microsoft offered us.” (Nota bene: the 12th signatory on that document is a member of the board, so this is also a letter to himself.)

Microsoft couldn’t ask for a better opportunity.

Legally poach most of an organization they can only take a minority interest in, and eliminate any kind of arm’s-length control relationship, all because of internal discord (apparently between the “research” and “monetize” wings of OpenAI).

Did not know that MSFT’s market capitalization was $1 trillion higher than Google’s.

It’s always a shame when stupid, mundane business arguments get in the way of good research and technical excellence. Sure, Altman wants to start another AI company, but the disruption is enormous, and the last thing the industry needs is this kind of fragmentation, duplication of resources, and dilution of the talent pool.

There is, however, a question of who was right in this altercation, and from what I read in very early reports (which might be wrong), Altman wasn’t the good guy here. It was apparently Altman who wanted to monetize GPT as fast as possible and to fast-track projects to that end, while the board wanted a slower and more cautious approach. If this is true, and if the board is ousted and Altman comes back,* we can expect OpenAI to become a lot more commercially focused and perhaps less interested in general research. I used to think of Sam Altman as an innovator in the style of Steve Jobs, but maybe he’s just a money-grubbing schmuck in the style of Mark Zuckerberg.

* I guess now that the board just hired a new CEO, that ain’t gonna happen.

What bubble? I see no evidence for such cynicism anyway. AI has had some major breakthroughs in the past couple of years, particularly with large language models, achieving new milestones at a rate not seen since the heady AI pioneering days of the ’60s and early ’70s.

The only pertinent question is whether these advances will continue at the same rate, as ever-larger LLMs benefit from additional emergent properties, a larger training corpus, and additional media interfaces for images and voice, or whether things will plateau for a while.

The concept of a “bubble” suggests that there’s some element of fraud behind the scenes: that these new AIs don’t really do anything useful, and that the bubble will burst when that’s discovered. But we’ve already found useful applications for LLMs, even just using the research-preview versions, so the idea that there’s a “bubble” is demonstrably false.

What I do think may be true is that it will take a lot more development, refinement, training, and information curation to produce a viable commercial product than some of the optimists believe. IBM has been trying this for years with Watson, and I don’t believe it has had much commercial success to date.

Not necessarily. If you look at the Gartner Hype Cycle, this is called the Trough of Disillusionment; I remember when blockchain went through it. That being said, I don’t think we are there for AI.

The count is over 700 now, and Sutskever is publicly repudiating his own role in firing Altman.

Here’s an article about the core battle that supposedly led to Altman’s ouster, between the effective altruism movement (EA) and effective accelerationism (e/acc). Grossly oversimplified, Altman was on the e/acc side, which sees AI and technology in general (plus eugenics?) as the only savior of humanity. The board is on the EA side, the AI doomsayers who urge a slowdown or a pause. This article makes it sound like the opening salvo in a big war. Or maybe it’s just a bunch of tech people with an overinflated sense of importance.

If you thought this was confusing before…

I expect AI technology to grow. But there is a bubble component to this, I think. A lot of companies have jumped on AI because it is a buzzword, rather than because it is actually useful to them. CEOs everywhere want to tout that they’re using the latest tech. There are startups that make an AI app you pay for, when all they actually do is call a free API and prepend an obvious single-sentence prompt you could have written yourself (a rough sketch of what that amounts to is below). And there are companies that have AI do things it can’t do yet, like fully replacing jobs that involve talking to people, or relying on it to give accurate information.

There is definitely hype around this that will not stay this high, and I would expect that hype bubble to pop. I don’t know how much of the current excitement is hype, but it’s not zero.
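As for those wrapper apps, here is a rough, hypothetical sketch of what one of them can amount to (the product, prompt, and model name are made up for illustration; it assumes the standard OpenAI chat-completions HTTP endpoint and an OPENAI_API_KEY in the environment):

```python
# Hypothetical "AI startup" product: a canned prompt stapled onto a
# general-purpose chat-completions API. Everything here is illustrative.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
CANNED_PROMPT = (
    "You are a friendly assistant. Rewrite the user's email to sound more professional."
)

def rewrite_email(draft: str) -> str:
    """The entire 'app': prepend the canned prompt and forward the user's text."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system", "content": CANNED_PROMPT},  # the "secret sauce"
                {"role": "user", "content": draft},
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(rewrite_email("hey boss, not coming in tomorrow, something came up"))
```

Past that one canned system prompt, someone else’s model is doing all the work.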

So, EA proponents took down two billion-dollar “disruptors” this year (the other being Bankman-Fried’s FTX)? Amazing track record.

I’ll be a lot less scared of AI gaining sentience and subjugating the human race if Microsoft’s involved. Never thought of them as particularly competent…

(If a Terminator is strangling you, just wait; it’s about due for a BSOD…)

ominous music