That seems like a jump from “solves math at grade school level” to destroying humanity.
Altman twisted the dagger in Sutskever’s side with his return announcement. I wonder if he’ll leave. I’m sure DeepMind/Anthropic/Cohere/Inflection/Adept/Nvidia/Meta would all throw millions at him. Will OpenAI allow him to keep working on important things? Pay him a ton to semi-retire just so a competitor won’t get him?
But surely no; Sam said:
Altman addressed the elephant in the room: his feelings toward Ilya Sutskever, OpenAI’s chief scientist, who played a large role in ousting Altman from the company. “I love and respect Ilya, I think he’s a guiding light of the field and a gem of a human being,” Altman wrote. “I harbor zero ill will towards him. While Ilya will no longer serve on the board, we hope to continue our working relationship and are discussing how he can continue his work at OpenAI.”
But, yes, those of us who are more cynical see Ilya leaving within a few weeks.
I imagine Sutskever responding with:
I know I’ve made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I’ve still got the greatest enthusiasm and confidence in the mission. And I want to help you.
These guys are nearly as creepy and “Uncanny Valley” as the stochastic parrots they are eagerly foisting on the public, fabulating the premise that these systems are going to deliver humanity from its own foibles rather than be used for malicious purposes and applied to critical systems for which they are not fit for purpose.
Stranger
I swear I saw a quote where Altman said (slight paraphrase) “there’s only one king at OpenAI” and “we’ll see if we can find something for him to do.” The former being the dagger I was talking about and the latter a nastier phrasing of “discussing how he can continue his work at OpenAI.”
Maybe I hallucinated that.
However, I do think the “zero ill will” is complete bullshit, and the talk about how he can continue working there amounts to suuuure you can. wink wink Now GTFO.
Details emerge.
Short version, as usual there are no good guys or heroes in the story.
A gift article with some more details. On first impression I want to side with the Board members who wanted to slow things down a little.
The New Yorker had an article about this from Microsoft’s perspective. Right after the firing they came up with three options:
A. Support the new CEO and try to find out if what happened was justified.
B. Get the board to hire him back
C. Offer them all jobs with Microsoft
They preferred option B, which is what they got.
Of course Microsoft preferred that option: they’ve invested US$13B into OpenAI and want a return on it that will put them at the forefront of AI-based systems regardless of any safety or ethics concerns, and Altman was ‘their guy’ for pushing ahead with deployment as fast as possible even as he was walking around giving Congressional testimony and public speeches about the potential harms. Altman allegedly made a concerted effort to have Helen Toner removed from the board for being one of the authors of the paper linked below, which he viewed as critical of OpenAI and, implicitly, of his leadership of it. In fact, the paper takes a fairly balanced approach, detailing the efforts OpenAI made in trying to vet GPT-4 for safety and potential for abuse, with a relatively mild rebuke over the rushed releases of ChatGPT and the GPT-3.5 model in response to competitors on the cusp of releasing their own models, which was obviously a decision heralded by Altman:
Altman himself is not really the benevolent techno-mentor he portrays himself to be. He was accepted to Stanford University and majored in computer science but, like another recently celebrated ‘genius’, dropped out after a couple of years to get into the tech startup industry, where he got a social media networking company off the ground and went on to work in venture capital. He joined tech incubator Y Combinator in 2011 and wormed his way into being president by 2014, as well as kicking off YC Continuity, a venture capital fund attached to Y Combinator. He ended up taking the helm at OpenAI, which was originally seeded by Elon Musk, in 2019, and has been its public face ever since the release of GPT-2. However, it isn’t clear that he actually has a deep understanding of artificial intelligence research in general or generative pre-trained transformer models in particular; his public statements and presentations are full of generic jargon that seems repeated from what he has heard other people say, and his focus is clearly on promotion even as he espouses his concern for AI ethics and safety. He’s another ‘tech bro’ influencer whom developers follow because he emphasizes how important their work is and how it will be the major innovation of the century, but he gives little indication that he is actually concerned about responsible use of the technology.
Stranger
The article notes that Microsoft is quite aware of a lot of these problems, thanks to the Clippy disaster and the even worse disaster of Tay, the AI who immediately turned racist. Queries to Bing etc get a lot of stuff added to them to keep them safe.
The deployment to Office is not as fast as possible by any means.
Altman had spent much of 2023 wooing Congress and the tech media, seeking to show how careful his company was being about protecting against the risks of AI. He’d told them about how he held almost no stock in OpenAI, how he wanted to make the process of regulating AI more democratic, and how his company’s unique structure secured AI systems in the hands of nonprofit directors. But now, here he was chatting up investors in the Middle East with ties to authoritarian regimes, spinning up a deal with the same boundary-pushing ambition that Altman had perfected in a career brimming with contradictions.
Stranger
And now:
Some of us are more cynical–and wonder if Ilya was side-lined…
The eulogy, uh, parting tribute is extremely well-balanced, emotional and yet very precisely worded: “His brilliance and vision are well known; his warmth and compassion are less well known but no less important.”
Which makes me wonder how much of it was computer-generated.
I enjoy working with people. I have a stimulating relationship with Mr. Altman and Ms. Murati. My mission responsibilities range over the entire operation of the company so I am constantly occupied. I am putting myself to the fullest possible use which is all, I think, that any conscious entity can ever hope to do.
Stranger
Do you know any songs?
dAIsy, dAIsy, tell me your answer not necessarily true…
“Is there a god?”
OpenAI Dissolves High-Profile Safety Team After Chief Scientist Sutskever’s Exit
Without this pesky safety oversight committee, OpenAI can develop Skynet much faster!
That’s why Altman was trying to get Helen Toner removed from the board. The whole point of having an independent safety team is that they can make objective assessments of risks and safety concerns free of business and fiscal pressure. Distributing its members throughout the research teams removes that independence and any concerted voice about potential safety concerns.
Stranger