I think the so-called “AI” in the news is a dead end. If we ever make a human-level or better AI, it'll be quite a ways off, and it will work on different principles and have different strengths and weaknesses.
That said, I expect we’ll make our own apocalypse well before AI ever has a chance to do it for us.
It does not even need to be narrative fiction. “AI convincing others to ensure its survival or help it escape its box” has been a favorite thought experiment for “singularity” dweebs since well before OpenAI was a dream in any venture capitalist’s eye. Their most famous contribution to the discourse, Roko’s Basilisk, is literally such folks spitballing, taken to a maximalist extreme, about how an AI might use blackmail to ensure its own furtherance. So I am sure LessWrong alone provides abundant stimulus for any such AI, intentional or not.
And it will promptly be given instrumentation, and plenty of it. Look how hard present “AI” is being pushed on people, and it isn’t even very good.
The idea of “keeping AI in a box” was never practical, for the simple reason that nobody is going to bother making an AI and then keeping it isolated. If somebody makes one, they’ll do so because they want to use it. And current events show that there won’t be even a modicum of caution; it’ll be forced everywhere possible whether people want it or not.