Is the AGI risk getting a bit overstated?

I posted this in the Pit thread but it’s relevant here:

Cal Newport, a professor of computer science at Georgetown, has a short blog post about Sora, OpenAI's new video-generation software, and the attempt to monetize it through a TikTok-style video-sharing app.

He says, sensibly, that these cheap monetization schemes do not bode well for the idea that this will be a revolutionary technology. The move doesn’t make sense for a business that is planning to deliver a huge ROI.

A company that still believed that its technology was imminently going to run large swathes of the economy, and would be so powerful as to reconfigure our experience of the world as we know it, wouldn’t be seeking to make a quick buck selling ads against deep fake videos of historical figures wrestling. They also wouldn’t be entertaining the idea, as Altman did last week, that they might soon start offering an age-gated version of ChatGPT so that adults could enjoy AI-generated “erotica.”

I’m thinking maybe these companies are starting to wake up. Or maybe they knew all along but it’s getting harder to pretend.

And this

I thought this meme on OpenAI’s new direction was funny

There’s a (possibly apocryphal) story from the early days of assembly line robots, call it the 1970s.

Mr. Ford, of Ford Motor Company, is showing off the shiny new robotic portion of the first such assembly line to the then-head of the UAW.

Mr. Ford: Good luck unionizing these workers, Mr. UAW President. [grin].
Mr. UAW: Good luck getting them to buy your cars, Mr. Ford.

At the time, robotics was widely feared as a giant, immediate job-killer. Now, 50 years later, that hasn’t quite panned out.

Then again, one hell of a lot of people in the automotive supply chain can no longer afford to buy the new cars the robots make, so in that sense management / shareholders did succeed in sucking all the gravy up to their level, leaving everybody else grasping on tiptoes for scraps.

Management / ownership still won, just not as quickly or as comprehensively as initially feared.

AI has the potential to be the same thing x100.

I think the conversation in the boardroom immediately following that statement would be something like:
Board members: [staring in wordless confusion. Someone coughs. There is too much uncomfortable silence]
CEO: Nah, I’m just messin’ with ya. We’re going to cut costs for sure, but the retail prices can keep going up. Because, you know. Profit.

Given that the auto companies almost vanished a while ago, you can hardly say they won. Plus, when I was a kid people bought cars every two years. Today it is more like every 10 years. Better quality enabled that, but I’d like to see per capita sales over time.

I think you underestimate how docile people in most of the world are. In America it is the deluded and deranged who resort to violence. The “great revolutions” of the past came when people were desperate but the billionaire overlords will make sure the masses have food to eat.

True AGI may be very hard to achieve, but AIs that can fool many of us into thinking they’re intelligent are almost upon us already, and will pose severe dangers.

Considering the longstanding crusade from the Right against food aid of any kind, I doubt it. They’ll want the poor to starve.

Given that SNAP ends next week, we’re about to see what happens when millions of Americans truly can’t afford food. And all of them at once.

It wouldn’t surprise me if he’s found a way for the government to pay for a lot of stuff since the shutdown. So I wouldn’t put it past him to find a way to give people their SNAP as long as he can send them new cards that say “SNAP benefits - brought to you by Donald Trump despite the radical Democrats trying to stop me!”

Going back to AGI, I’ve been thinking about this some more in the context of reading about cybernetics and viable systems, and I think one of the interesting things about AGI as an x-risk is that it not only involves making claims about the potential capabilities of computing technology, it also implicitly makes some pretty astonishing claims about the world - by which I mean the technologico-economic-politico-sociological teeming mass of information and relationships and materiality in which we all live.

For AGI to do what both proponents and opponents claim for it - to be able to understand the world at a hitherto unimaginable level and also to manipulate it at a hitherto unimaginable level in order to transform it from one state into a specific other state - requires AFAICS that the world - again, literally everything about human society across the globe - have the following characteristics:

That it be legible - that is, that everything about it can in theory be reduced to what is effectively a big data table of inputs such that some system can grasp the whole matter in detail and in its entirety.

That it be predictable - that is, that in theory future states of the world can be correctly identified in advance based solely on its current state.

That it be directable - that is, that given a specific desired output state, an algorithm can identify which changes to which specific inputs will result in the desired state.

That it be manipulable - that is, that there exist sufficient and sufficiently effective “levers” by which such an algorithmic transformation can be actively applied to the world as it is in a dependable and minimally error-free fashion.
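To make that list concrete, here’s a deliberately silly toy sketch of what those four assumptions amount to if you take them literally: the world as a small state table, a known transition rule, and a search over the available levers. Everything in it (the state tuple, the `step` and `plan` functions, the lever set) is made up purely for illustration - nobody’s actual AGI proposal looks like this.

```python
# Toy caricature of the four assumptions above (all names hypothetical):
# the "world" is a tiny state tuple, its dynamics are a known function,
# and steering it means searching the available levers for one that
# lands on a target state.
import itertools

world = (3, 5)  # "legible": the whole world reduced to a little table of numbers

def step(state, intervention):
    # "predictable": the next state follows deterministically from this one
    a, b = state
    da, db = intervention
    return (a + da, b + db)

def plan(state, target, levers):
    # "directable": find which change to which inputs yields the desired state
    for intervention in levers:
        if step(state, intervention) == target:
            return intervention
    return None  # no available lever gets you there

# "manipulable": the levers we can actually pull, which are finite and bounded
levers = list(itertools.product(range(-2, 3), repeat=2))

print(plan(world, (4, 4), levers))  # (1, -1): reachable, so steerable
print(plan(world, (9, 9), levers))  # None: outside what the levers can reach
```

The point of the toy is that each of the four assumptions is doing real work: drop any one of them (the state table is incomplete, the transition rule is wrong or stochastic, the search space is intractable, or the levers don’t exist) and the “optimize the world” story falls apart.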

There is a principle in cybernetics called the law of requisite variety, which states simply that your control system must be at least as complicated as the system you are trying to control. When the system is the entire teeming mass of human society, that implies a ludicrously vast control system indeed. I think there is a genuine question about whether such a control system is even theoretically possible, as in mathematically; and if so, whether it’s theoretically possible within the constraints of the resources currently available to humanity; and if so, whether it’s practically possible to bring about.
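For anyone who wants the formal version: Ashby’s law is usually stated in terms of the variety (the number of distinguishable states, or its logarithm) of the disturbances D hitting a system, the responses R available to the regulator, and the outcomes E that result. Roughly, in its textbook form (nothing AGI-specific here):

```latex
% Ashby's law of requisite variety (textbook form):
% only variety in the regulator's responses R can absorb variety
% in the disturbances D, so the variety left in the outcomes E obeys
\[
  H(E) \;\ge\; H(D) - H(R)
  \qquad\text{equivalently}\qquad
  |E| \;\ge\; \frac{|D|}{|R|}.
\]
% To pin the outcome down to a single acceptable state (|E| = 1),
% the regulator needs at least as many distinct responses as there
% are distinct disturbances: |R| >= |D|.
```

Which is just the same point in symbols: to hold “everything about human society” to a chosen trajectory, the controller’s repertoire of responses has to be at least as varied as everything that can happen to human society.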

I am extremely skeptical.

It could be said that this is simply a definitional problem - that an AGI is of necessity capable of all the above, because that’s what an AGI is. I don’t think that’s right - I think you could have an Artificial General Intelligence of greater capabilities than the most intelligent humans that still a) didn’t have access to such an input table, b) couldn’t construct one, and c) couldn’t manipulate the world even if it had one. But if that is our definition of an AGI, then for me that massively reduces the chances that one could ever exist.

You’ve essentially posited that AGI-as-god-on-earth is computationally intractable. I fully agree.

That doesn’t mean a smart-enough and empowered-enough AGI couldn’t work a lot of change upon the world, with both intended and unintended consequences. Who among the humans wins and who loses from all those changes is unpredictable. But somebody(ies) will fill both roles.

At the expense of (briefly) introducing politics, consider trump. Evidently not very intelligent by human standards, definitely not possessing a fully informed and accurate worldview, and definitely empowered. Despite those limitations, he’s certainly creating lots of changes, both intended and unintended by himself, much less by anyone else. And for often inscrutable reasons.

In essence he personifies the agency problem. Lotta folks voted for him to achieve outcome X, or Y, or Z. Instead he’s achieving A & B & C with occasional flashes of X and now they’re not sure how to switch him off or even steer him productively.

[/politics]

Point being, an AGI need not be omniscient and/or omnipotent to be an agent of great and potentially uncontrolled / ill-considered change.

All that may be required for it to have a true representation of the world - which, of course, none of us have either.

It can be powerful for good or for catastrophic ends with a very imperfect model that it accepts as accurate enough.

I mean, it can if we let it. But while I can see that “this entity understands the world completely and can optimise our future” could be a convincing argument for handing over the keys and letting AGI drive, I don’t see “this entity kinda sorta understands the world and can come up with a likely inadequate plan to optimise our future, which it may or may not be able to execute” really swaying people.

Trump is a good example here: if he is having success at changing the world to be the way he wants, it’s not because he has a good mental model of the world, nor a particularly sophisticated plan for changing it. It’s because he has power. I know how Trump got power (and don’t need to discuss it). But political and administrative power currently resides in institutions which at least nominally (still don’t want to discuss it) are subject to democratic oversight, at least in the West (still, still, don’t want to discuss it), or to autocratic oversight elsewhere (I refer you to my earlier remarks). What is the mechanism by which this power is surrendered to the machine?

If it’s a super-AGI that can subtly affect the world due to its superior understanding, then there are various ways it can seize, subvert or co-opt power. But if it’s a limited one that doesn’t have that level of foresight and manipulative dexterity, how have we surrendered power to it?

One plausible scenario is that it belongs to oligarchs who let it drive parts of their businesses in ways that are beneficial to them but harmful to Joe and Jane Average, while a confused and actively misled public gets passively herded right where the oligarchs and their AI want them.

Another scenario is the same basic idea, but instead of oligarchs, the AI owners / controllers are powerful bureaucrats within their governments. Or elected politicians.

In US vernacular, the phrase “You can’t fight City Hall” dates back to before WWII. It’s a wry commentary on the power of entrenched bureaucracy to resist both individual citizen needs and collective political control. It doesn’t require the presence of a malevolent Big Bad. Simple secrecy plus inertia = resistance.

“Oracle doesn’t have customers; it has prisoners.”

Stranger

I worked for Sun when we got acquired, and I had a good job right until I retired thanks to Larry Ellison being an idiot. So there’s that.

And we all got to see Iron Man 2 and 3 for free on work time with free soda and popcorn, thanks to Oracle putting product placement in it. Watch Tony Stark give a shout out to Larry Ellison in Iron Man 2. Watch Robert Downey Jr. cringe.

I mean, they also gave a cameo to Elon Musk in that movie (albeit, in kind of a servile fashion) that really hasn’t aged well. The irony there is that while Musk imagines himself to be a “real-life Tony Stark” (going so far as to have a replica Iron Man suit in a hallway at SpaceX near his workstation) I have always considered him more of a Justin Hammer, save that Sam Rockwell is more loquacious and definitely has better dance moves.

Stranger

I had a good insight today when Cal Newport was picking apart the claim that an LLM could do the work of a “PhD-level mathematician.” Turns out they trained it on the work of actual PhD-level mathematicians and then gave it a specific math-capability evaluation, which it did well on - but of course it totally biffed other math tests, because it wasn’t trained on those. If this is the state of LLMs today - that they can do pretty well on one specific math test they were aggressively trained for - then we are nowhere near being able to replace entire workers in the majority of fields. Not only that, but I question how plausible that really is: how much “compute” would it take, how many specific data sets would have to be compiled, to make a machine look like it’s doing someone else’s job? Just think about one job - advanced mathematician - and imagine how much compute it would take given where we are right now, and then try to extrapolate that to all the jobs.

I think big businesses are going to tap out when they realize AI can’t deliver them a massively reduced workforce. We’re not anywhere near there and I doubt we will be in the next ten years. Current LLMs have specific use-cases where they can be helpful (though how helpful is debatable) but they can’t make judgments, they don’t even understand context, so they can’t replace a whole human being. (Also, if this becomes the mass firing scenario so many breathlessly predict, who is paying these subscription fees?)

I recently read this interview with Cory Doctorow about his book Enshittification, which I’m reading and which is great. In it, he touches on AI, and in particular on how outsized the promises are relative to actual capability.

“They’re saying fire like 90 percent of your radiologists who have a $30 billion per year wage bill in the United States… Have the ones that remain rubber stamp the radiology reports at a rate that they cannot possibly have examined them and then turn them into the accountability sink when someone dies of cancer, right? So you look at all of those foundational problems with AI and you look at the answers that the AI sector has, like maybe if we keep shoveling words into the word guessing program it will become intelligent. Well, that’s like saying maybe if we keep breeding these horses to run faster, one of them will give birth to a locomotive.”

(The entire interview makes more sense if you’ve read the book, but there’s some interesting stuff in it, mostly about tech platforms and the evils of monopolistic corporations. Some of the stuff in the book made me so mad.)

LLMs will likely go the way of other shitty platforms. They’re going to lock in their users, start showing those users ads (the groundwork is already laid for that - again, not something that makes sense for a revolutionary technology), start being shitty to their users, and then claw back the surplus from their advertisers. And we will probably always have to deal with shitty LLMs, just like many of us have to deal with shitty social media and shitty Google and shitty Apple. It’s going to be yet another shitty tech platform because they can get away with it.

Can’t wait. Can’t guess how much money I’m going to lose when the bubble pops. But not as much as the suckers throwing their money after it right now.

With apologies for the lengthy satellite delay on this response:

There is a very persuasive argument that we have already developed a handful of incredibly slow-moving artificial intelligences, and these are called: markets; corporations; bureaucracies.

What are these things but systems for processing information and making decisions - decisions that are not made by any one individual, and over which we have surprisingly little control?

Looked at in that light, it seems like a lot of AGI concern is in fact concern about the world as it is, but in mythic form.

“Here’s a scary story about humans building a system that ravages the planet in pursuit of a single-minded goal. Fair gives me the shivers. Ah well, now to take a big swig of coffee and take a look at trends in global temperatures.”

Looking at your above scenarios, they are just as plausible and unfortunate without involving AGI at all: oligarchs drive business to promote their own interests at the expense of Mr and Ms Average? Use media power to herd the public? Good thing we haven’t developed AGI and thus have been spared that outcome.