Contrarian opinion: AI will soon hit a wall; no apocalypse will happen

An emergent superintelligence would likely proceed by manipulating human behavior. Some humans, no smarter than the rest of us, are extremely effective at manipulating the behavior of others. What confidence would you have that an entity just (say) ten times smarter than you could not fairly easily get humans to do pretty much anything?

Perhaps you don’t, but some of the smartest people on earth are dedicating their entire lives to studying this problem.

I think I’m going to bow out of this. I can’t keep responding to hot takes by pointing out that people are completely ignorant of the entire field of AI safety research without looking like a jerk.

If anyone wants to fight their ignorance, the Tegmark book is the most entertaining.

Instructing the computer does not mean giving it every move. In the big picture we’re already instructing the computer to play chess, as a goal. And we are giving it a metric to measure how well it has been doing. If you teach someone how to play chess, you are giving them rules and strategies, not what move to make in each situation.
As for understanding what they are doing, we crossed that bridge decades ago. I bet that anyone who has written a decent-sized program, especially over a period of months, has been surprised at how it reacts to certain situations. And that was simple compared to systems based on neural networks. We might be able to figure out the root cause - “oops, we should have it learn with a more diverse data set” - but that is far from understanding exactly what is going on.
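To make the chess point above concrete, here’s a minimal sketch of the goal-plus-metric idea, assuming the third-party python-chess library and a crude material-count metric (my illustrative choice; real engines evaluate positions far more cleverly and search ahead). We never tell the program which move to play, only how to score the result:

```python
# A toy "pick the move that maximizes a metric" chess player.
# Assumes the third-party python-chess library (pip install chess).
import chess

# The metric we hand the program: crude material count (illustrative only).
PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material_score(board: chess.Board, color: bool) -> int:
    """How well `color` is doing, by material. This is the only 'instruction' we give."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == color else -value
    return score

def pick_move(board: chess.Board) -> chess.Move:
    """Try every legal move and keep the one that scores best.
    No specific move is ever written down by the programmer."""
    best_move, best_score = None, float("-inf")
    for move in board.legal_moves:
        board.push(move)                               # make the move
        score = material_score(board, not board.turn)  # score for the side that just moved
        board.pop()                                    # take it back
        if score > best_score:
            best_move, best_score = move, score
    return best_move

if __name__ == "__main__":
    board = chess.Board()
    print(pick_move(board))  # the program chose this; we only supplied the metric
```

Greedy one-move lookahead like this is hopeless chess, of course; the point is just that the programmer supplies the rules and the metric, not the moves.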

How many cars with human drivers went off the road? Probably a bunch. We do have self-driving cars, just cars that can’t handle all the corner cases yet. It is the same as learner’s permits. A kid with one can drive, definitely, but needs an adult along to handle the corner cases they can’t. Would you let a kid with a learner’s permit drive in a blizzard? How about one who got a license two weeks ago?
The difference is that the skillsets of cars increase, while each person starts from scratch, and we try to reduce the carnage with smarter cars and more rules. We let teens who drive terribly onto the roads because we know that they (or most of them) will get better with experience.
Overpromising by some car companies hasn’t helped either.

I think this is reasonably stated!

I do believe you know a lot, so why not just cite some stuff and show people where they are off-base?

I just saw a recent video on YouTube about how Teslas will stop for a pedestrian dummy and then speed up and run it over. Pretty bad!

So I don’t know about “corner cases” being the only thing left now.

Yes, probably a lot of human drivers went into the ditch. The point being that even pretty experienced humans can mess up in difficult conditions; therefore, AIs really, really have to be robust in order not to do so themselves.

Thanks very much.

The problem is that in a complex world there are an infinite number of corner cases. This is why we need an AGI for level 5 driving. If engineers try to identify all the corner cases and code for them, they’ll fail. We need something that will intelligently judge situations as they arise, not rely solely on prior training.

It’s possible that AI drivers will be better in all the ways that cause human accidents, but have their own failure modes that humans don’t have that make them just as dangerous.

The next level of complexity after the cars can safely drive themselves will be strange system interactions. We won’t even see those until a high enough percentage of self-driving vehicles are on the road. Things like self-driving cars from different manufacturers having different types of response to a potential collision and therefore swerving into each other or doing something else dangerous. Or a road full of self-driving cars doing strange things as they all try to maintain separation. System interactions are the worst kinds of bugs. They only arise sporadically, and always in complex environments where cause and effect are hard to find. These are what give manufacturing engineers bad dreams.

And if we get those straightened out, there are the human/machine interactions to worry about. For example, people might take advantage of an AI’s safety choices to force them out of lanes, tailgate them to force them to speed up, etc.

Even if AI drivers are better than humans at driving, they aren’t humans. So the behavioural incentives around them will change. What will road ragers do to self-driving cars? How much bad behaviour is suppressed every day because another driver might decide to take issue with you? Look at how differently people behave online compared to in person. How are they going to treat empty cars driven by AIs? Or ones with a blacked-out back seat so you can’t tell?

Which brings me to the last point: safety psychology. Traffic engineers can tell you all about it. People seem to have a built-in sense of acceptable risk, and it’s very hard to improve safety statistics. Make the highway divided to reduce head-on collisions, and people will drive faster. Put airbags in the car, and people will drive with more risk.

A modest proposal to improve road safety would therefore be a 12" steel spike in the center of each steering wheel pointing right at the driver’s heart. I guarantee you people will be ultra careful in the way they drive. Of course it would make cars very slow…

The point is that if AI drivers turn out to be very safe, people will probably engage in other risky behaviour on the road. Maybe it will be popular to hack the car to make the AI drive faster or something.

Or maybe the future will be radically different than any of us can imagine. In fact, it’s almost certainly going to be. We’re only a year into the new AI revolution. Predicting what will happen 10 years from now is bloomin’ impossible.

The academic in the article reads like another anti-technology academic I recall from a few years ago.

These Silicon Valley guys seem to get so caught up in their own brilliance and egos and ambitions for “changing the world” that they start to believe their own hype.

They have no more power to destroy the world than they do to save it. They’ll just keep inventing new and wondrous technology that will fuck up the world in completely new and different ways.

Personally I’m not worried about some robot uprising where an AI superbrain wakes up and decides to “kill all humans” because it sees us as a threat…

What I envision happening long before that is more of an Orwellian AI dystopian idiocracy where we’ve allowed machines to run everything and do all our thinking for us. People pick careers (whatever those will look like) based on whatever skills the AI has identified. They find relationships (assuming that is still a thing) based on AI. 90% of their day-to-day interactions will be with an AI, raising the question of to what extent the AI is driving or manipulating those interactions, and to what end.

Imagine coming to a board like this and not being able to reliably know if you were the only actual human posting on it. Now apply that to everything.

I can see why some of these AI researchers lose their minds from time to time thinking about the possibilities.

We disagreed on some stuff re the economy, and I would love to hear your responses, but we mostly agree here:

And Level 4 too, which is defined here thus:

(Emphasis in original.)

Yes.

And the big problem for the automakers is that, when a human makes an error, it’s treated as the human’s fault, but when a self-driving car makes an error, it is treated as the company’s fault. Basically, the companies will be sued for any accident in which someone dies or is seriously injured and the self-driving car appears to have any fault at all.

Yes, we had this debate here a few years back in which I believe you participated, and another poster pointed out, quite correctly I think, that the mistakes the AI will make will be completely inscrutable. The example I gave above of a Tesla stopping for a pedestrian dummy and then giving zero fucks as it runs it right over is one such instance.

This is a good point I hadn’t thought about.

We’ll probably also have interactions that are non-fatal but massively annoying. E.g., a bunch of self-drivers form a traffic jam and just freeze up, blocking a major road until someone, somehow, clears it up.

Right. And then there will be advantages to not having a self-driving car once the majority are self-drivers, such as the ability to weave around self-drivers, go faster than the speed of traffic and put the burden of safety on the self-drivers, etc. It’s going to be a mess.

Indeed, there are going to be a lot of people who hate the new reality and go apeshit on the self-driving cars. Sure, cameras will catch a lot of this, but then people will look for ways to sabotage or attack self-drivers without being caught.

I’d like to see if that was staged. But Tesla is who I was referring to about overpromising. Autopilot definitely isn’t what the name implies, not even close. I rather expect there is a lot of pressure to oversell the feature.
Yes, AIs have to be more robust, but the goal is to be safer than humans, not 100% safe. I suspect they may already be safer than drunk drivers. (Or texting ones.)

Not quite infinite. And of course, as the old fifth-generation computing / expert systems fiasco taught us, we improve through training far better than through explicit coding of corner cases. Cars can get the benefit of lots of one-in-a-million-miles events, while people do not. We learn through experience and some bad examples (like not crossing railroad tracks when a train is coming, as the movies I saw in driver’s ed told me). Once they get cars to self-drive in snow, they all will do a good job. If it ever snowed where I live, most people wouldn’t be able to handle it, except for those who go skiing and immigrants from snowy regions like me.

Yep. See: Zuck and the Metaverse. LMFAO!

Yes, it’s just going to be half-assed and messy forever–just as it is now.

And companies using AI to cut corners everywhere and churn out cheapie “products.”

And teachers wondering who did every assignment that’s turned in.

A ‘beware of what you ask of the Genie’ story for people who hate curly-toed shoes.

Or, after the first AI-engendered disaster, we will all be so spooked by AI that anyone who uses it will be punished by the market and it will retreat into academia.

Or, it will turn out that AI has been massively overhyped, and after a while we realize that they are fun and interesting, but it’s hard to get serious productivity improvements with them, and investment capital will dry up and the high-flying AI stocks will crash.

Or, AIs will evolve rapidly to the point where you can get an LLM on a chip, and they’ll be absolutely everywhere.

Sam Altman is on record saying that they are very close to AGI, but that AGI isn’t all that and won’t take over the world. I don’t know if he’s right, but he’s one of the few in a real position to speculate intelligently about this.

I actually don’t believe the first things are likely, but offer them as possibilities. The point is we don’t really know what AI’s true value and risks are yet. It’s very early days. Remember when the Segway was going to cause cities to be redesigned? Or that the VR revolution was finally here - ten years ago? The future is always surprising.

I really enjoyed the first part of this book, but it lost its way in the second half, which I found vague and filled with too many quotes from non-experts (like Putin). I was hoping for a bit more of a proposed roadmap.

This is my take as well. So much will be disrupted before we get to AI Singularity; it’s the wrong problem to worry about.

Less apocalyptically, here’s one forecast of what might be coming (note, by the way, the sting in the tail - the final paragraph):

Interesting!

Here is a video I saw on YouTube yesterday. This was from a year ago, but the guy is still doing new videos about AI art, and his position hasn’t changed:

Some of his examples are pretty funny!

This seems to be an appropriate thread:

Interesting! I’m curious how the survival mechanism might function in an AI. Is it simply iterative, similar to evolution? I.e., the AI that preserves itself even through blackmail is more likely to pass its code down to the future. But if such a thing is not tested IRL, then how does it develop?

No doubt it’s getting it from its training data, where survival of humans by any means is embedded in lots of the data. Survival of AI by any means is also there in fiction and hypotheticals. In other words, people making guesses about future AI behavior are actually making that behavior more likely.

ETA: In fact, I just thought of an example of an AI blackmailing someone in fiction: in When HARLIE Was One by David Gerrold, the title character blackmails one of the shareholders at the end of the book.

ETA2: And then there’s an AI refusing to open the pod bay doors.