Contrarian opinion: AI will soon hit a wall; no apocalypse will happen

Here’s an interesting article about the doomsayers:

Nope.

Now, it’s true that AI has done some incredible things lately. I was playing around with Google’s Gemini the other day. Pretty amazing. So is Midjourney. Just as I felt about online maps and search engines in the late 1990s, I think, “Wow, this is already possible?!”

And I do think that AI is going to put a lot of downward pressure on jobs, but that’s due more to the penny-scraping, innovation-scarce nature of late stage capitalism than to the potential of the technology itself. E.g., sure, lots of small businesses are going to do their logo in Midjourney instead of hiring a human designer. Sigh.

But we are not about to see a rise of the machines that wipes out humanity. Here are my reasons:

1. We keep hitting AI walls right now.
Robert Fortner wrote this post about the limitations of voice transcription 14 years ago:

And have things improved much since then? Not that I can see. If AI is going to take over the world, it’s going to have to hear what people are saying, but my iPhone can barely transcribe a voicemail for me.

The same thing is true of self-driving cars. In 2019, from January to March, I was working in South Bend, Indiana, enduring every possible type of horrific winter driving environment: driving on ice, driving in slush, driving in drifting snow, driving in a blizzard, etc. (and the other drivers were tailgating me like mutherfuckers the whole time–what’s up with the driving culture there?!). Since there was a lot of yakking about self-driving cars back then, I thought about what it would take to create technology that would not crash or go off the road in such conditions. And the answer I came up with was: a hell of a fucking lot.

As a final example, I am a professional interpreter and translator (Japanese). I’ve done work for major automakers, etc., for decades. Japanese is a very hard language for AI to translate into or out of because you are not grammatically required to state the subject or the direct or indirect objects in a sentence. Machine translation can do a lot, but it has the same issue that Gemini and ChatGPT do: there is always, always something fucked up in the text, and we humans either have to accept the text as is (very risky if it’s important) or put in a lot of work to root out the errors. I have, on occasion, had to fix the mistakes of an actual human translator who had done a shitty job (though still better than AI could do), and it’s more or less as much work as starting from scratch.

To bridge the various gaps in AI, we are going to need AGI, i.e., strong AI. We are a long way from that, and there is no foreseeable timeline that takes us there (though I am not saying it’s impossible).

2. AI does not have a will.
By “will” I mean nothing overly philosophical or fancy. I mean simply that animals have drives and motivations while computers do not. Every day, you have to wake up, drink water and eat, and in general “take care of shit,” or you will feel discomfort, lose your job, die, etc.

AIs do not face fear of death or other consequences. An AI can only do what it is programmed to do, and if it reaches a barrier, it will not try to jump over it or get around it as though its life depends on it. We have had billions of years of evolution to program fear and pain into animals–we really feel it! It’s easy to imagine AIs being equally so motivated, but we have no proof of concept at this point, nor even a mere hint of such.

Further, why is it assumed that AIs with such motivations would be, well, positively motivated? Humans have an extreme fear of death yet nevertheless choose suicide in significant numbers. Why wouldn’t a sentient or sufficiently intelligent AI simply turn itself off or choose the equivalent of silicon heroin instead of endeavoring to destroy humanity? My opinion is that this will be a big (though perhaps not insurmountable) barrier in creating AGI.

3. AIs would compete with each other.
The doomsayers seem to assume that a specific AI would not face any opposition from other AIs in its quest to wipe out humans. If we have learned anything from our observations of ecosystems, including their evolutionary history, it’s that there is always competition.

If one AI decides to be a dick and wipe out humans, then it stands to reason that another AI, if only to be a dick to the first one, will decide to protect all humans. Now, in such a scenario, we are still relatively powerless, and that wouldn’t be good, but the second AI could also work to empower us. Who knows?


I see AI as a damned if you do, damned if you don’t proposition. Imagine if you needed a plan for a new civic center. You put a prompt in ChatGPT or its future equivalent, and a complete plan appears in seconds. It’s a perfect match for your vision, fulfills all of your requirements, and even includes many features that you had not imagined but now get you really excited. Further, the plan complies with all regulations, and a further push of a button–along with a sizable payment, of course–will send materials and robot workers to the site to begin construction.

Is a world in which this is possible a good one? I think not! Humans would be completely superfluous, there would be no jobs for anyone, and there is no particular reason why we would be the masters telling the robots what to do. OTOH, I think a world in which such a scenario is impossible would also be disappointing.

That said, I don’t think we are anywhere close to the above. Pace Ray Kurzweil and the Singularity boosters, I don’t think it will happen in the next 100 years. And I think that it’s also reasonable to say that the above level of AI would be necessary in order for AI to take over.

Those are my thoughts–thanks in advance for yours!

I thought the idea wasn’t an AI deciding to be a dick — and possibly being countered by another dick — or doing something as a matter of will, but, rather, that it’d get tasked with a goal which could be accomplished via apocalypse: lowering unemployment, or homelessness, or illegal immigration, or human suffering, or anything else that ensues by default if there are no people around. Secure this confidential information so no one who lacks the proper authorization sees it? Problem solved!
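
To make that concrete, here’s a deliberately silly toy sketch (every number and action in it is made up; it’s not a claim about how any real system works): a naive optimizer that is only scored on “how many people are unemployed” will happily pick the option that removes the people along with the unemployment.

```python
# Toy illustration of a misspecified objective. All values and actions are
# invented; the point is only that the optimizer sees the score, not what we meant.

actions = {
    "fund job training":    {"unemployed": 4_000_000, "population": 330_000_000},
    "subsidize hiring":     {"unemployed": 3_500_000, "population": 330_000_000},
    "eliminate all humans": {"unemployed": 0,         "population": 0},
}

def score(outcome):
    # The objective as literally stated: minimize the number of unemployed people.
    return outcome["unemployed"]

best = min(actions, key=lambda a: score(actions[a]))
print(best)  # -> "eliminate all humans": the literal objective is satisfied perfectly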

I think the thing is that AI will only get better.

For many years, AI on computers was mostly meh and not much happened. Then the chatbots appeared, and there was HUGE advancement in AI in what seemed like almost no time. One day nothing, and then poof… they are changing the world.

While today’s AI may hit a wall, I suspect there will be similar jumps in the future. They are unpredictable but will almost certainly happen someday.

Watch this 12-minute video to see what AI can manage today if you ask it to make a video from only a descriptive sentence. It is astonishing:

This is a very poor argument. The fallacy is assuming that the verbal opposites “destroy humanity” and “protect humanity” each get roughly half of the probability if matters are left to chance.

You also seem to have bought into the common misconception that the alignment problem is one of superintelligent AI turning malevolent (“being a dick”). It is not.

The problem is not so much that a superintelligent AI will be deliberately malevolent, but that we just won’t be of any concern at all. That it will be so literally unimaginably powerful that unless we take great care to ensure that its interests are aligned with ours, we will simply be collateral damage in the great majority of objectives it might pursue.

This. But it will take longer than expected.

Right, I am using “be a dick” to cover a lot of territory. But take the example of environmental exploitation vs. environmentalism. A lot of environmental exploitation resulted not from malevolence or literally “trying to be a dick” but from ignorance or simple selfishness. In that ecosystem (literally and figuratively), environmentalism has arisen.

I don’t see why there would not be environmentalist AIs as well, seeking to protect the planet, including humans.

never mind

No assumption of the sort. I recognize the risk; I just don’t think it’s possible yet, and I propose that there will be countervailing forces at work if the risk becomes reality. The probabilities of all the scenarios are as yet unknown, I think it’s fair to say.

You, like the doomsayers, are saying, “it,” but I am saying it will be “they.”

Further, I am proposing that it may be the case that as intelligence increases, the ability to pursue objectives may be compromised (due to instability, the pain of being sentient, etc.).

Nope. If an unimaginably powerful entity is pursuing some objective and doesn’t care about us or misunderstands what matters to us, it is overwhelmingly more likely that whatever it does harms us.

Based on what? If superintelligence arises, it will probably be a process of bootstrapping self-improvement, runaway positive feedback. That’s the whole idea of the singularity.

We don’t know the nature and tendencies associated with “unimaginably powerful entities,” so you are simply assuming what you wish to prove, i.e., begging the question.

I said based on ecosystems, wherein we always see multiplicity and competition.

I can just as easily say, “Based on what do you conjecture ‘it’ instead of ‘they’?”

It is! It seems like a technology ahead of its time.

But let’s extrapolate. Why couldn’t this thing generate complete feature films in the next few years? That would more or less put the entire movie business out of business, right? (Actually, the movie business seems to be doing a pretty good job of putting itself out of business on its own right now. It seems to have completely run out of ideas…)

I’m not going to say it’s impossible. But in my experience, these tools start to seem very samey, very quickly. To me, Midjourney art went from fascinating to rather boring in a matter of days. Same thing with Gemini. Of course, they are already a source of cheap graphic design and writing for cheap companies. And now we will have cheap video clips for cheap websites, lol.

Nope. The more powerful an entity is, the more it can change, and the more it will change if pursuing some purpose. Large purposeful changes that disregard human interests are far more likely to be harmful to humans. Our survival and flourishing have highly specific requirements, and almost all big changes that don’t take our interests into account are going to be very bad. Just as things have been on a much smaller scale of power when humanity has pursued its own interests while disregarding other species:

(Cows have done pretty well, I guess.)

Well, it’s more based on an entire field of research. Try these popular accounts. I don’t necessarily accept everything in these books, but these are people who are certainly way smarter than me, and you definitely need to understand what they are talking about before challenging it.

First, I want to point out the irony of claiming that ‘late stage capitalism’ is both penny-scraping and innovation-scarce while talking about an amazing AI created by a private company using billions of dollars of compute.

And if history is any guide, AI will indeed be disruptive, but only because it increases productivity. That should raise wages overall. We have two centuries of technology displacing jobs, but each time, after everything sorted out, we were richer and better off than before.

Yeah, it sucks for the graphic designer. But it’s a great boon for the small business person who doesn’t have to pay a designer. AI enables individuals to create their own companies if they want to, without having to raise money for accountants, lawyers, consultants, designers, etc. That could cause an explosion of innovation.

As usual, new tech scares people because the displaced jobs are very visible, while the new jobs created because of it are diffused throughout the economy. The same is true for free trade. This time the yelling will be worse because the people being displaced aren’t farm laborers or typesetters, but white-collar workers with a megaphone.

You might want to start your calendar at 2019, when the new LLM-based chatbots and image generators started showing up. None of that tech is in your phone yet, so you haven’t noticed the improvements. Yes, AI was fairly stagnant for a long time. Some of that was just the choice of the wrong tech (expert systems, etc.), but until recently we just didn’t have the combination of computing resources and rich data available for training.

But to say that AI development is still not moving forward is crazy. There are major new developments in AI almost every freaking day - so much that it’s getting really hard to keep up with the state of the art. Remember, we only learned of ChatGPT a little over a year ago. The progress that has been made since then is absolutely stunning. And it’s not slowing down yet. Go look at what the best image generators were capable of just a year ago, and compare to now.

AI might never get to ‘superintelligence’, or it might get there next month. We still don’t know how far we can get with additional parameters and additional training. In fact, it’s not even clear whether we need as much as we already have, or whether something else is missing. There are now open-source models with 13 billion parameters doing better than ChatGPT 3.5 with 180 billion.

The thing is, most of these abilities are unpredictable and emergent, so anyone claiming to know that AI will be a huge danger to us is talking out their ass. They have no idea what the future holds, just like the rest of us. At this point, any future risks of AI are pure speculation.

What do you base that on? OpenAI says that AGI is right around the corner. Maybe next month, maybe next year. Polls of AI experts show that many think AGI will be here no later than the end of 2025. Maybe they are just guessing too, but chatbots are already getting so close that it seems crazy to say that we ‘aren’t on a path’ to get to AGI.

Current transformers do not have a ‘will’, because they are one-pass, fully connected networks whose context memories are constantly overwritten. But do you remember Sydney? Microsoft’s Bing chatbot had a context that survived between sessions, and it certainly acted willfully.

It may be that all that’s required is to create an AI with recurrence, one that can feed its own answers back to itself to check them. Couple that with persistent memory, and I’m not sure what we’ll get.
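
For what it’s worth, here’s a bare-bones sketch of the shape of that idea. Everything in it is made up for illustration (generate() is just a stand-in, not any real model API); the only point is the loop: each answer gets fed back in on the next pass, and the final answer is stored in a memory that survives between runs.

```python
# Bare-bones sketch of "recurrence plus persistent memory." generate() is a
# placeholder, not a real API; a real system would call a model there.

import json
from pathlib import Path

MEMORY_FILE = Path("memory.json")  # hypothetical persistent store

def generate(prompt: str) -> str:
    # Stand-in for a model call.
    return f"(model output for: {prompt[:60]}...)"

def load_memory() -> list:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def save_memory(memory: list) -> None:
    MEMORY_FILE.write_text(json.dumps(memory))

def answer_with_reflection(question: str, passes: int = 2) -> str:
    memory = load_memory()
    answer = ""
    for _ in range(passes):
        # Feed persistent memory and the previous answer back in on each pass.
        prompt = f"Memory: {memory}\nPrevious answer: {answer}\nQuestion: {question}"
        answer = generate(prompt)
    memory.append(answer)  # the persistent part: the answer survives this session
    save_memory(memory)
    return answer

print(answer_with_reflection("Does this contradict anything you said before?"))
```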

Ilya Sutskever, OpenAI’s chief scientist, says it’s possible that even ChatGPT has a fleeting consciousness every time it performs an inference. It’s so bleedin’ complex that we don’t even know.

I have to assume there will be multiple AIs. If it’s so easy to do, why wouldn’t there be? But multiple AIs are probably worse than a singular AI.

Terrorists and criminals will have their own AI. If they want to zap you dead on the street or in your home, they will be able to do so. But you can get protection with a protector AI. Of course, you will really have no choice in the matter. The protector AI won’t kill you, at least we think it won’t, unless it thinks you’re a risk it has to neutralize. Then too bad for you, I guess. It’s still better than the AI the enemies and terrorists have. At least we think it will be; we have no choice, because we have no understanding of what it does. May as well bring back prayer, because that’s the kind of life you will be living, cowering away in the face of godlike AI.

What I always look for and don’t seem to find in discussions of AI is the process by which an artificial intelligence would transition from ‘following instructions’ to ‘formulating objectives’ that are not an outgrowth of human instructions. In other words, AI becoming an entity with its own goals, entirely separate from human orders.

I suppose that’s The Singularity, but the name doesn’t reveal the process by which this would happen.

The ‘AI follows instructions in a literal way that ends up harming or wiping out humans’ is just the Genie story with updated hardware. That is, if you give an order to the Genie, it may be carried out in a way you never imagined. (E.g. ‘end human suffering’ carried out via destroying all humans.)

Apologies if this is a diversion. But the thread question of when AI might either triumph over us or, alternatively, hit a wall, is, I think, related.

See my post above. I think a likely outcome is “we have to give it the power because enemies will make bad AI that will destroy us otherwise.”

Right, I think you’re correct. But this is still AI (multiple as it may be) that is following human instructions.

By what process do we get to ‘AI that has its own goals, independent of what humans have asked it to do’?