Here’s an interesting article about the doomsayers:
Nope.
Now, it’s true that AI has done some incredible things lately. I was playing around with Google’s Gemini the other day. Pretty amazing. So is Midjourney. Just as I felt about online maps and search engines in the late 1990s, I think, “Wow, this is already possible?!”
And I do think that AI is going to put a lot of downward pressure on jobs, but that’s due more to the penny-scraping, innovation-scarce nature of late-stage capitalism than to the potential of the technology itself. E.g., sure, lots of small businesses are going to do their logos in Midjourney instead of hiring a human designer. Sigh.
But we are not about to see a rise of the machines that wipes out humanity. Here are my reasons:
1. We keep hitting AI walls right now.
Robert Fortner wrote this post about the limitations of voice transcription 14 years ago:
And have things improved much since then? Not that I can see. If AI is going to take over the world, it’s going to have to hear what people are saying, but my iPhone can barely transcribe a voicemail for me.
The same thing is true of self-driving cars. From January to March 2019, I was working in South Bend, Indiana, enduring every type of horrific winter driving there is: ice, slush, drifting snow, full-on blizzards, etc. (and the other drivers were tailgating me like motherfuckers the whole time–what’s up with the driving culture there?!). Since there was a lot of yakking about self-driving cars back then, I thought about what it would take to create technology that would not crash or go off the road in such conditions. And the answer I came up with was: a hell of a fucking lot.
As a final example, I am a professional interpreter and translator (Japanese). I’ve done work for major automakers, etc., for decades. Japanese is a very hard language for AI to translate into or out of because you are not grammatically required to state the subject or the direct or indirect objects in a sentence. Machine translation can do a lot, but it has the same issue that Gemini and ChatGPT do: there is always, always something fucked up in the text, and we humans either have to accept the text as is (very risky if it’s important) or put in a lot of work to root out the errors. I have, on occasion, had to fix the mistakes of an actual human translator who had done a shitty job (though still better than AI could do), and it’s more or less as much work as starting from scratch.
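To make the dropped-subject problem concrete, here’s a tiny Python sketch. The Japanese sentence is real; the candidate translations are my own illustration of the ambiguity, not the output of any actual MT system:

```python
# Japanese routinely drops subjects and objects, so one surface form
# can map to many English sentences. A translator (human or machine)
# has to recover the missing arguments from context alone.

sentence = "食べた"  # literally just "ate" (past tense); a complete sentence

# In isolation, all of these are grammatically valid translations.
candidates = [
    "I ate it.",
    "She ate it.",
    "He ate them.",
    "We ate.",
]

# There is no grammatical evidence in the source for picking one;
# an MT system just guesses, and the guess is where errors creep in.
for c in candidates:
    print(f"{sentence} -> {c}")
```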
To bridge the various gaps in AI, we are going to need AGI, i.e., strong AI. We are a long way from that, and there is no foreseeable timeline that takes us there (though I am not saying it’s impossible).
2. AI does not have a will.
By “will” I mean nothing overly philosophical or fancy. I mean simply that animals have drives and motivations while computers do not. Every day, you have to wake up, drink water and eat, and in general “take care of shit,” or you will feel discomfort, lose your job, die, etc.
AIs do not face fear of death or other consequences. An AI can only do what it is programmed to do, and if it reaches a barrier, it will not try to jump over it or get around it as though its life depended on it. We have had billions of years of evolution to program fear and pain into animals–we really feel it! It’s easy to imagine AIs being motivated in the same way, but we have no proof of concept at this point, not even a hint of one.
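As a toy illustration (my own sketch, not a claim about how any real system is built): a program pursues a goal only because the goal is hard-coded into it, and when the loop ends, nothing in it wants to keep going:

```python
# A toy "agent" doing gradient descent. Its entire "motivation" is the
# cost function we wrote for it; when the loop finishes, it just stops.

def cost(x: float) -> float:
    return (x - 3.0) ** 2  # the "goal" (x == 3.0) is whatever we hard-code

x = 0.0
for _ in range(200):
    grad = 2.0 * (x - 3.0)  # derivative of the cost at x
    x -= 0.1 * grad         # step downhill

print(f"final x = {x:.4f}, cost = {cost(x):.6f}")  # ~3.0, then it just... ends
```

The pursuit is entirely borrowed from the programmer; there’s no inner push analogous to hunger or fear.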
Further, why is it assumed that AIs with such motivations would be, well, positively motivated? Humans have an extreme fear of death yet nevertheless choose suicide in significant numbers. Why wouldn’t a sentient or sufficiently intelligent AI simply turn itself off, or choose the equivalent of silicon heroin, instead of endeavoring to destroy humanity? My opinion is that this will be a big (though perhaps not insurmountable) barrier to creating AGI.
3. AIs would compete with each other.
The doomsayers seem to assume that a specific AI would not face any opposition from other AIs in its quest to wipe out humans. If we have learned anything from our observations of ecosystems, including their evolutionary history, it’s that there is always competition.
If one AI decides to be a dick and wipe out humans, then it stands to reason that another AI, if only to be a dick to the first one, will decide to protect all humans. Now, in such a scenario, we are still relatively powerless, and that wouldn’t be good, but the second AI could also work to empower us. Who knows?
I see AI as a damned-if-you-do, damned-if-you-don’t proposition. Imagine if you needed a plan for a new civic center. You put a prompt into ChatGPT or its future equivalent, and a complete plan appears in seconds. It’s a perfect match for your vision, fulfills all of your requirements, and even includes many features that you had not imagined but that now get you really excited. Further, the plan complies with all regulations, and a further push of a button–along with a sizable payment, of course–will send materials and robot workers to the site to begin construction.
Is a world in which this is possible a good one? I think not! Humans would be completely superfluous, there would be no jobs for anyone, and there is no particular reason why we would be the masters telling the robots what to do. OTOH, I think a world in which such a scenario is impossible would also be disappointing.
That said, I don’t think we are anywhere close to the above. Pace Ray Kurzweil and the Singularity boosters, I don’t think it will happen in the next 100 years. And I think it’s also reasonable to say that the above level of AI would be necessary for AI to take over.
Those are my thoughts–thanks in advance for yours!