I actually posted this on the discussion of a science website. But it bears repeating.
Some day, computers are going to become more intelligent than people. It is just a matter of time, an ‘when’, not ‘if’. I think they even have a name for that. It is called the ‘singularity’ if I am not mistaken.
Most people welcome this time. As do I. Computers can do many of the mundane things humans do now. And they don’t even need a break.
But I just have one lingering question. Is there not an inherent danger here as well?
As I said, on the discussion page, all intelligence, but no emotion. That is the very definition of a sociopath, isn’t it? Unless there is an objective and non-emotional component to human morality that I am not aware of.
I don’t know for sure what if anything the computers would do, if they got an independent and superhuman intelligence. They still would be programmed and made by man. But you never know.
There’s no reason to assume that intelligent computers would necessarily be emotionless, or for that matter, rational or sensible.
I’d say the biggest risk for us is that, once they can do most of our work, we’re out of a job - in theory, this means we have lots of leisure time to do what we want, while our machine slaves do the work…
…Except, someone has to pay for those machines to be made - businesses need to buy robots - but they don’t need employees - those (ex)employees don’t have income to feed into the revenue of businesses - and those businesses aren’t just going to give their products away for free - the economy collapses.
The transition to a post-scarcity society based on robot slaves, looks like it has to traverse a pretty deep valley, to me.
So within rationalist circles, there are, to my understanding, two common takes:
“Are you fucking crazy? That’s a terrible fucking idea! This is one of the biggest threats facing humanity; if the rapid intelligence expansion hypothesis is correct then a poorly-designed AI will be the end of life on earth!” (Of course, this isn’t going to stop those who want to do it anyways, because they reject the rapid intelligence expansion hypothesis, or think they have it well-designed, or just follow a profit/power motive without thinking about those things.)
And
2. “This is a huge risk, but barring friendly AI, things are going to get a whole lot worse for us in a lot of very important ways, and we need the power of friendly AI to ensure things don’t go completely to shit.” (See section 7/8 for the short version.)
Keep in mind, neither viewpoint is exactly bullish on the likelihood of superintelligent AI ending well for us, mainly because the problem of aligning AI goals is a very difficult one. “Maximize human happiness”? Okay, even assuming you can teach a computer what each of those words mean (good luck doing that in any satisfactory sense), you just created a machine that injects us with opioids before wireheading us. The alignment problem is really freakin’ hard, and you can’t just wish it away by handing your GAI a stack of Aasimov novels.
Any AI that I would want to interact with should have some emotions – humor and empathy for example. An AI psychoanalyst would need emotions, maybe an AI doctor or nurse. An AI driver, not so much.
There is an inherent danger, and that’s why most AI textbooks have a chapter on keeping AI from being hostile to human needs. Although most of the discussion these days isn’t on an AI have malicious intent, but rather on an AI have accidental hostility to positive human outcomes. We see, this for example, in the financial sector, where AIs do what they do and cause bad outcomes. Classically, it would be a case where an AI food factory programmed to maximize profit might cut production such that some people starve (since it wasn’t told to consider this) in order to maximize profit.
As an aside, I just got back from a conference in Japan. The work they’re doing there on emotive robots is crazy. So, yes, future robots that interact with humans are very likely to have emotions of a kind.
Computers are tools. Humans can make stronger, faster or more intelligent tools than they are and use them at will. No one should worry about tools attacking or controlling people due to their superior strength, speed or intelligence because inanimate objects lack will.
This is always the real issue - damn you, Mangetout for beating me to it! - in that we’re already seeing massive worker displacement due to automation and that automation is expanding into unexpected tasks.
But a capitalist society seemingly requires ambition and striving to prevent significant survival challenges. In short, people work because they don’t want to die. Automation makes that harder. Increasingly intelligent automation will make it increasingly harder until it becomes impossible for a human to provide for themselves and their family.
So the real challenge to AI - provided such a truly achievable - is developing a post-capitalist world that allows for people to not feel survival threatened without contributing economically. Solve that problem, win a Noble in Economics.
At a minimum, the one thing we do know is the current system is unsustainable. We should maybe get into looking at that.
Virtually anything we do involves some level of risk. Even when you manufacture a baseball bat, there is the slight risk that someone like Al Capone will crush the skulls of a couple of people he feels betrayed him.
I always loved the theme of, “The Day the Earth Stood Still”, because the Galactic Civilization didn’t trust THEMSELVES to make the logical and unselfish decisions necessary to maintain interstellar peace, so they entrusted that duty to a race of robots who handled the job.
Personally, I feel that humans really aren’t much different than they were 2000 years ago. Looking at the present geopolitical state of the planet only enforces my belief. It’s very possible that our only chance for survival and advancement lies in super computers that handle the decision making.
I don’t think robots/AI need to even be sentient to be potentially extremely destructive so I’m not that worried about it. If it happens, it happens, but obviously as many safeguards need to be put in place as possible.
So, the problem I see here is that the idea of “intelligent” computers is often mistook for some sort of, well, Computer People, with personalities and hopes and dreams of their own.
But what we’re actually talking about is pieces of coding that learn to recognize certain patterns and modify their behavior to suit new information. As in, being able to drive a car.
And frankly, I don’t think it will ever go beyond that stage, programs that are useful for a specific function. Because there’d be no point to. I’m afraid that the biggest danger for humans in the future will still be… other humans.
And the genetically modified half-rat, half-shark creatures, of course.
We do not understand intellence and psychology at a level that will allow a machine that can match even an average human, or a chimpanzee for that matter.
A famous chessplayer is defeated by a highly advanced AI system.
Dejected, the chess master drives home and plays with the kids and dog, then watches the news while helping the spouses make dinner.
Later she goes to bed, reads her Twitter feed, makes love and goes to sleep.
The computer gets dismantled and shipped back to the lab.
Expert systems, yep. A machine that can “think” better than the humans who designed, manufactured and programmed it? Not likely in the foreseeable future.
It is indeed the singularity: specifically, the time when computers become better at designing AI. Because then the next generation can design its own better successor, which will improve on itself even more, etc.
Is it okay, in your opinion, to worry about a tool attacking me if some human — on purpose, you understand — consciously programs it to do so?
Is it okay, in your opinion, to worry about a tool attacking me if some human — not, y’know, on purpose, but because he sometimes words things badly — programs it to do so: setting it a task without realizing he just made an oopsie with a thing that doesn’t think like a human when acting on its programming?
The key point isn’t when our inventions become more intelligent than us. The key point is when the combination of humans and our inventions becomes more intelligent than an unaugmented human. And we passed that point millennia ago. We’ve passed through the so-called “Singularity” hundreds of times already, and survived every time.
I watched Michio Kaku talk about this. He thought that AI needed to always be not “self-aware”. At the point where AI becomes self-aware, then it becomes much more dangerous.
An example he gave is monkeys. They are self-aware. They know they’re not human. And if they were smarter, they could potentially be a threat. If they started getting murderous feelings toward humans, AI could be problematic.
You say this based on what? Are you saying it’s impossible that an AI will have those things, or simulations so good that it’s indistinguishable from those things? Is there something inherent in our brains that give us the capability to have those things, where it’s impossible with silicon?
I say there is nothing inherently different between brains and computers that would make it impossible for a computer to be fully intelligent, fully sentient, etc. We’re nowhere near that state today, but I’m saying there’s nothing in principle that says we can’t get there. Do you think otherwise? Can you explain your reasoning?