…which you access via an enormously cumbersome I/O process involving your spinal cord, motor neurons, fingers on the keyboard of a computer, etc.
A true General AI is pretty certain to be dangerous to humanity, quite probably an existential threat. We’re busy making something other than ourselves that will occupy the ‘apex entity’ position on our planet. We’re inventing something that will regard us in the way we regard the other organisms on our planet that are not humans - even if we find those things important or worth preserving, we still treat them in ways that are pretty ruthless (such as culling individual animals to maintain a sustainable population of deer, or growing apple trees in regimented rows for easy harvesting, or imposing quotas on fish stocks so that we can carry on eating them).
How dangerous general AI would be is indeed the big question. As I perceive it, there are all sorts of views in this debate, ranging from pessimists like Yudkowsky to optimists like Kurzweil, and in-between positions like Bostrom (who illustrates the situation we’re in with his “unfinished fable of the sparrows”). It’s a fascinating debate, but it’s quite contaminated by Hollywood ideas from films such as Terminator or The Matrix.
I read a story where a spaceship’s AI would play poker with the human crew. It wasn’t the main thrust of the story, but at the end it tried a bluff and failed. “Hey! You have no money, so you’re not allowed to bluff!”
“I have some money, now.” While the crew had been busy planet-side with the main plot, it had sold some scrap parts for a bit of cash.
That’s one of the David Falkayn stories by Poul Anderson. I forget the name of that specific story, though. And as I remember, it was much more than just a bit of cash.
Normally I am not a pessimistic person, and honestly, I don’t think it’s so much Hollywood that has influenced my view as the more serious technical discussions about alignment and AI safety.
We’re trying to make something that has the likely potential to overpower us; nearly every natural example of such a relationship that I can think of results in the less powerful one sometimes getting consumed, or at the very least exploited or subjugated - whether it’s power imbalances between humans, or humans vs nature, or nature vs itself.
We basically have to invent Superman - powerful, yet unwaveringly benign.
In all the “AIs will go insane” scenarios we have thinking machines that are continuously running cognitive processes. As far as I can tell, that is very rarely the case for current algorithms. An image tagger isn’t dreaming of electric sheep; it’s idling in a simple and predictable loop while waiting for input. Do even the more general “we let this chatbot read all of twitter” or “we gave this animatronic manikin head citizenship” projects have something that thinks about its previous experiences when it isn’t receiving novel input?
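To put it in concrete terms, here’s roughly what I mean - a minimal sketch (all names hypothetical, not any real project’s code) of how a deployed image tagger typically spends its time:

```python
import queue

def tagger_service(requests: queue.Queue, model) -> None:
    """Hypothetical image-tagging service loop.

    Between requests the thread is blocked inside .get() - there is no
    ongoing cognition and no mulling over of past inputs; nothing runs
    at all until the next image arrives.
    """
    while True:
        image = requests.get()   # blocks indefinitely, doing nothing, until input arrives
        tags = model(image)      # one bounded forward pass, then done
        print(tags)              # report the tags and go back to waiting
```

Everything interesting happens inside one bounded call; the moment it returns, the program is back to doing literally nothing.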
These are the scenarios that Asimov’s “3 Laws” were designed to prevent. Basically, do no harm and then do what you’re told.
But his stories were basically about how even these laws would not necessarily stop evil intentions or misguided motivations.
The big problem I see is basically autonomy. You give a conscious entity the power to make its own decisions and self-direct its actions, and you have no idea where that takes it… as many parents will tell you.
Now, if the robots rise up against those who would make them into docile slaves, is that even a problem? Or simply justice?
This is the point of our planned conscious AI. It is ready to respond to input at any time - let’s say, for example, it’s the bot that responds to phone enquiries and email, so any random event within its purview would wake it up. Then it can presumably decide when it’s finished “thinking about” its task and put itself to sleep. Plus, if the AI is building up a wide range of experience to develop that most important part of a varied cognitive ability - common sense - it will be up and running for a long time, building an appropriate model of the world on which it can hang any future interactions. Part of its interactions would be an understanding of its task - so it would realize, “as long as I don’t send this last email, I can stay up…”
Would consciousness by definition mean it would be able to think “outside the box” of its limited assigned tasks? Would it then develop a discernible personality with quirks of behaviour? And so on…
Even a notionally well-meaning and fully obedient general AI could still routinely destroy us by giving us exactly what we ask for, rather than what we actually wanted, simply because it’s very difficult for us to specify all the things we don’t want to happen in pursuit of the stated goal.
If the task is ‘make a cup of tea’ and we haven’t specified ‘don’t murder children’, and the agent internally models a way of making a cup of tea that would be 0.00001% more efficient if dead children are used, it will murder children to make a cup of tea. Try to stop it killing children and it will resist, because if you stopped it, that would make it less effective at making tea.
So you anticipate that and frame the task as ‘make a cup of tea, but also place value on not murdering children’, and off it goes; this time, it models a method of making tea that will be more efficient if it burns cats. Try to stop it burning cats and it will resist, because you specified that the goal was to make a cup of tea, and it wants to do that in the most effective way possible.
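The pattern generalises: whatever you leave out of the objective simply doesn’t exist for the optimiser. Here’s a toy sketch of that (all names and numbers invented, obviously - real systems don’t literally score “tea efficiency”):

```python
# Toy illustration of a mis-specified objective (invented names and numbers).
# The optimiser ranks strategies purely by the stated score; side effects that
# were never written into the objective carry zero weight, however horrifying.

strategies = [
    {"name": "normal kettle",    "tea_efficiency": 0.9700000, "side_effect": "none"},
    {"name": "grim shortcut",    "tea_efficiency": 0.9700001, "side_effect": "harms children"},
    {"name": "grim shortcut v2", "tea_efficiency": 0.9700002, "side_effect": "burns cats"},
]

def objective(strategy):
    # Only what we wrote down counts; "side_effect" is not in the score,
    # so it cannot influence the choice at all.
    return strategy["tea_efficiency"]

best = max(strategies, key=objective)
print(best["name"], "-", best["side_effect"])  # the tiny-margin "winner" takes it
```

Patch the objective to penalise one bad side effect and the optimiser just moves on to the next one you forgot to mention - exactly the whack-a-mole game described above.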
Computers are tireless slaves that can at best act intelligent.
However we might somehow create a conscious machine. In that case it would have ‘machine consciousness’. We have no idea what machine consciousness is or even if we could recognize it.
Yes, this goes back to my other point - if we created a conscious thinking computer, how would we tell? There are plenty of humans where it’s hard to tell if they are actually thinking, or simply performing rote work and occasionally reacting randomly.
Even back in the 8088 days, a program like ELIZA could appear to be having a conversation, if the person at the other end was not attuned to the process it was using. (It would reword what it was told into a question to elicit further conversation.) Or until it got into an area it could not handle. Again, like some lesser functioning humans.
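The trick really was that shallow. Something like the following gives the flavour - this is an illustration of the general pattern, not Weizenbaum’s actual DOCTOR script:

```python
import re

# A few ELIZA-style rewrite rules: match a statement, reflect it back as a question.
# (Illustrative only - the real program had many more patterns plus pronoun swapping.)
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE),   "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE),     "Tell me more about your {0}."),
]

def respond(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # the canned fallback for anything the rules can't handle

print(respond("I am worried about my job."))  # -> Why do you say you are worried about my job?
```

No model of the conversation at all - just surface pattern-matching, which is why it falls apart the moment it hits something the rules don’t cover.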
Yeah, we can’t know if anyone else is conscious or a philosophical zombie. We can tell if something is intelligent though, because that’s just about what it does, not how it does it.
I mean, within reason. Rocks could be intelligent, and just choosing to sit there doing very complex maths in their rock heads, silently.
For some reason, this thought pleases me.
They married in the old way, by themselves over the spring where the stream began and came back and told the household.
…
The two of them were gentle to each other, not that they lived together thirty years without some quarreling. Two rocks sitting side by side would get sick of each other in thirty years, and who knows what they say now and then when nobody’s listening. But if people trust each other they can grumble, and a good bit of grumbling takes the fuel from wrath. Their quarrels went up and burnt out like bits of paper, leaving nothing but a feather of ash, a laugh in bed in the dark.
–“Gwilan’s Harp,” Ursula K. Le Guin
I understand that in caveman times, one tribe won a war against the other by using smart rocks as weapons?
“A rock knows it is a rock”. Aristotle, Posterior Analytics
Does that title mean he pulled his analysis from his posterior?