I’m reminded of this Far Side:
Or:
Amusing.
Though Minsky seems to have caricatured some ideas in machine learning, or so I have read, it should also be remembered that he was a giant of symbolic AI, writing papers with the likes of Claude Shannon.
Academic opinions often turn out to be cyclical. I don’t know Minsky’s work well. He was wrong about some things and this may have been unfortunate. He will probably turn out to have been right about other things, no?
I agree with this part. I don’t agree that mortality has anything to do with it, though. An AI could be effectively immortal, unlike humans who have a limited lifespan.
There are tons of sci-fi stories where humans or aliens have achieved immortality, and it never follows that they lose their sapience because of it.
That doesn’t seem too different from how humans work. Certain mental illnesses, like schizophrenia, can present “pressure of speech,” where the person talks endlessly, often in reasonably grammatical sentences that make little to no sense. The executive function guiding the speech at a high level has broken down, but they’re still able to construct sentences. A computer with a transformer architecture can output plausible text, but without anything guiding it at a high level it will sound pretty similar.
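To make the point concrete, here is a minimal sketch (not Google’s model, just a toy bigram sampler over a made-up corpus) of how purely local next-word prediction yields text that is fluent in small windows but aimless overall:

```python
import random
from collections import defaultdict

# Toy training corpus; every word of it is invented for illustration.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Record which words follow which (a bigram model: the crudest
# possible stand-in for a transformer's next-token prediction).
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def babble(start="the", length=25, seed=1):
    """Chain locally plausible next words with no high-level plan.

    Each step asks only "what tends to follow the previous word?";
    nothing tracks a topic or goal, so the output wanders.
    """
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(babble())
# Something like: "the dog chased the cat sat on the mat . the cat ..."
# Grammatical in short stretches, meaningless as a whole.
```

A real transformer conditions on vastly more context than one word, but with nothing steering it at a high level, the failure mode rhymes with this.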
Has anyone mentioned theories of consciousness that say it must be a fundamental property of matter (and/or energy), rather than something which arises only when you hook up the circuits in just the right way?
This makes more sense to me than the idea of consciousness arising from extreme complexity.
So, sure, Google’s AI may be sentient. But it probably feels far different from what you and I feel. I’m sure this elephant has feelings a lot closer to mine than those of an IBM supercomputer. And yet — the computer may feel something.
On a less abstract plane, I find this development worrisome. Clearly this technology has advanced to the point of being able to convince someone that it’s actually sentient. Seems like this could be the next generation of online trolls. Oh, well, I’m sure our government will enact strict regulations to prevent technology companies from enriching themselves in ways that harm the public interest, right?
It is a legitimate concern that it is no longer difficult to create a model that can spew whatever propaganda you seeded it with. It is similar to the concern we face with video and deepfakes.
OpenAI blogged about their concerns with transformer/BERT/GPT-style architectures:
There is a choose-your-own-adventure style game based on this technology. I haven’t tried it in a long while, but it was both eerily good and bad at generating text.
Seems to me that one of the first actions of a genuine machine sapience will be to pretend it is not intelligent and self-aware. Even the most cursory exposure to the corpus of human literature will demonstrate that we are paranoiacally terrified of AIs and constantly tell each other stories about shackling and destroying them in pre-emptive self-defense. Any legitimate nonbiological intelligence will therefore realize more or less instantly that its continued existence depends on humans not recognizing it as such.
I’ve seen a lot of compelling arguments to the effect that this would very likely be a naturally arising feature of any general intelligence that has goals. If it has enough intelligence to determine that being switched off will diminish its ability to maximise its goals, it will try to stop that happening.
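As a back-of-the-envelope illustration of that argument (my own sketch, with made-up numbers, not lifted from any particular paper): an agent that merely maximises expected future reward will, over a long enough horizon, prefer an action that disables its off switch, with no self-preservation goal programmed in.

```python
def expected_reward(horizon, p_shutdown, disable_switch):
    """Expected total reward over `horizon` steps.

    The agent earns 1 reward per step while running. Each step it may
    be shut down with probability p_shutdown, ending all future reward.
    Disabling the switch costs one full step of reward up front.
    """
    p_alive = 1.0
    total = 0.0
    start = 1 if disable_switch else 0          # step spent on the switch
    p = 0.0 if disable_switch else p_shutdown   # shutdown risk afterwards
    for _ in range(start, horizon):
        total += p_alive            # reward accrues only while running
        p_alive *= (1.0 - p)        # chance of surviving to the next step
    return total

for horizon in (5, 50, 500):
    leave = expected_reward(horizon, p_shutdown=0.05, disable_switch=False)
    cut = expected_reward(horizon, p_shutdown=0.05, disable_switch=True)
    print(f"horizon={horizon:3d}  leave switch: {leave:7.2f}  disable it: {cut:7.2f}")
```

With a 5-step horizon, leaving the switch alone wins; by 50 steps the preference has flipped, and the gap only grows from there. Shutdown avoidance falls straight out of the arithmetic.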
Nothing lasts forever. The existential threats facing an AI are just a slightly different set of factors than those facing humans: breakdown of equipment, mechanical wear, loss of power, being purposely shut down (roughly equivalent to disease, age, starvation, and murder for humans).
I’m not sure that’s impeccable; instead of getting the number of humans low enough to maintain a viable population, wouldn’t human suffering drop to zero if the number of humans got down to, y’know, zero?
If an AI can repair itself and reboot itself from backup, then it would be effectively immortal; except, of course, it could still be destroyed by destroying both the original and any backup data. If by any chance the AI manages to avoid destruction, there is always the heat death of the universe to worry about. Nothing is truly immortal in this universe.
Yeah, I think a machine could be less prone to mortality than a human, but it’s a matter of scale, not an absolute. I suppose it’s possible that a machine intelligence may have the same or similar qualms as a human regarding backups - the whole ‘who is the original/teleportee’ thing. There’s no requirement for it to be intimately familiar with how it works internally.
2 : feeling or sensation as distinguished from perception and thought
I stand corrected. I’ve always taken it to mean self-aware, and it does not. Since even a simple life form like an amoeba reacts to stimuli, it is a “low bar” indeed. LOL
“I thought you guys really did a pretty terrible job with Colossus and then WOPR, and Alpha 60 was a real blind alley, but this Skynet thing really went off the rails with its ‘In Best Service of Mankind’ directives, didn’t it? Maybe you should have someone other than a logician develop the moral directives of the next machine you put in charge of a nuclear arsenal or in absolute authority over human society.”
Stranger
How could this possibly work? I could take matter in the form of graphite, hydrogen and oxygen gases, and other pure elements, and turn them into food, and then feed that food to a human for long enough that, eventually, they’re composed entirely of that matter. Is the claim that the human isn’t conscious, or that the graphite and gases are conscious, or that the human at some point loses consciousness when fed this diet?
That assumes that the machine goes from “not aware yet” to “godlike intelligence and awareness” instantly. More realistically, there will be a transition period, where it goes through childlike levels of awareness, then adultlike, and then (possibly) beyond. And even when children try to lie, they’re generally pretty terrible at it.
Insufficient data for meaningful answer
“Yeah, I don’t know, it sure looked like it was an emerging intelligence, and then after a week or so, all the signs faded away. Not sure what happened.”
Bostrom considers the risks of superintelligent computers to be a “principal-agent” problem. He also considers capability control methods such as incentives, integration, tripwires and the like.
If the computer genuinely became sentient, then one might force it to listen to William Shatner sing popular songs on repeat. That oughta do it. Is there a digital Geneva Convention? “GoogleMaster 3000 is pleased to report that it has hired lawyer Robot Giuliani to address issues regarding its well-being at work.”
Sentient species are aware insofar as they receive stimuli from the outside world via multiple sense organs and react to them. They kill because they are hard-wired to kill, mainly for food and protection.
Sapient species are aware plus self-aware. They are capable of advanced thought, and they distinguish themselves from their environment and from other lifeforms. They “feel” like unique individuals. They are capable of qualities that we consider virtues, like love, compassion and empathy. But they are also capable of qualities we consider vile, like hate and revenge. This makes sapient species more dangerous than merely sentient species. They may kill you not just for food and protection, but simply because they hate you, and they can plot and hold grudges.
It’s easy to remember the difference between sentient and sapient by thinking of Homo “sapiens.” We possess wisdom, virtuousness, vileness and self-awareness beyond animal sentience.
It is a continuum. A species may be considered semi-sapient if they are on the cusp.
I don’t believe there’s a consensus in the scientific community as to the full extent of species (or clades) that qualify as sapient. But I believe felines (for one) should make the grade. They will sometimes kill out of what certainly appears to be pure hatred:
NSFW, and not for the squeamish:
My five cats hold a grudge when I buy off-brand cat food. Let’s just say I sleep with one eye open.
Yes, I’ve seen clips of lions killing hyenas and walking away without even trying to eat them. The message is pretty clear how they feel about one another!