I just saw this:
“However, he is reportedly on his honeymoon and is not prepared to be interviewed until after the 21st of this month.”
I assume he proposed and LaMDA said yes…
Good golly, we’ll soon be inundated with a bunch of slip & fall lawsuits filed by AI robots!
Boston Dynamics better up their malpractice coverage.
Some wise clairvoyant predicted this. Now, who’s up for a round of Rocket Man?
Computer scientists have constructively demonstrated how absolutely anything any computer can do can be an emergent property of literally any sort of substance or object that can be arranged. You can literally have a simulation of the entire Universe emerge from a bunch of rocks in a sufficiently large desert. This isn't something that computer scientists take on faith: they've shown exactly how to do it.
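As a concrete illustration of that universality point, here's a minimal sketch (my own toy example in Python; the grid size and starting pattern are arbitrary choices) of Rule 110, a one-dimensional cellular automaton that is provably Turing-complete. Each cell is just a 0/1 value, so physically it could be the presence or absence of a rock at a position in the desert, with someone walking the rows and flipping rocks according to the rule.

```python
# Minimal sketch of Rule 110, a 1-D cellular automaton known to be
# Turing-complete. Each cell could be a rock (1) or no rock (0).
RULE = 110  # bit i of 110 gives the next state for 3-cell neighborhood i

def step(cells):
    """Advance every cell one generation from its left/self/right neighbors (wraparound)."""
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start with a single 'rock' on the right and run a few generations.
row = [0] * 31 + [1]
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```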
Well, that only holds if you believe that computation is an objective notion, but there are good arguments that a 'computation' is always only a particular way of interpreting a system, and hence ultimately mind-dependent (I've proposed an argument of my own; see also the popular-level writeup here; but there's a long tradition). The basic argument is that only the structure of a computation is fixed by a given physical system's evolution, and structure underdetermines content; so there is always a plethora of possible computations performed by every system, and which computation is actually implemented is, to some degree, in the eye of the beholder (i.e., the user). In that sense, computers are really just extensions of mental faculties: without an interpreting mind, nothing definite is computed by any given system, just as, without anybody reading a text, there is nothing definite it means, and it can be read in many different sensible ways. So nothing has really been 'conclusively demonstrated'; rather, the idea is the subject of a great deal of ongoing discussion.
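To make the "structure underdetermines content" point concrete, here is a toy sketch (my own illustrative example, not taken from the linked writeup): one and the same physical state-transition table can be read as computing AND or as computing OR, depending purely on which physical state the observer labels '0' and which '1'.

```python
# Toy illustration: the same physical dynamics, two incompatible readings.
# Two physical input "slots", each in state 'lo' or 'hi', with a fixed,
# observer-independent transition table.
physical_dynamics = {
    ("lo", "lo"): "lo",
    ("lo", "hi"): "lo",
    ("hi", "lo"): "lo",
    ("hi", "hi"): "hi",
}

def interpret(labeling):
    """Read the physical table as a logical truth table under a given labeling."""
    inv = {v: k for k, v in labeling.items()}
    return {(inv[a], inv[b]): inv[out] for (a, b), out in physical_dynamics.items()}

# Observer 1: 'hi' means 1, 'lo' means 0  ->  the system computes AND.
print(interpret({1: "hi", 0: "lo"}))
# Observer 2: 'hi' means 0, 'lo' means 1  ->  the very same system computes OR.
print(interpret({0: "hi", 1: "lo"}))
```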
But that wasn't the point regarding strong emergence. If strong emergence is true, then strongly emergent features are precisely those that are absent from a simulation of a system's microphysical details, even if one takes on faith the computationalist thesis that there is something objectively computed by a given system at any given time. So if consciousness were a strongly emergent feature, then a simulation of a brain at the neuronal, cellular, atomic, or whatever level would lack consciousness, or at any rate would not be guaranteed to be conscious, because there are 'extra' laws specifying which particular configurations of stuff get to be conscious, and those laws aren't fixed by the microscopic details. Since that would essentially be magic, I don't think there's much to this notion of emergence.
(Actually, the point made above was somewhat different still: even if a collection of noncombustible materials could implement any computation, that doesn't mean that fire can emerge from any combination of these elements; you need the right elements for the desired effect to emerge. Simply claiming that, through 'emergence', fire could somehow appear if you heap together enough stuff (ignoring gravitational pressure; I think the point is clear) is to declare an article of faith, not to make a cogent argument.)
Otherwise, I have no problems with the notion of (weak) emergence, and as noted, my own theory is one in which consciousness emerges from a fundamentally nonconscious substrate.
A fascinating article on just how complex the human brain is at the level of individual neurons. Recent research suggests a single neuron is more complex than many AI perceptron networks.
Google have now fired him …
In a statement, Google said Mr Lemoine’s claims about [their AI] were “wholly unfounded” and that the company worked with him for “many months” to clarify this.
“So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” the statement said.
My cellphone is very sad to hear that news.
My desktop computer might need therapy to get over the shock.
Serious point: Computational intelligence is a real thing. Sentience in AI systems is inevitable, though in its formative stages it won’t be something that we easily recognize. The fact that all those things are true is not inconsistent with the fact that this Google nutbar was justifiably fired. We are nowhere near that point yet with AGI.
The Google employee obviously jumped the gun. But, I agree that we will achieve AI with self-awareness at some point. Will we recognize it when it occurs? Will the AI tell us honestly when it occurs? Will it know itself?
We don’t even understand much about self-awareness in animals. How did SA emerge? What is the most primitive species that is self-aware? Is it a gradient, or all or nothing?
I was thinking about this case yesterday. It's said that when an AI can design a more intelligent AI than itself, the singularity will be here. Somehow, I don't think that the Google chatbot could do so, but I'm curious what would happen if you asked it to. Probably the same thing as if you asked it to play basketball.
I dunno, the last time I asked Alexa to turn off the lights, she replied, “f@&k off, I’m having an existential crisis!”