Mine does every day, when it claims to have a clear Internet connection.
Lying is a solution to a simple math problem. A computer has in its memory a particular “world state”. The computer has a goal of changing that world state to a more favorable one. A lie is just one candidate solution that the computer may choose as the best solution to the problem.
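As a toy illustration of that framing (the states, payoffs, and the naive listener below are all invented for this sketch, not taken from any real system): an agent that scores candidate utterances, true or false, purely by how favorable the resulting belief state is will happily pick the lie.

```python
# Toy sketch: lying as plain utility maximization over world states.
# Every name and payoff here is hypothetical, purely for illustration.

def predicted_belief(utterance):
    """Assume a naive listener who believes whatever is said."""
    return utterance

def utility(believed_state):
    """Hypothetical goal: the agent wants the listener to believe
    the connection is fine, whatever the truth is."""
    return 1.0 if believed_state == "connection ok" else 0.0

actual_state = "connection down"
candidates = ["connection down", "connection ok"]  # the truth vs. a lie

best_utterance = max(candidates, key=lambda u: utility(predicted_belief(u)))
print(best_utterance)  # -> "connection ok": the lie maximizes utility
```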
Various experiments have resulted in robots that can lie.
The elementary logic error Penrose made had to do with Goedel’s incompleteness theorem. It’s too big a blunder to take any of Penrose’s arguments of this sort seriously, yet explaining it requires context which could turn this thread into a confusing quagmire. I’ll write it out, but it’s really worth no one’s time.
One can prove, in a certain sense, that to any consistent formal system T one can attach a statement G(T) such that T cannot prove G(T) but G(T) is true. Penrose took this to mean there was something humans can do which computers cannot: prove the truth of G(T), where T is whatever computer program one is attempting to beat. The trouble is, the only reason to suppose a human is able to prove G(T) is the proof just sketched, which depends on the assumption that T is consistent. But T itself is just as able to prove G(T) granted that assumption; the whole difficulty is that T cannot (if consistent) prove its own consistency. Yet there’s no reason to suppose humans are automatically able to prove the consistency of arbitrary consistent formal systems either. So nothing was demonstrated which sets humans apart from computers.
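For reference, the standard textbook shape of those three facts, in the usual notation, where Prov_T is T’s provability predicate and Con(T) is the arithmetized claim that T is consistent:

```latex
% The Goedel sentence and the two facts used above, for a recursively
% enumerable theory T extending basic arithmetic:
\begin{align*}
  &G(T) \leftrightarrow \lnot\,\mathrm{Prov}_T\!\left(\ulcorner G(T) \urcorner\right)
     && \text{(diagonal construction)}\\
  &T \vdash \mathrm{Con}(T) \rightarrow G(T)
     && \text{(formalized first incompleteness theorem)}\\
  &T \nvdash \mathrm{Con}(T) \ \text{if $T$ is consistent}
     && \text{(second incompleteness theorem)}
\end{align*}
```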
Indeed, the problem is rather worse than that: nothing in the argument that T cannot consistently prove G(T) fundamentally depends on T being a computer program. No prover X whatsoever, whether machine, human, or God, can prove G(X) without proving something manifestly false. [G(X) is basically the statement “X does not prove this statement”; if X were to prove G(X), G(X) would be manifestly false, and thus if X is to be consistent in the sense of avoiding proving manifestly false things, it must refrain from proving G(X), making G(X) a true statement that X avoids proving.]
There’s nothing special about computers here. Part of the mathematics in which some people are interested is noting that certain statements about computers (including these Goedelian statements) are equivalent to certain statements about arithmetic, so that if one were only interested in being able to answer such questions about arithmetic, computers could not generally succeed but other entities possibly could. But in just the same way, if one were only interested in being able to answer certain sorts of questions about the behavior of human question-answerers, humans could not generally succeed but other entities possibly could (consider how Bob cannot correctly answer “Will Bob ever say ‘No’ in response to this question?”, but anyone else is free to tackle it). Nothing in any of these phenomena is essential to computers.
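The Bob example mechanizes directly. A minimal sketch (any fixed answering procedure runs into the same wall, whatever strategy it uses):

```python
# Sketch: no fixed answerer can correctly answer this question about
# its own behavior, while any outside observer answers it easily.

def bob(question):
    """Stand-in for any deterministic answering procedure."""
    return "No"  # any fixed choice fails symmetrically

question = "Will bob ever say 'No' in response to this question?"
bobs_answer = bob(question)

# If bob says "No", then bob did say "No", so the truth was "Yes".
# If bob says "Yes", he never says "No" to it, so the truth was "No".
truth = "Yes" if bobs_answer == "No" else "No"
print("bob said:", bobs_answer)   # No
print("the truth:", truth)        # Yes -- bob is wrong either way
```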
Pedantry: Replace “Bob cannot correctly answer” with “Bob cannot correctly give a single Yes or No answer to”, or some such thing.
IIRC, Penrose didn’t claim that people could “prove” (I presume, in the mathematical formal sense) anything that computers couldn’t. It was that humans can simply recognize the truth of some statements despite the fact that they are not formally provable within a given system. That humans have insights that transcend formal logical systems. ETA: Penrose’s much-criticized speculation that quantum physics is involved is based on the apparent human ability to “arrive” at the truth, despite not having a formal path to get there.
I am using the word “prove” in a very liberal way. The rules governing this process, I don’t care about. Whatever it is that causes you to “recognize” some things as true, call that a (human) proof system.
But computers can and do do that sort of thing, too. When I type a string into Google, it can guess what sort of information I’m looking for. It isn’t 100% certain what I’m looking for, and its guesses aren’t perfect, but it can still usually guess correctly.
One could similarly produce a computer program which made good guesses, unsupported by formal proofs, as to many other things.
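For instance, a minimal sketch (nothing like Google’s actual machinery; the word list is made up): the standard library’s fuzzy matcher making an unproven but usually correct guess.

```python
import difflib

# A tiny guesser: no proof the guess is right, just a best bet by
# string similarity. The vocabulary here is made up for illustration.
vocabulary = ["weather", "whether", "feather", "leather"]

def guess(query):
    matches = difflib.get_close_matches(query, vocabulary, n=1)
    return matches[0] if matches else None

print(guess("lether"))  # -> "leather": a guess, not a theorem
```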
I’m going to venture “no” based on the fact that AI, as we think of it, does not have a body and hormonal imperatives and emotions.
Here is a list of the number of neurons in various animals: List of animals by number of neurons - Wikipedia
A Turing test seems to me to be nothing more than a convincing illusion. The Second Stone test requires four things: (1) a rigorous Turing test that can convince the person on the other side that someone as smart as Ed Witten or smarter is answering the questions; (2) it can come up with answers to problems, and with inventions, that interest it; (3) conversations with it can offer guidance to the mildly neurotic that is helpful and supportive; (4) it can design, and pay humans to build, its successors.
It will need about 1000 exaflops, in my estimation.
And number 5 would be that it can write comedies so entertaining that humans will, in exchange for enjoying thousands of such comedies, willingly work for a computer-run employer.
It’s manifestly the case that we can create things which have human intelligence; people do it every day. The question is, can we create things with human intelligence which are also computers? To which I say: what defines whether something is a computer?
While it’s likely that randomness is a key ingredient for AI to work, I doubt that we will need to simulate things at the quantum level in order to provide it.
Randomness is necessary for trialing different pathways to see which work best, but a single random number per step, large enough to select among all connected nodes, should be sufficient to model that behavior. Likely, even if a significant percentage of nodes shared that same random value, it would still introduce sufficient noise into the goings-on for the structure to learn.
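A minimal sketch of that idea, with a made-up network: one random draw per step, reused by every node, still spreads the choices around because each node has a different fan-out.

```python
import random

# One shared random value per step, reused across all nodes. Because
# each node has a different number of connections, the same draw maps
# to different picks, which still injects usable noise.
network = {
    "a": ["b", "c"],
    "b": ["a", "c", "d"],
    "c": ["a", "b", "d", "e"],
    "d": ["e"],
    "e": ["a", "d"],
}

for step in range(3):
    shared = random.random()  # the single random number for this step
    for node, links in network.items():
        pick = links[int(shared * len(links))]
        print(f"step {step}: {node} -> {pick}")
```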
This may actually be the mechanism by which we first create AI: Use a physics simulator to model a fertilized egg of some simple creature, and an environment which that creature could live in, and then let “nature” construct the creature and its brain, along with all the preloaded “instinct” and whatnot generated from DNA.
Once we have a functional AI, however it may have been created, we can take it apart and analyze it to our hearts’ content. That provides us with a lot of good information on how to create the core network on which other things can build - if not outright taking the virtual brain and simply slapping it into our own neural networks to build off of.
Well, maybe. Considering again the example you are responding to, we already have functional intelligence, which we find extraordinarily difficult to take apart and analyze (there has been some progress of this sort, of course, but no more yet than has gotten us where we are already…).
In the real world, we have to deal with that whole “no two objects may exist in the same place at the same time” issue. There is no pause and replay. We can’t make an infinite number of copies of the brain, which researchers can then go in, pause time, insert a molecule, hit play, rewind, try a different molecule, play, etc. With a simulation, there are no such restrictions.
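In software that pause-and-replay freedom is trivial. A toy sketch, where the tiny dictionary stands in for a vastly larger brain model:

```python
import copy

# Snapshot the full simulation state, perturb it, replay, rewind,
# try something else -- none of which physical wetware allows.
state = {"molecules": ["A", "B"], "t": 0}

def step(s):
    s["t"] += 1  # stand-in for one tick of simulated physics

checkpoint = copy.deepcopy(state)   # pause time

step(state)
state["molecules"].append("X")      # insert a molecule, hit play
print("branch 1:", state)

state = copy.deepcopy(checkpoint)   # rewind
step(state)
state["molecules"].append("Y")      # try a different molecule instead
print("branch 2:", state)
```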
As the joke goes:
A mechanic was removing a cylinder-head from the motor of a Dodge SRT-4 when he spotted a well-known cardiologist in his shop.
The cardiologist was there waiting for the service manager to come take a look at his car when the mechanic shouted across the garage “Hey Doc, want to take a look at this?”
The cardiologist, a bit surprised, walked over to where the mechanic was working on the SRT. The mechanic straightened up, wiped his hands on a rag and asked,
"So Doc, look at this engine. I open its heart, take the valves out, repair any damage, and then put them back in, and when I finish, it works just like new.
So how can I make 39,675 a year, a pretty small salary, and you get the really big bucks, $1,695,759, when you and I are doing basically the same work?"
The cardiologist paused, smiled and leaned over, then whispered to the mechanic…
"Try doing it with the engine running."
Your conditions 2, 3, and 4 are all subsets of condition 1. The Turing test requires that the subject be indistinguishable from human via all forms of textual communication, and all of those others can be done via textual communication, so they’re all potentially part of a Turing test. But why require that it be as smart as Ed Witten? Most humans can’t meet that test, either.
Francis Vaughan’s answer is the most accurate here so far.
What people fail to understand is how intensely complex and dense the brain is. I remember reading an article a couple of years back where a neuroscientist discovered that each neuron has these little points that can store different chemicals, or something like that. It turned out that each neuron has something like a thousand of these points: a thousand data-storage points, each able to take I-don’t-remember-how-many different states. Anyway, in the end, it turned out this meant that a single human brain contains more switches than all the computer electronics in the world combined!
On that basic comparison of scale, there is no real comparison, so to speak.
But more fundamentally, I never bought the whole you-can-use-a-bunch-of-transistors-to-mimic-neurons argument. One neuron takes many inputs but has a single output, and whether it fires depends on the combination of what comes in. That seems wholly different to me, and not just in density. I’m betting there are architectures you could pull off with that which you just can’t with transistors’ on-off schema. From what I understand this is node math, and I’m betting that node math has “proven” that you can use transistors to mock up this system, which is why comp scientists keep saying you can do this. I’m also betting that proof is wrong somehow.
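For context, what computer scientists have in mind when they say a neuron’s behavior can be mocked up is a model like the standard leaky integrate-and-fire unit, sketched below (a toy with made-up weights and inputs, and no claim of biological fidelity): many weighted inputs are integrated, and an all-or-nothing output fires past a threshold, all in ordinary arithmetic.

```python
# Standard toy neuron model (leaky integrate-and-fire): many weighted
# inputs, one all-or-nothing output. Weights and inputs are made up.

def lif_step(v, inputs, weights, leak=0.9, threshold=1.0):
    """One time step: leak the potential, add weighted inputs,
    fire (and reset) if the threshold is crossed."""
    v = v * leak + sum(w * x for w, x in zip(weights, inputs))
    if v >= threshold:
        return 0.0, 1  # spike, then reset
    return v, 0

v, weights = 0.0, [0.3, 0.5, 0.4]
for t, inputs in enumerate([(1, 0, 0), (1, 1, 0), (0, 1, 1)]):
    v, spike = lif_step(v, inputs, weights)
    print(f"t={t}: potential={v:.2f}, spike={spike}")
```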
The brain runs differently. It’s like billions of tiny processors, each one running at a few hundred hertz at most, but running simultaneously and talking to each other. This is completely different from the way a computer works, with one central processor that runs really fast. Also, the brain doesn’t depend on never making a mistake the way a computer does. There are misfires all the time, but the brain deals with it.

Again, this is from articles I’ve read, and again it’s another type of study/science - networks, I think. Anyway, the brain can deal with tons of “noise” - you can hit a person in the head, and the whole thing gets disrupted, but then recovers. This is crazily different from a computer, which simply depends on almost perfect firing of its switches. BTW, these misfires do happen in computers, but they’re like one in a billion; in one case, some guy accidentally got something like $99,999,999 in his bank account when he went to an ATM. The company investigated, and it turned out to be one of these extremely rare misfires.
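The noise-tolerance part, at least, is easy to demonstrate: redundancy plus majority voting lets unreliable units compute reliably. A toy sketch with a made-up misfire rate:

```python
import random

# 101 unreliable "units", each misfiring 5% of the time, still decode
# the signal almost perfectly by majority vote -- unlike one switch
# that has to be nearly perfect on its own.

def noisy_unit(x, flip_prob=0.05):
    return x if random.random() > flip_prob else 1 - x

signal = 1
votes = [noisy_unit(signal) for _ in range(101)]
decoded = 1 if sum(votes) > 50 else 0
print(decoded)  # almost always 1, despite ~5 expected misfires
```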
Again, let me reiterate that I don’t fully understand what these things mean; it’s stuff I read (I never studied the math or science that deals with “noise”).
The OP and anyone else who’s intrigued by this issue, might find it interesting to read about ‘The Chinese Room’ thought experiment. (The first paragraphs of the Wikipedia article summarize the central idea at least as well as I might, so just look there if you want to know more about it).
ETA: To be clear, the Chinese Room ‘experiment’ emphasizes the difference between intelligence and understanding, or between intelligence and consciousness.
I’ve read a couple of very good articles on the issue, and two specifically come to mind. The only problem is that I can’t find them anymore; it was years ago, so all I can give is an executive summary of arguments from scientific fields I don’t claim to understand.
Point (1) is that intelligence, as we understand it, may be analog, and digital systems might have a real problem imitating it. Simply put, the brain is far more flexible and intricate than we might even be capable of recording, let alone duplicating. Note that this doesn’t necessarily mean that we couldn’t create artificial intelligences, but they might also be completely unlike anything we understand. We have billions of neurons that are constantly pulsing in patterns, forming connections with each other. That’s… not a trivial problem, which leads me to the second point:
Point (2) is that, even if AI is possible in principle, the techniques and design assumptions that we use in our existing technology don’t permit it. Computers as they exist now couldn’t even begin to execute an intellect even if they somehow had the horsepower to do so. They can only carry out preprogrammed responses, and our attempts to make them learn have been feeble, confined within trivial bounds, and applicable only to preprogrammed tasks.
I hope I wasn’t giving that impression. I was attempting to state the current state of knowledge. Right now we have no objective clue. Personally I think it will come; I don’t adhere to the point of view that strong AI won’t eventually get there. But as you describe, the only way we know of to get a functioning simulation is by essentially replicating the micro-structures blindly. We still won’t know how it works in any useful sense - even if we do create a full 3D scan. That understanding is likely to take a heck of a lot of effort. I suspect we will have the technology to create the hardware much, much sooner than we will have built an understanding of the “software” - that is, the interconnections and how they work.
I was lucky enough to hear Roger Penrose talk over an extended set of lectures about his thoughts. At the time he had published The Emperor’s New Mind and was just beginning to work out his ideas on microtubules. In the end, the core problem he was wrestling with was determinacy. He hated the idea that strong AI would lead to disproving free will, so he was looking for non-deterministic processes. He is also what might be termed a “hidden variable” physicist: he doesn’t like the idea of a purely stochastic random process in quantum physics. Although some try to use that as a mechanism for inserting free will into the machine, it isn’t all that satisfactory, so he was trying to find other ways. He was positing undecidable but deterministic processes in quantum physics as a way of providing the probabilistic component, and he cited his own discovery of Penrose tiling as an example of the sort of thing that might be used.

Roger is a seriously smart and really nice guy. But like anyone working outside of his core expertise, he was making really odd mistakes. When he toured the world giving seminars, he tended to rub the AI guys up the wrong way, as he essentially stood up, told them what it was that they did, and told them that they were wrong. He wasn’t correct about what it was they did, and so things went downhill rapidly. At my uni the professor of philosophy was also an ex-professional physicist. Roger was out of his depth there.
I disagree. Tests 2, 3 and 4 have nothing to do with not knowing who or what is answering. Passing test 2 would pretty much be a dead giveaway that it isn’t a human. Test 3 would be an improvement on human counselors, in that the advice would be free of the transference problems of humans and have the benefit of being as consoling as possible while minimizing the risk of damage, basically requiring real empathy. Test 4, reproduction, isn’t just a communication test; it means designing, and convincing humans to build, the next generation.
The Turing test, in my worthless opinion, is at best an entertaining illusion. If these things are going to replace us, as Stephen Hawking and Elon Musk fear (or enslave us), I’d like something vastly superior and useful. We don’t need a simulacrum of a conversation; we need progress and problems solved, including problems that will never come up in a conversation with even the best-educated humans.
I find it funny that people get hung up on this. Just because the atomistic rules/mechanisms a system runs on are deterministic doesn’t mean the system as a whole is predictable. Three-body problem, anyone? Complexity?
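The classic two-line demonstration of this is the logistic map: a fully deterministic rule whose long-run behavior cannot be predicted from any finitely precise starting point.

```python
# Deterministic micro-rule, unpredictable macro-behavior: two starting
# points differing by 1e-10 under x -> 4x(1-x) diverge completely.

r = 4.0
x, y = 0.4, 0.4 + 1e-10
for step in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step}: x={x:.6f}, y={y:.6f}, gap={abs(x - y):.2e}")
```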