Artificial Intelligence and evolution

While I do agree with a lot of what you say, I have to pick a nit here. I do not think the ultimate moral of 2001 was to warn about the dangers of our tools; there is a bit of that, but it is not the ultimate moral. IMO the point was also that our tools shaped our evolution, and that while we have to stay vigilant and alert, new tools (like AI) are also there to guide our future evolution. IMHO 2001: A Space Odyssey is a mythological tale about the growing pains a high-end civilization eventually has to confront, but it is a confrontation that is not really the end of us but also a beginning.

Kubrick explained this a bit when he was once asked whether there were religious implications in his movie; Kubrick was more into Arthur C. Clarke’s idea that “Any sufficiently advanced technology is indistinguishable from magic.”:

Sure. But let me start with a brief preamble. Turing’s idea was a brilliant insight in 1950 when no such thing was remotely possible with the contemporary technology. But today we are starting to approach the stage where one imagines that “AI kits” can be developed so that anyone can throw together the kind of chatbot that Microsoft and Google are developing. So what? I maintain that passing the Turing test today is a subjective frivolity that is neither a necessary nor sufficient condition to demonstrate true AI. That is, it’s not necessary because a lot of fantastic AI is utterly unconcerned with human conversation, and it’s not sufficient as evidenced by these stupid chatbots that have no useful function and are nothing more than glorified versions of the old ELIZA program running on modern hardware.
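The “glorified ELIZA” jab is easy to make concrete. A minimal sketch of how little machinery such a chatbot needs (the rules below are hypothetical examples; Weizenbaum’s original script was larger but worked the same way, by surface pattern matching with no understanding):

```python
import re

# A minimal ELIZA-style responder: keyword patterns plus canned templates.
# There is no model of meaning anywhere -- just regex matches and substitution.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (\w+)", "Tell me more about your {0}."),
]

def respond(text: str) -> str:
    text = text.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.search(pattern, text)
        if m:
            # Echo the captured fragment back inside a canned reply.
            return template.format(*m.groups())
    return "Please go on."  # non-committal default when nothing matches

print(respond("I need a vacation"))   # Why do you need a vacation?
print(respond("Nice weather today"))  # Please go on.
```

A program like this can sustain the illusion of conversation for a few turns, which is exactly why “fools a human briefly” is such a weak bar for intelligence.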

That’s why I think my robotics analogy is pertinent. There are real robots doing great things for us in terms of knowledge acquisition in space and productivity on earth, and their value has nothing to do with the extent to which they resemble us or might be mistaken for us.

I’d prefer a “Turing test” more like this: with the vast knowledge of the financial marketplace that you can gain as a highly intelligent entity, and my own particulars and life goals, tell me how to optimize my financial assets and, preferably, get very rich. This isn’t as superficial as it sounds. It’s a very difficult problem with a potential solution set that is of great value to me. There are perhaps a few people on earth that could solve it well, but the capabilities of machine intelligence could transcend them all.

In this case I actually don’t care to distinguish whether you’re an intelligent human or an intelligent machine. Solve the problem, and solve it in a way that defies all the other experts, and I will bestow on you boundless accolades of intelligence, because usefulness is what matters.

If that seems overly superficial then consider the problem in terms of health care, and the diagnosis of a complex case, and optimization of treatments. Here again, I don’t give a flying fig if the AI wants to chat with me. I just expect it to be effective, and understanding natural language is just part of its utility.

In short, I think the difficult matter of defining true intelligence has to be cast not just in anthropomorphic behavioral terms but in goal-oriented terms. Behavioral criteria can hide behind tricks, algorithms, and clever heuristics, but end results cannot. That’s why Deep Blue and the Jeopardy Watson were genuine achievements.

Yes, it already has (and it’s called DeepMind); see:
http://boards.straightdope.com/sdmb/showpost.php?p=19319721&postcount=27

It just struck me that our problem might be a very old naming issue. We talk about Artificial Intelligence, but intelligence is not a yes/no kind of thing. We go from planaria, which I suspect would be easy to simulate, to social insects, to reptiles, to dogs and then apes. I don’t know if we could design something as smart as a dog, certainly nothing as smart as an ape, yet.
But the critical thing is that we really want to design artificial consciousness. That is what distinguishes us from most animals, at least. My old border collie used to outsmart me, but only I could think about him outsmarting me.
If we ever design a program which can examine its own thought processes, report on them, and improve them not through heuristics but at a top level, then we’d all agree we have developed AC. Until then we just create smarter and smarter programs which lack something.
What do you think?

Voyager: General agreement. True “consciousness” is the gold medal for AI.

Lots and lots of lesser goals are well worthwhile, but serious “I am” thinking will mark the true creation of an “offspring of the human race.”

(Another pathway would be genetic uplifting of dogs to true consciousness – and, by golly, there’s an argument that they are, right now. Dogs dream, and dogs can envision the consequences of their actions. Dogs have a moral sense; they know right from wrong. It’s my opinion that many dogs are truly and fully “conscious,” just as much as you or I, although we have a much richer suite of ways to express it.)

We’re still left with the dickens of a problem: how do we know when we have accomplished this? How does a conscious AI system tell us, “Yes, I am conscious”?

Maybe when it starts asking about life after death and whether it has a soul, etc…

Isn’t it vain to regard a house endowed with automation hardware and software as an intelligent home and to dismiss similar mechanisms in natural systems as gut reactions?

I don’t know if it’s “vain”, but it’s definitely wrong. An “intelligent home” is no more intelligent than a “smart TV” is actually smart. Those are just marketing euphemisms. Neither one possesses analytical problem-solving skills.

I disagree. There may be a gradual approach towards artificial intelligence, but there will be a moment when an artificial intelligence develops a sense of self, of sentience, and it will not be slow and gradual: THAT AI will reprogram itself and be dramatically different from its predecessors.

I suppose it’s a useful term to describe a home that incorporates various forms of automation. I mean, I don’t say I have a “smart heater” in my home. It just runs off of a thermostat.

I think that makes sense. Really, what we are talking about is a computer that is able to set its own goals and agendas. Anything else, IMHO, is just a sophisticated bit of pattern recognition and analytics software used to simulate intelligent behavior.

No. I’m contemplating a day when an AI that was supposed to be designing better computers decides on its own that it wants to do something else.

When you talk of AI, I assume you mean conscious AI, because anything else is just a complex mechanical procedure.

Someone on this thread called people who don’t believe in machine consciousness Luddites, but I am a scientist, and I consider people who believe in machine consciousness to be superstitious.

Anyone who thinks that science has any understanding at all of consciousness has never done any serious reading on the subject.

Physicalists (and most AI proponents are physicalists) consider consciousness as arising from a complex system of electro-chemical interactions in the non-conscious matter of the physical brain, and having no existence apart from matter. This is in fact an intuitively obvious way of thinking, because when we make physical changes to the brain, we change consciousness correspondingly. The rise of computers has also given a lot of strength to this view, because the brain can then be thought of as a complex computer.

However, it avoids the question of how it is possible for consciousness to arise from inert matter. Inherently this does not make any sense, and no mechanism has ever been proposed. Most believers in this theory seem to think that if you simply make a system more and more complex, eventually it will spontaneously become conscious (e.g. the ‘singularity’ cult).

There is exactly zero evidence for this. It has never been observed, and no intellectual framework exists to explain it. It’s nothing more than an article of faith.

In earlier centuries there was a theory of ‘spontaneous generation’. For example, it was thought that maggots spontaneously came into existence from rotting meat. Physicalism seems to be based on a very similar notion of spontaneous generation of consciousness from dead matter.


Consciousness arising from inert matter is not science; it’s a kind of secular religion. The idea that consciousness can arise spontaneously out of dead matter is just as weird, just as far-fetched, just as unscientific, and just as much an article of pure faith as believing that bread becomes the body of Christ.

Consciousness is qualitatively entirely different from mere neuro-biological interactions or information processing. To believe that inert matter can miraculously become conscious is simply not rational. The most complex computer or AI program has exactly as much consciousness as the teaspoon I use to stir my tea. See the philosophical zombie question.

By that standard, life arising from mere inert matter is simply not rational.

Perhaps we are not as conscious as we thought. Perhaps WE are just a very complicated machine that gets so close that no one can tell the difference.

But doesn’t that come up against the other half of the Turing test?

You say there are perhaps a few people on earth that could solve it well. You ask me to solve the problem in a way that defies all the other experts.

You apparently grant that I’m intelligent; you chat with me like an equal, you don’t dumb down your vocabulary to do it, you didn’t laugh off the idea of me beating you on Jeopardy or at chess – but let’s also grant, for the sake of argument, that I can’t get you to bestow boundless accolades on me, because I’m not a finance guy who can make you very rich…

…and, at that, I don’t have a medical degree: I understand natural language, and so can chat with you; but I’ve never prescribed an optimal course of treatment after playing expert diagnostician.

But I flatter myself by thinking that, if you assume that I’m human, you still grant that I’m intelligent. Again, of course, not to the point where you’d be looking to bestow “boundless accolades of intelligence” on me; and not to where you’d let me sneer at your accomplishments with a That’s Not Exactly Brain Surgery, because you don’t think I’m a brain surgeon; but just to where you figure I’m mentally at least on a par with – you? Or a grad student? Or a college dropout?

Or whatever. My point is, if I can’t convince you I’m one of the few people on earth who can solve your financial problems well, and I can’t convince you I’m a brilliant doctor, then how do you distinguish a human who interacts with you the way I do from a computer program that interacts with you the way I do?

(Or, at the risk of playing let-me-have-it, do you just throw up your hands and say “Thing is, no, you’re not intelligent, and so a computer program that can do everything you do – well, it wouldn’t be intelligent either, would it?”)

In short, what if the end results achieved by an alleged AI merely equal those of, say, a high-school graduate with an average IQ? Or maybe a college graduate with an above-average IQ – but not a financial expert with an MD, is what I’m saying? Do we say that such a human counts as intelligent, but such a computer program doesn’t?

But consciousness does not arise from inert matter. That is like when the creationists talk about “molecules to man” as if this happens in one step. Consciousness arises from intelligent, yet not conscious, organisms. And we definitely do have an example - us. Unless you think we were specially created in some way.
We don’t know the mechanism that established the ability to monitor our thought processes yet, but it might be relatively simple.
And we have our pre-conscious mind still inside of us. Have you ever solved a problem by getting the answer from your subconscious? Were you able to watch yourself solve it at a deep level? Our subconscious is as intelligent as (maybe more so, in my case) our conscious mind.

While I agree that “smart” is a marketing term, smart things learn. (Well, not smart phones so much.) You don’t have a smart heater, but you can buy a smart thermostat which regulates itself based on your usage patterns, as opposed to a programmable thermostat where you have to tell it your usage pattern, which it will follow even if no one is in the house.
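The distinction above can be sketched in a few lines. This is an illustrative toy, not any real thermostat’s firmware; the class name, the motion-sensor interface, and the exponential-average learning rule are all assumptions made up for the example:

```python
# A programmable thermostat follows a fixed schedule; a "smart" one
# adjusts its setpoint from observed occupancy patterns.

class SmartThermostat:
    def __init__(self, away_temp=15.0, home_temp=21.0):
        self.away_temp = away_temp
        self.home_temp = home_temp
        # Running estimate of occupancy probability for each hour of the day.
        self.occupancy = [0.5] * 24

    def observe(self, hour: int, occupied: bool) -> None:
        """Update the learned pattern from one motion-sensor reading."""
        alpha = 0.1  # learning rate: how quickly old habits are forgotten
        self.occupancy[hour] += alpha * (float(occupied) - self.occupancy[hour])

    def setpoint(self, hour: int) -> float:
        """Heat only at hours when the house is usually occupied."""
        return self.home_temp if self.occupancy[hour] > 0.5 else self.away_temp

t = SmartThermostat()
for _ in range(30):        # a month of evenings spent at home...
    t.observe(19, True)
    t.observe(3, False)    # ...and an empty house at 3 a.m.
print(t.setpoint(19), t.setpoint(3))  # 21.0 15.0
```

The point of the sketch: nothing here is told the schedule; the setpoints follow from observed behavior, which is all the “smart” label really promises.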

Setting goals and agendas is just one particular narrow aspect of human behavior that we happen to associate with intelligence. There are an infinite number of other manifestations.

Tell me this. What is the difference between intelligent behavior and what you refer to as “simulating intelligent behavior”? Think about it. In a specific problem domain, what can be identified as the fundamental difference, if both the “intelligence” and the “simulator” solve the same problem?

The philosopher John Searle had the same idea, but he was wrong. You might want to familiarize yourself with Searle’s Chinese Room argument, which is supposed to be an argument against strong AI but isn’t.

This is utter nonsense, and I hate to say it, but quite frankly so is just about everything else you’ve written, which not only demonstrates a lack of understanding of AI but seems to embrace some kind of New Age spiritualism or creationist claptrap in regard to the human mind.

It’s the other way around. If consciousness is not an emergent property of a sufficiently developed intelligence, then it must be magic or the direct mystical handiwork of an Almighty Creator. It’s one or the other. Pick one, science or magic. You’ve decided to go with “magic”. I do not, and science does not.

It’s more of a philosophical than a scientific question because it lacks a tangible definition. From a scientific perspective, however, we do have sound and credible theories of human cognition, and among the most credible in contemporary cognitive science are the computational theories of mind.

Zero evidence? Every sentient creature on earth is evidence of it. Because there is no other possible explanation, other than appealing to woo and New Age spiritualism.

What do you imagine goes on in the brain other than neurological activity and computational processes? Intelligence is a continuum, and there are stages of intelligence that do indeed lead to qualitatively different phenomena, but it’s not because their mechanisms are qualitatively different. That’s another fundamental misunderstanding. It’s because sufficiently large changes in the magnitude of a system’s complexity yield new emergent properties. A small, simple logic chip will enable a calculator to add and multiply numbers. A few million such chips can make decisions that guide an air traffic control system or can beat the reigning Jeopardy champion on national television.

So your conclusion is that either human consciousness is illusory or there is a supernatural or divine influence.

What mechanism allows a conscious brain to override a deterministic universe?

I didn’t quote the entirety of your post #53 because it’s long, but I’m responding to the essence of the whole thing.

You raise some good questions, and I think the salient point here is that when I objected to the Turing test as neither a necessary nor a sufficient condition to establish an AI, I was thinking of it in the terms in which it’s commonly thought of and used, namely as a test of the ability to carry on the kinds of conversations that are typically called “small talk”, like you’d have with your neighbor over the fence. And as I said, these stupid chatbots are not useful, nor for the most part do they represent true AI research, though everything eventually might have some spinoff benefits.

But when you raise the question of conversations like this, you’ve taken the question into a different realm. If a conversation becomes merely a channel of communication for what is really an analytical exercise – a debate, the presentation and solution of problems – and requires clear evidence of reasoning, then it is indeed a valid AI test. Indeed, if such a conversation went on long enough, one might plausibly be able to guesstimate the IQ of the other end, whether man or machine.

My issue was with the very simple-minded interpretation of the Turing test as just being able to carry on a general conversation, which seems to have become its popular incarnation. But the critical factor is that it’s the content that matters, and the demonstration of reasoning skills, in which case the conversation per se is almost irrelevant except as a bidirectional bearer of information. That’s what I was kind of getting at with my examples of “show me something useful”.