Could Machines (Ever) Think?

Sub-topic: Strong vs. Weak AI

There are two schools of thought on the subject of Artificial Intelligence.

One says: build a powerful enough computer and it will think and act indistinguishably from a person.

The other says: there ain’t no computer powerful enough, and there never will be.

I used to be a strong AI kind of guy. Computers are getting faster and faster, AI seems to be getting better and better, etc. How else could our brain work except by some kind of massively parallel procedural method, i.e. one that could be shown to be equivalent to a Turing machine? Douglas Hofstadter (Gödel, Escher, Bach) is a strong AI guy and a very interesting writer.

Then I read The Emperor’s New Mind by Roger Penrose. Seems he’s a weak AI guy. His contention was (basically…haven’t finished the book yet) that we don’t know how the mind works but it’s something somehow more powerful than how a computer works; i.e. you can’t simulate it with a Turing machine.

Any thoughts on this issue? It would be nice to have a debate about something not religiously charged for a change.

Well, according to the Bible …

Just kidding. Here are my non-religiously charged thoughts.

I don’t think it will be possible to develop a very human-like computer for a long time. That would require a lot of understanding of the most minute brain functions, and all we have to understand it with is our brains. We can understand a pocket calculator, but a pocket calculator can’t understand itself. That is an admittedly flippant way of trying to get the point across.

I do think it is possible to simulate organic life in a computing environment. A computer could be made to have desires, hunches, intuition, etc., if it incorporated neural networking, fuzzy logic, even simulated neurotransmitters and moods. These might not resemble human behaviour very closely, but they might be interesting in their own right. Combine that with the processing power of a modern desktop computer, and you would have a very intelligent electronic life form.
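To make that a little more concrete, here is a toy sketch of how a simulated "neurotransmitter level" might nudge a program's behaviour. This is pure illustration, not anything from an actual AI package; names like `mood` and `decide_to_explore` are made up for the example.

```python
import random

class SimOrganism:
    """Toy illustration: a 'mood' level biases decisions,
    a bit like a crude simulated neurotransmitter."""

    def __init__(self):
        self.mood = 0.5  # 0.0 = gloomy, 1.0 = cheerful

    def experience(self, event_pleasantness):
        # Mood drifts toward recent experience, so behaviour has a history.
        self.mood = 0.8 * self.mood + 0.2 * event_pleasantness

    def decide_to_explore(self):
        # A cheerful organism takes more chances; a gloomy one plays it safe.
        return random.random() < self.mood

critter = SimOrganism()
critter.experience(0.9)          # something nice happened
print(critter.decide_to_explore())
```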

The inevitable pop-science approach to this will be predicting all the horrible ways our new super-intelligent electro-organism could go insane and kill us. I don’t really put much stock in that. If a computer could be given a life purpose, its purpose might very well be human happiness.

Also, I tend to think an organic-simulating computer might have fewer catastrophic failures, since organisms tend to evolve redundant systems (to prevent cardiac failure, for example). Or the computer organism might figure out what makes it crash and then develop better habits which would prevent it from crashing. Just a guess.


Nothing I write about any person or group should be applied to a larger group.

  • Boris Badenov

This is a question that no one will be able to answer, ever. If they never build one, then one can always say, “But maybe, eventually…” If they do build one, then the question will be, “But is it really thinking?”

I happen to think that we will eventually make thinking machines. I also believe that they will be nothing like us, at least in terms of raw thought processes. After all, our brains are based on proteins containing C, H, O, N, put together in complicated ways to form cells, forming neurons, forming the complex pathways of the brain. These pathways are developed in response to external stimuli.

In contrast, a computer’s circuits are made of silicon doped with various other elements. Those pathways are determined in manufacturing, and further flexibility is given through powerful programming. This programming is developed in certain ways through external stimulation.
Looks a little like convergent evolution, don’t it? Of course, that’s because we’re modelling them after us.

Anyway, will they ever think, be sentient, have a mind? That’s hard to say. We don’t fully understand thought, sentience, or the mind yet. Hell, we don’t even understand the brain which gives rise to these.

It might be possible, and the Turing test might be valid, but at the end of the day, I just don’t know for sure. Uncertainty can be frustrating, but at least it’s a certainty.

I personally don’t think machines will ever be able to think. IIRC, Penrose’s main argument centered around Gödel’s incompleteness theorem, which says that any sufficiently powerful formal system of mathematics will have unprovable truths. Computers are formal systems, and Penrose argues that a human mind is always capable of stepping “outside” the formal system and making logical deductions not possible in the system. Of course, that step merely creates a new formal system (as Hofstadter pointed out in GEB), so I guess those arguments really end in kind of a gridlock.

I feel that Penrose is more persuasive, however, because human minds have the ability to understand, not just manipulate symbols the way computers do. It’s not at all clear how that understanding happens, though, and it certainly hasn’t been demonstrated that computers will NEVER be able to do it; still, it seems to be a pretty powerful argument to me.

Speaking of understanding, I’ve always held this “axiom” of mine that nothing is capable of understanding itself, whether it’s a rock, tree, frog, or a person. I doubt we will ever be able to use the brain to understand the brain–I think that sort of possibility could be a logical contradiction to begin with (just a feeling of mine).

I’ve been thinking about this for a while.

I don’t believe artificial intelligence will ever be achieved. (i.e. even the best machine will eventually be tripped up by the Turing test.)
However, I do believe a computer/robot combination will achieve true intelligence.

I believe the key is not thought, but awareness. Sentience is a prerequisite for intelligence.
I further believe that sentience comes about automatically through interaction w/ environment (positive/negative reinforcement) combined w/ a critical level of processing power and memory. From there, the ability to understand cause and effect naturally evolves into intelligence.

So the emphasis should not be on the hard/firm/software itself, but on interfacing that equipment w/ the real world (and creating meaningful punishments and rewards to the system).

Feel free to point out the gaping holes in my ‘logic’.

BTW, Does anyone have any info on the robot “COG”? I half-remember that it supports my thesis, but cannot even remember who built it. MIT?

IIRC, Penrose proposes quantum effects to explain this. But AFAIK, pretty much everybody else thinks he’s out to lunch on that one. I don’t think most neurologists believe anything going on at that level has any real influence on thought. 'Course, they might all be wrong. But I’d place my bets with them instead of with Penrose on that one.

My own opinion is that there is nothing fundamental that would prevent a suitably complex computer from thinking like a human being. But I think the problem is so complex that for quite a while, it’ll be a practical, if not a theoretical, impossibility.

It kinda raises some interesting ethical questions though, does it not? Like, if you had a computer that really was conscious, could you unplug it without committing murder?


peas on earth

Maybe they already do.

Artificial intelligence. Hell, we can’t even make a computer give us a genuinely random number. Determining whether an arbitrary program’s loop will ever finish is impossible (the halting problem). Right now I believe artificial intelligence is based on fuzzy logic (instead of if/else statements they have a larger range of variables). I think this is cheating. Using fuzzy logic will make it “look” like the computer is thinking, and we can even make it look like it’s learning by having it mimic us, but if it will ever happen, we aren’t even close now.
If we want a computer that is intelligent, I’d like to see a programmer design a computer that will forget! What could be more human!
Maybe one that won’t start in the morning until it’s had 8 hours of off time.
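As an aside on the fuzzy-logic remark above, here is roughly the difference between a crisp if/else test and a fuzzy one, as a throwaway Python sketch. The 30-degree cutoff and the 20-to-35 ramp are made-up numbers, just to show the idea of a graded answer instead of an all-or-nothing one.

```python
def crisp_is_hot(temp_c):
    # Classic if/else: an all-or-nothing answer.
    return temp_c > 30

def fuzzy_is_hot(temp_c):
    # Fuzzy membership: a degree of "hotness" between 0.0 and 1.0.
    if temp_c <= 20:
        return 0.0
    if temp_c >= 35:
        return 1.0
    return (temp_c - 20) / 15.0

print(crisp_is_hot(29), fuzzy_is_hot(29))  # False vs. 0.6
```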

I could have this wrong, but it seems that when people talk about the ability of a computer to make human-like choices, they assume that humans themselves always make the logical choice.

Many humans make bad decisions every day. Even more just make a decision and let chance take its course. Sometimes it works out, sometimes it doesn’t.

Could a computer knowingly make a bad or inefficient decision?


All this science, I don’t understand. It’s just my job 5 days a week-- Rocketman

Occam said:

And this would be different from human intelligence how? As often as I’ve seen ‘magic’ and psychology tricks ‘predict’ a number a person chose at ‘random’, I would doubt a human’s ability to generate a sequence of random numbers that was any more random than a computer’s. That being said, I’m not really sure what relevance that has to artificial intelligence.

As most of you know I am a game designer by trade, but I also do the primary AI programming for my company (I have had a deep interest in AI for a long time).

Somebody asked if you could program a computer to make an incorrect, incomplete or inefficient decision. The answer is yes. It is in fact very simple.

Imagine the following:

The computer knows about 100 facts and has to make a decision to go left, right or straight based on those 100 facts.

Scenario #1) Assume a parallel processing environment. I can set the computer to go and process all 100 facts in parallel and tell it: when you have 66 of the facts back, make your decision based on whichever option is most supported. In the case of a tie, the next fact to return is the tiebreaker.

Scenario #2) Assume a non-parallel processing environment. Begin processing the facts. After processing each fact, pick a random number; if the number is below the number of facts processed so far, make your choice based on the facts that have returned.

Now, is this intelligence? Not really, at least in my view, but it should show how you can really manipulate how a computer makes decisions.
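For what it’s worth, here is a rough Python sketch of Scenario #2 as I read it. The fact list and the three options are invented for illustration; the point is just that the decision can be committed before all the facts are in.

```python
import random

def decide(facts, options=("left", "right", "straight")):
    """Rough sketch of Scenario #2: process facts one at a time and
    sometimes commit to a decision before all of them are in."""
    votes = dict.fromkeys(options, 0)
    for processed, fact in enumerate(facts, start=1):
        votes[fact] += 1  # each fact supports one of the options
        # Pick a random number; if it falls below the count of facts
        # processed so far, stop early and decide on what we have.
        if random.randint(1, len(facts)) < processed:
            break
    return max(votes, key=votes.get)

# 100 "facts", each of which happens to support one direction.
facts = [random.choice(("left", "right", "straight")) for _ in range(100)]
print(decide(facts))
```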

Will computers ever think? That really depends on what your definition of “think” is. There are numerous problems with getting a computer to think, and they all revolve around understanding what human thinking is. I firmly believe that if we can understand how we think, we can certainly make a computer do the same.


What more could you expect from somebody who lets people kick him to the head?

It seems to me that the biggest problem that will be faced by people trying to make computers really think will be identification.

How do we identify things?

What is a chair? I could show a human a multitude of chairs and they would readily recognize most of them, even if they were very bizarre looking. However, how do you define that to a computer? More importantly, it is vital for the computer to be able to form its own definitions, which would be quite a trick too.

For example, I need to be able to put a pen in front of the computer and say “pen”. It would need to analyze the object in front of it, pick up key features, and store “pen” as this list of features. When confronted with an object it doesn’t recognize, which is then identified as a pen, it would look for features of this pen that were different and add them as potential extra identifying features.

However, eventually the identifying features become VERY subtle. What is the difference between a pencil and a pen that looks like a pencil BUT has a metal tip instead of a graphite tip? Very very subtle, especially if the metal tip looks like graphite. So, then it is identified by what using it is like. Ahh, when it is used it puts ink down and not graphite markings. Etc etc etc.
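A toy sketch of that feature-matching idea, in Python. The object names and features are just made up to illustrate the pen/pencil confusion and the correction step; a real system would obviously need far richer features.

```python
def identify(known_objects, observed_features):
    """Pick the known object whose stored feature set
    overlaps most with what was observed."""
    best_name, best_score = None, 0
    for name, features in known_objects.items():
        score = len(features & observed_features)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

def learn_correction(known_objects, name, observed_features):
    """If a teacher says 'this is actually a pen', fold the new
    features in as potential extra identifiers."""
    known_objects.setdefault(name, set()).update(observed_features)

known = {
    "pen":    {"cylindrical", "hand-sized", "ink tip"},
    "pencil": {"cylindrical", "hand-sized", "graphite tip", "wooden"},
}
seen = {"cylindrical", "hand-sized", "wooden", "metal tip"}
print(identify(known, seen))          # guesses "pencil" on looks alone
learn_correction(known, "pen", seen)  # teacher corrects it; "pen" now covers these traits
```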

It is pretty complex stuff, and the world is a big place.

Plus, I kind of skipped past it, but there is the problem of having the computer not recognize something and know enough to ask about it. The computer may pick up a pen that looks like a pencil and, based on what it knows, say “This is a pencil” and never realize it should ask. This is fine for a pen-to-pencil confusion, but for more important issues it is critical that the computer know to ask.

Very complex stuff.

Quoting the AI programmer’s post above:

As most of you know I am a game designer by trade, but I also do the primary AI programming for my company (I have had a deep interest in AI for a long time).
Somebody asked if you could program a computer to make an incorrect, incomplete or inefficient decision. The answer is yes. It is in fact very simple.


I guess what I was asking is: if a computer had two options, and it felt one was more correct, would it ever use some rationalization and pick the less-right choice? Humans do it all the time.

If you already answered that, I’m sorry; I’ve only read your answer a couple of times and I’m not that good at this yet.
But thanks.


All this science, I don’t understand. It’s just my job 5 days a week-- Rocketman

I hate to say this but … it depends.

You see, it depends on why you believe humans do that.

  1. If for example, you believe humans do that because they haven’t evaluated all the criteria (facts) then the example above is a perfect example of how to get a computer to do that (Scenario #1 in particular).

  2. If, for example, you believe humans pick the wrong choice at random, this would be easy too: simply have the computer make the decision and then randomly decide whether to take it or not.

  3. " " ", " " " " " " because they misweigh the various facts (example, I am thirsty, but the only place to drink is a VERY rough bar. How much weight do I put on the fact that I am thirsty vs. it is a VERY rough bar?). In the previous example I said all facts were of equal importance to the decision. This is rarely true, and certainly in most AI it isn’t true.

Now in typical AI the importance of a fact is pre-weighed. However, in a learning AI it will reweigh the importance of facts based on the outcome.

Example, tic-tac-toe. At first you assume every square has equal value. However, after you lose a game you devalue the squares you picked. When you win you increase the value of the squares you picked. If you write a program to do this you will see the centre square value get very high since it is critical to winning at tic-tac-toe.
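Here is a throwaway sketch of that reweighing idea for tic-tac-toe. The 0.1 nudge and the example games are invented, just to show the mechanism of devaluing losing squares and boosting winning ones.

```python
# Every square starts with equal value; after each game the squares we
# used are nudged up (win) or down (loss).
values = {square: 1.0 for square in range(9)}   # squares 0..8, centre is 4

def update(squares_played, won):
    delta = 0.1 if won else -0.1
    for sq in squares_played:
        values[sq] = max(0.0, values[sq] + delta)

# After many simulated games the centre square's value climbs,
# because it shows up in so many winning lines.
update([4, 0, 8], won=True)
update([1, 5, 6], won=False)
print(sorted(values.items(), key=lambda kv: -kv[1])[:3])
```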

Now, to create an AI that would mimic a human’s ability to misweigh the importance of a fact you would need … drum roll please … an AI to weigh the importance of the facts. If it is a fuzzy AI itself (like the one listed in Scenario #1 a few posts ago) you would have a situation like you describe (just like in 1 and 2 of this post).

Which brings me to an important point. Computers are modelling tools, which is why understanding how humans think is fundamentally vital to building the model for computer thought.

Metroshane’s question illustrates this beautifully. Why do humans make wrong decisions? Above are three (there are undoubtedly more) models of why. Any one of them could be described as human intelligence IF that is the same model for why humans make the wrong choice.

P.S. - Personally I subscribe to the 3rd choice. I think humans make poor decisions because of misweighing facts. Why do we misweigh facts? Don’t know.

Good. Thanks, but consider this.

The plane isn’t invented yet. A computer is weighing its options on whether it should try to fly or not. My understanding is that it would think staying on the ground is better.


All this science, I don’t understand. It’s just my job 5 days a week-- Rocketman

metroshane said:

Why must artificial intelligence necessarily mean better than human, or equal to the more intelligent and creative of humanity? This comment seems a little like Occam’s, which I commented on. Before the plane was invented, and humans were trying to decide whether or not to fly, most humans probably did decide that staying on the ground was better.

I think first we should try to get AI to approximate the ‘average’ human before we consider it true AI; then maybe we can try to develop it to the point where it might be able to generate a random number better, or come up with great ideas, like inventing the airplane, that change the way we do things.

Is it time to bring up transistor-based Neural Nets yet?

Tracer, you go ahead. I know next to nothing about neural networks, but what I know suggests that they are vital to AI.

What I was going to discuss was the amount of time people put in to learning things. We spend, literally, years learning to communicate and recognize objects. Only after maybe 3 years have passed can we accurately express desires. We spend our whole lives building toward more correct observations.

There was a computer given an arm and a design that was told to build a tower out of blocks. It started and placed each block in the correct position. But it decided to start at the top and work its way down.

A two year old probably knows better than to build a tower from the top down.

So I believe a group of programmers started on an entirely different tack. They took a computer and fed it knowledge. All the knowledge they could. Then they gave it fuzzy logic. The example they then gave was asking “Who is Melville?” The computer responded “I assume you mean Herman Melville, author of Moby Dick.” Wow, it’s not too tough to design a program to access data, but what would it do if you ask it, “Will I fall if I jump off of a building?”

I think they’re taking the right approach. Give a computer all the information you can (basic, basic information. Gravity and such) and then slowly build its knowledge not through giving information, but through guidance. Like schooling. Couldn’t a correctly designed Neural Net do that?

Not necessarily. Even a relatively simple AI could make the decision to invent the airplane, but it couldn’t actually invent it.

For example, give a computer all the necessary physics behind airplanes and other modes of transportation, and it would (with the correct programming) correctly find that air travel is fast yet expensive (in terms of fuel) relative to, let’s say, rail travel. Give it more facts about human demand for high-speed transportation and it would correctly deduce that airplanes would come in handy!

However, could it actually invent the airplane? Probably not. An AI could invent the airplane today because we would know the kinds of rules to put in place to have it solve for that invention, but without having invented the airplane ourselves first, we wouldn’t know what to tell the computer to go and solve.

In essence I have already described neural nets (transistor based simply being a hardware based approach instead of a software based approach).

A neural net has a series of nodes that have 1 or more inputs from other nodes and 1 or more outputs to other nodes. The connections between these nodes are weighted. Each node usually represents a question, and each link a particular answer to that question.

It makes more sense when you see a picture of it, but if you think about the weighted decision maker I talked about above you can see how it fits in with a neural net (I hope).
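In lieu of a picture, here is a tiny sketch of that kind of node. The weights and thresholds are made-up numbers; each node just sums its weighted inputs and “fires” only if the total crosses its threshold, which is also the firing behaviour described for transistors below.

```python
def fires(inputs, weights, threshold):
    """One 'node': weigh the incoming signals and fire only if the
    combined strength crosses the threshold, otherwise stay off."""
    signal = sum(i * w for i, w in zip(inputs, weights))
    return 1 if signal >= threshold else 0

# Two hidden nodes read the same inputs; the link weights decide how much
# each input counts toward each node's decision, and the output node
# weighs the hidden nodes' answers in turn.
hidden_a = fires([1, 0], weights=[0.9, 0.4], threshold=0.5)
hidden_b = fires([1, 0], weights=[0.2, 0.8], threshold=0.5)
output   = fires([hidden_a, hidden_b], weights=[0.7, 0.7], threshold=0.6)
print(hidden_a, hidden_b, output)
```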

Transistors are an ideal hardware choice for neural nets because they can be made to fire like neurons (when a signal of a certain strength comes in, it fires; when the signal strength drops below that level, it turns back off). Of course, a biological neuron would be even better! If we can ever build a biological computer, certainly we could expect a huge rise in the level of AI we could build.