Artificial Intelligence question.

I hear every so often that, as computers become faster and faster, sooner or later we will hit the power of a human brain. Somewhere around when we get to quantum computers, I believe. I have read that at that point we will be able to create artificial intelligence and eventually even run the entire universe in a simulation. My question is: regardless of how powerful computers are, wouldn't the software disallow this? Correct me if I'm wrong, but all computers know is if, then, else, right? Even if the software is complex, it's still if-then-else. Could we ever create a true artificial intelligence?

We’ve discussed this before: http://boards.straightdope.com/sdmb/showthread.php?t=321276&highlight=intelligent+computers

Intelligence is probably not directly related to “processing power.” In other words, if we harnessed the processing power of all the computers in the world, the resulting machine would still not be intelligent.

Computers are good at solving quantifiable problems. At some point, I suspect computers will be better than humans at everything quantifiable: playing chess (obviously), practicing medicine (not so obvious), driving a car (getting there…). Still, doing these things well doesn't make a computer intelligent.

It depends on what you mean by “intelligence.”

Take a look at the Turing Test article at Wikipedia for some good discussion of this concept.

Neurons don’t even understand else, but that doesn’t seem to hurt us.

This is logically impossible if you think about it. Any computer we build is part of the universe. So if it’s simulating the universe, it has to simulate itself as well. Accurately simulating itself would require the full processing power of the computer, leaving nothing left over to simulate the rest of the universe.

At least the latter part of that sentence will never come true, since it involves something logically impossible. If we built a computer to simulate the entire universe, the computer would have to simulate itself, since it is part of the universe it is simulating. However powerful the computer is, it can never fulfil the task of fully simulating itself plus everything happening outside it.
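To put the resource argument in back-of-the-envelope form (my own formalization, assuming simulation costs simply add and that a machine can't faithfully simulate itself for less than its own capacity): let $C$ be the machine's total capacity and $S(x)$ the cost of faithfully simulating $x$. Split the universe $U$ into the machine $M$ and everything else:

$$S(U) = S(M) + S(U \setminus M) \ge C + S(U \setminus M) > C,$$

so whatever capacity $C$ you build, the job needs strictly more than $C$.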

This reminds me of a short story by (I think) Borges about a map half as large as the country it maps. That map would have to include itself, and so on, ad infinitum. I can't recall the title of the story right now, however.

Ah, I was beaten by a few minutes with my post.

Yes, but only because Pochacco is running a simulation of the universe at slightly faster than real-time; he read your simulated reply and reposted it.

I’m actually running a simulation of Pochacco’s simulated universe and reposting his replies before he gets to them.

Ah, dual cores…

I agree with this. As far as anyone knows, the output of a human brain is the result of a huge number of very simple elements, just as the output of a computer program is the result of a large number of simple instructions.

When you say if-then-else, I take you to mean that computer programs only follow a number of fixed rules, and can’t think for themselves.

In fact, there are ways to solve a problem without a fixed set of rules: things like neural nets and evolutionary algorithms.

The idea is that you simulate a brain in which the "neurons" are initially connected to each other at random, then see whether it can perform a simple task. At first you can expect a 99.9% failure rate, but some candidates fail only 99.5% of the time, and the more successful ones are allowed to reproduce and mutate. The success rate gradually increases over many generations until it is nearly perfect. And the thing is, the brain circuit that evolves does not follow if-then-else logic. We can't examine it to read off the logic it uses to produce a right answer. It just works.
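Roughly, in code, the loop might look like this. The XOR task, the network size, and the mutation rate here are just illustrative choices of mine, not any particular research system:

```python
# Minimal neuroevolution sketch: evolve the weights of a tiny fixed-topology
# network until it solves XOR. Task, sizes, and rates are illustrative only.
import math
import random

CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # 2 inputs -> 2 hidden tanh units -> 1 output; w is a flat list of 9 weights
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

def error(w):
    # Total squared error over the four cases: the "failure rate"
    return sum((forward(w, x) - y) ** 2 for x, y in CASES)

random.seed(1)
pop = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(50)]
for gen in range(500):
    pop.sort(key=error)               # rank the random "brains"
    parents = pop[:10]                # the less-bad ones get to reproduce
    children = [[w + random.gauss(0, 0.3) for w in random.choice(parents)]
                for _ in range(40)]   # offspring = parent + random mutation
    pop = parents + children          # keep the parents too (elitism)

best = min(pop, key=error)
print([round(forward(best, x), 2) for x, _ in CASES])  # -> roughly [0, 1, 1, 0]
```

Nothing in the final weight list reads like a rule; the behavior is just whatever selection and mutation happened to settle on.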

At the moment, this method can produce brains that are about as clever as an ant. But with developing technology, a similar method might produce human-level intelligence one day. Not anytime soon though.

Why would the computer have to simulate itself? By simulating itself, it’s actually being itself, which it already is.

Because it's trying to simulate the universe at the same time. That simulation has to include the influence of the computer on the rest of the universe, and the only way to know what that influence is, is for the computer to simulate itself as part of the model.

It isn’t at all clear that a computer possessing intelligence is possible. See e.g. Searle’s Chinese Room argument for one possible objection.

You possibly know this, but connectionist networks and evolutionary algorithms are the poster boys of AI, yet the bane of every expert in machine learning.

Neural networks are nothing but fancy regression techniques that happen to be incredibly hard to train effectively, and evolutionary algorithms are basically hill climbers suited to a small class of problems.
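To make the "fancy regression" point concrete: a one-hidden-layer net trained by gradient descent is doing nonlinear least-squares curve fitting, nothing more mysterious. A minimal sketch (the sine-fitting task and all the sizes are my own illustrative choices; constant factors are folded into the learning rate):

```python
# A one-hidden-layer net fit by gradient descent is just nonlinear
# regression. Sizes, learning rate, and the toy dataset are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, (200, 1))
y = np.sin(x) + rng.normal(0, 0.1, x.shape)   # noisy data to regress on

W1, b1 = rng.normal(0, 1, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 1, (16, 1)), np.zeros(1)

lr = 0.01
for step in range(5000):
    h = np.tanh(x @ W1 + b1)                  # hidden "features"
    pred = h @ W2 + b2
    err = pred - y                            # squared-error residuals
    # Backprop: the chain rule applied to the loss mean(err**2)
    dW2 = h.T @ err / len(x); db2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    dW1 = x.T @ dh / len(x); db1 = dh.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(float(np.mean(err ** 2)))   # loss drops: it's curve fitting, no more
```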

The realization of AI is a long way off. Nobody worth talking about is working on making machines “think”. Academic AI is better defined as the science that aims to find clever solutions to hard (yet well defined) computational problems, rather than the science of creating a “thinking” machine.

You have a point, but not because of if-then-else. Increased processing power does not lead automatically to AI, despite the many sf stories that use this as a premise. I remember when the 386 came out, and USA Today said AI was just around the corner because of all that power.

When I took AI, 35 years ago, there was hope that solving various “intelligent” problems would somehow create an intelligent machine. Some of the problems discussed then were giving directions between two places, really good chess, language translation, and solving complicated mathematical problems. The ones where heuristics can be employed (chess, directions and math) have been pretty well solved. The ones where a true understanding is required (translation) haven’t.

As far as I know there isn’t even a really good theory of intelligence yet. I haven’t seen a lot of progress on the fundamental problems in all this time.

As mentioned, code can be self-modifying. In fact, the first paper I know of about evolving code came from IBM in 1959.

Don't worry about simulating the universe. I bet the first AI will come from simulating a carefully mapped brain, not from software written from scratch. If we can get that up and running, we should be able to instrument the simulation to figure out what the heck is going on. I believe I've read of simulations of worm brains, or something of that size, so we'll have the capacity to do it before too long. Whether we can map the neurons well enough is something I don't know.
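For a sense of what the bottom layer of such a simulation looks like, here's a leaky integrate-and-fire neuron, about the simplest standard neuron model. The parameters are textbook-flavored illustrations of mine, not values from any real mapped connectome:

```python
# Leaky integrate-and-fire neuron: the sort of simple unit a whole-brain
# simulation would stack up by the thousand. All parameters illustrative.
dt, T = 0.1, 100.0                 # timestep and total duration (ms)
tau, v_rest = 10.0, -70.0          # membrane time constant (ms), rest (mV)
v_thresh, v_reset = -55.0, -75.0   # spike threshold and post-spike reset (mV)
i_ext = 20.0                       # constant input drive (mV equivalent)

v, spikes = v_rest, []
for step in range(int(T / dt)):
    # dv/dt = (-(v - v_rest) + i_ext) / tau : leak toward rest, plus input
    v += dt * (-(v - v_rest) + i_ext) / tau
    if v >= v_thresh:              # threshold crossed: record a spike, reset
        spikes.append(step * dt)
        v = v_reset

print(f"{len(spikes)} spikes in {T:.0f} ms")
```

The hard part isn't this arithmetic, it's knowing the wiring and the parameters for billions of these at once, which is the mapping problem I mentioned.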

Exactly right. I'm reviewing a book on data mining, and it has a chapter on using neural networks for it. Data mining is a type of regression problem.

As for GAs, pretty much all the ones I've seen can be beaten by a conventional algorithm once the problem is understood. They are a way of finding a solution in some sort of search space, and not that different from other techniques such as simulated annealing. As far as I can tell they have even less to do with real AI than neural nets do.
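For comparison, simulated annealing fits in a dozen lines. The toy objective and cooling schedule below are purely illustrative:

```python
# Simulated annealing on a toy 1-D objective, for comparison with a GA:
# both are just stochastic searches over a solution space.
import math
import random

def f(x):                            # toy objective with many local minima
    return x ** 2 + 10 * math.sin(3 * x)

x = random.uniform(-10, 10)
temp = 5.0
while temp > 1e-3:
    cand = x + random.gauss(0, 0.5)  # propose a nearby solution
    delta = f(cand) - f(x)
    # Always accept improvements; accept some worsenings while "hot",
    # which is what lets the search escape local minima (a GA's mutation
    # plays a loosely similar role).
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = cand
    temp *= 0.999                    # cool down slowly

print(round(x, 3), round(f(x), 3))
```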

Yes. We need to walk, before we can run. Literally.

Fundamentally, the brain is an organ for predicting the future. It takes sensory inputs, uses them to construct a simple evolving model of the world, and then uses that model to determine which muscles to twitch.

"Intelligence" as we usually think of it is the result of a very high-order simulation of primate social interaction. We have to start off small with relatively simple problems, walking for example. Walking requires a fairly sophisticated internal physics model to accomplish: you take inputs like joint position, mass, balance, and a visual model of the surrounding terrain, and use them to predict which leg movements will result in a successful step forward.

It’s likely that if we ever do achieve true AI it will be the result of building a model of cognition up slowly, step by step, starting with the most basic brain functions and working our way upwards. We won’t get there with one big dramatic leap.

How about getting it to simulate the entire universe, minus the universe-simulating parts of the universe?

Woah - owd ya 'orses! :)

Automated reasoning has had some limited success (e.g. proving that all Robbins algebras are Boolean, a proof that eluded Tarski, for one), but it's certainly nowhere near a solved problem.

And with that final recursive loop, Captain Kirk again sends another computer into logical oblivion, assuring once again that the meaning of Life, the Universe, and Everything is safely undiscovered.

Small handheld calculators can solve, in seconds, complex arithmetic and symbolic algebra problems that would take me hours and many pieces of paper, yet the most sophisticated purpose-dedicated computer vision systems are no match for a two-year-old at basic conceptualization of their environment from visual cues. Intelligence (in the human cognition sense) has less to do with processing speed or capacity than with integration and synthesis of sparse data. Coping with sparse data and incomplete instruction (effectively, filling in the gaps where a computer doesn't have complete code or input) is something digital computers are utterly lousy at, and can manage only when given a set of higher-level explicit instructions.

Stranger