Artificial Intelligence question.

Sure, you just need to:

  1. Build a nigh-infinite computer with nigh-infinite speed (probably violating relativity’s limits on how fast you can send signals).
  2. Solve all of physics.
  3. Measure exactly where and what state every particle in the universe is in (this guy Heisenberg thinks you can’t).
  4. Say hi to Opal.
  5. Make sure absolutely nothing from the computer simulation affects the rest of the universe (including things like variations in how much electricity is consumed, or extra heat given off).
  6. Profit! Oh, no wait, scratch that part (see #5).

Sure, try pushing that in a sales meeting.

The Chinese Room objection is crap, because it implicitly disallows the Chinese Room the ability to remember anything. Of course a table of lookups for every possible interaction isn’t going to be intelligent, because as soon as the guy on the other side of the Turing test starts talking about the conversation itself, the Chinese Room will fail. In other words, if you ask “What did I say about that five minutes ago?” the Chinese Room won’t be able to answer unless it is allowed to store information and come up with new output.
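
To make the memory point concrete, here’s a toy sketch (my own illustration, not anything from Searle): a pure lookup table has no state, so any question about the conversation itself defeats it, while even a trivial memory handles that particular exchange.

```python
# A stateless "Chinese Room": every reply comes from a fixed lookup table.
STATELESS_REPLIES = {
    "hello": "Hi there.",
    "what is the weather?": "I couldn't say.",
}

def stateless_room(message):
    # No history anywhere: ask about the conversation and it must fail.
    return STATELESS_REPLIES.get(message.lower(), "I don't understand.")

# The same room, allowed to remember. Now it can answer at least one
# question about the conversation itself, which the pure table never can.
class StatefulRoom:
    def __init__(self):
        self.history = []

    def reply(self, message):
        if message.lower() == "what did i say five minutes ago?":
            answer = self.history[0] if self.history else "Nothing yet."
        else:
            answer = STATELESS_REPLIES.get(message.lower(), "I don't understand.")
        self.history.append(message)
        return answer

room = StatefulRoom()
print(room.reply("Hello"))                             # Hi there.
print(room.reply("What did I say five minutes ago?"))  # Hello
```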

Or, if we allow the Chinese Room to remember and learn, yet still insist that the whole process cannot “really” be conscious because none of the individual components know what they’re doing, then it’s just another way of stating that there is no such thing as consciousness.

Are the individual neurons in the human brain conscious? Are various sections of the human brain conscious? If I took out a cubic centimeter of brain and kept it alive in a saline solution, would that cubic centimeter be conscious? The individual parts of a human brain aren’t conscious, yet we assert that a human brain IS conscious. To assert that a machine that can store information and learn and converse in natural language couldn’t be conscious simply because it was made of paper and carts and messengers moving scrolls from one bin to another is ludicrous.

There isn’t any magic happening in the human brain; it is composed of ordinary matter obeying the ordinary laws of physics. Anyone who says otherwise is selling something.

The universe-simulating parts still interact with the non-universe-simulating parts. They exert gravitational pull, consume energy, output heat, etc. So you can’t just cut them out of the simulation.

When I was in school, they were working on integration. By “math” I meant math problems, not proofs. I know that is nowhere near solved.

Searle’s argument is false on logical grounds: it’s implementation-agnostic. If it were true, then we wouldn’t be conscious either, which is a pretty hard philosophical position to defend.

More believable arguments have been made (Roger Penrose, for example) that purely digital intelligence is impossible: basically, that the brain must be a quantum computer. Again, though, his *The Emperor’s New Mind* is a book written outside his field of expertise (physics), and it has some pretty basic computer science misunderstandings in it. But if we’re including quantum computers as near-future technology (it’s not clear that usable ones are), then these arguments don’t matter anyway.

There is an unproven assertion in Computer Science that certain types of computer systems are universal, that is: they are capable of computing anything that can be computed by any digital computer at all (given a correct program, which is always the rub). The number of instructions necessary for universality is very small, no more than three or four, depending on what you consider an instruction. All desktop computers today, for example, are easily universal, assuming you can add arbitrary amounts of storage to them. Church’s Thesis (the assertion that universal computers can compute everything computable) isn’t proven, but it’s almost universally believed.
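
In fact a single instruction can suffice. Here’s a hedged sketch (my own, with an arbitrary memory layout) of a SUBLEQ machine, a one-instruction computer that is universal in exactly this sense, given unbounded memory and the right program:

```python
def subleq(mem, pc=0):
    """One-instruction machine: each instruction is three cells (a, b, c).
    Executes mem[b] -= mem[a]; jumps to c if the result is <= 0,
    otherwise falls through to the next instruction. A negative jump
    target halts the machine."""
    while 0 <= pc and pc + 2 < len(mem):
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Demo: add mem[9] into mem[10] using nothing but SUBLEQ.
# Layout: three instructions (cells 0-8), then data A=7, B=5, Z=0 (scratch).
mem = [9, 11, 3,    # Z -= A        (Z becomes -7, jump to 3)
       11, 10, 6,   # B -= Z        (B becomes 5 + 7 = 12)
       11, 11, -1,  # Z -= Z, halt  (result 0 <= 0, jump to -1)
       7, 5, 0]     # A, B, Z
print(subleq(mem)[10])   # 12
```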

Artificial life systems (virtual environments where virtual things are “evolved” by competition with each other for virtual resources) almost immediately show astoundingly lifelike (though not intelligent) behavior. These sorts of things aren’t hard to build at all; they’re maybe student-project level.
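
Something like the following toy sketch (entirely hypothetical: bit-string “creatures” competing for a made-up fitness score, with mutation) is about the scale of effort involved:

```python
import random

GENOME_LEN = 32
POP_SIZE = 100
MUTATION_RATE = 0.01

def fitness(genome):
    # "Resources" go to creatures with the most 1-bits.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with small probability.
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

def evolve(generations=100):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP_SIZE // 2]   # competition: bottom half dies
        children = [mutate(random.choice(survivors))
                    for _ in range(POP_SIZE - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "/", GENOME_LEN)
```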

It’s hard to tell how much size (number of processors/transistors/memory) matters: biological brains are full of redundancy to cope with the messy, unreliable nature of cells; a digital brain using reliable components might be able to perform the same tasks with many, many fewer “pieces.”

So I’ll make a bold (and equally unproven) claim: I suspect that the biggest of today’s computers are capable of human-class intelligence, if we only knew the program. As to how we’ll get there: I disagree that we’ll model something we already know. I suspect we’ll just take the artificial life path, pushing simulated evolution to greater and greater depth and realism, and one day we’ll suddenly have intelligence. We probably won’t be able to explain the details of that intelligence any more than we can our own, but it might be easier to study.

The other problem is recognizing intelligence. Human “intelligent” behaviour is centered around reproduction, acquiring food, acquiring shelter, lengthening life, and a host of other biological imperatives that have basically squat to do with the needs of computers or robots. Given that their intelligence is aimed at an entirely different set of basic needs, expecting computers to have intelligence that’s the same as humans’ might be unrealistic (hence the arguments against the Turing Test). But to get something with equal ability to reason and create? I don’t see why that isn’t possible sooner or later – I just don’t know how to predict when.

There are various methods for visualizing the encodings in a neural network. Cluster plots, for example. Walk through our AX Tutorial for a demo.
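
For instance, here’s a rough sketch of one such method (not the AX Tutorial itself; the dataset and library choices are my own assumptions): train a small network, compute its hidden-layer encodings by hand from the learned weights, and project them to 2-D so that inputs the network encodes similarly fall into visible clusters.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

# Small feedforward net with one hidden layer of 32 tanh units.
mlp = MLPClassifier(hidden_layer_sizes=(32,), activation="tanh",
                    max_iter=500, random_state=0).fit(X, y)

# The hidden "encoding" of each input: tanh(X @ W + b), computed by hand
# from the trained weights, since sklearn doesn't expose activations.
hidden = np.tanh(X @ mlp.coefs_[0] + mlp.intercepts_[0])

# Project the 32-D encodings to 2-D and plot, colored by digit class.
points = PCA(n_components=2).fit_transform(hidden)
plt.scatter(points[:, 0], points[:, 1], c=y, cmap="tab10", s=8)
plt.title("Hidden-layer encodings, projected to 2-D")
plt.show()
```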

Let me help you rephrase that:

For a consciousness-raising experience, try *Computational Explorations in Cognitive Neuroscience*.

Cheers,

I think everyone ought to re-read Larry Niven’s *The Schumann Computer* before trying to create an intelligent computer…

But all animals have tons of neurons, and only we have intelligence. (Or maybe us, the chimps, the dolphins and the white mice.) Neurons by themselves are just gates; neural networks are connections of real neurons, or of virtual ones in a computer. But it is not at all clear they would ever lead to intelligence.

Why would you make a distinction between our mental processes and other animals?

Isn’t it just a continuum of capability with respect to specific attributes?

It’s fairly clear to me right now that they do in fact lead to intelligence, unless you believe that a) my brain is not a neural network and b) I am not intelligent. The ability to think at a high level is just an abstraction over thinking at lower levels. It takes complicated prefrontal/basal ganglia hardware to implement it, sure, but evolution has had plenty of time to sort out the details, and it’s all based on a common set of learning principles. We’ve been doing paired risk association since we were flatworms, a pretty intelligent thing to do if you want to survive and are sampling new environments with your fancy new locomotion tricks.

The behavioral development of intelligence is extremely well documented, and we’re working out the details of the biology at the moment, benefiting greatly from the synergy of neuroimaging and modelling techniques. If one simply considers the rate of progress in modelling nervous systems, there is no reason to believe that we won’t be able to do so within most of our lifetimes. If you’re not aware of this progress, read Dharmendra Modha’s “How to create a cortical simulator”. Modha works for IBM, and they are betting big on modelling the brain, throwing the world’s largest supercomputer at the problem, whatever that happens to be at the time. Also read O’Reilly’s recent “Biologically Based Computational Models of High-Level Cognition.”

I’ve taken courses in artificial intelligence, machine learning, data mining and natural language processing. Biologically inspired computing is never mentioned, so I’m not surprised that the posters in this thread haven’t elaborated on what state-of-the-art neural network models are capable of and what the future holds.

In the interests of pedantry, you’re describing a conflation of Turing’s thesis and Thesis M, not Church’s thesis. See here, for instance.

Artificial neural networks are nothing like real brains. The closer to real brains we make them, the worse they perform.

They tried that once. The answer it gave when it finished its first set of calculations was 42. It then went on to design the computer which would be capable of explaining why the answer was 42. Unfortunately, that computer was destroyed shortly before it was to give the answer.

Can you elaborate? (No, this isn’t Eliza)

Actually, I was aiming for what they call the Church-Turing thesis these days, after it was shown that the two are basically equivalent. Wikipedia’s page on the subject isn’t too bad. But you conveniently gave me a chance to add something I neglected in my over-long response before:

A key difference between biological and modern computational structures is parallelism: the brain is massively parallel by any computational standards (each neural path is effectively able to fire whenever it needs to), whereas digital computers are very much less so, even parallel ones. This is part of the issue with neural nets: they’re just not very efficient to implement (basically a set of large matrix multiplications) on serial processors. Neural nets are a simplification and regularization of biological “networks of neurons,” and the messy biological ones have certain advantages.
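
To make the matrix-multiplication point concrete, here’s a minimal sketch (made-up layer sizes, random weights) of a feedforward pass. Every multiply-accumulate below happens one at a time on a serial processor, where a biological network would be doing them all at once:

```python
import numpy as np

rng = np.random.default_rng(0)

# A two-layer net is just two matrix multiplications plus nonlinearities.
W1 = rng.normal(size=(784, 256))   # input -> hidden weights
W2 = rng.normal(size=(256, 10))    # hidden -> output weights

def forward(x):
    hidden = np.tanh(x @ W1)       # 784 * 256 multiply-adds, done serially
    return np.tanh(hidden @ W2)    # 256 * 10 more

x = rng.normal(size=784)
print(forward(x).shape)            # (10,)
```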

I’d like to hear this too. I hope it’s good, because I’ve got a full arsenal of evidence indicating he’s waving his hands.

Yeah. I made two claims: one, that ANNs are nothing like real brains, and two, that the closer they are made to real brains, the worse they perform.

The first point is obvious. ANNs are basically what statisticians would recognise as non-linear adaptive basis function models. Each neuron calculates a weighted linear combination of its inputs, applies a function (typically a sigmoid), then passes the result on to the next layer. But real brains have multiple types of neuron, and neurotransmitters and hormones do not behave in the same way as the simple value-passing between layers present in ANNs, etc.

Further, it’s not certain that the brain contains the backwards connections that are necessary for ANN training via backpropagation.
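
For reference, here’s a hedged sketch of the training procedure in question (a toy two-layer net learning XOR; the sizes, seed and learning rate are arbitrary). The forward pass is the weighted-sum-plus-sigmoid computation described above; the backward pass is the step that requires error signals to flow in the reverse direction:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(size=(2, 4))   # input -> hidden
W2 = rng.normal(size=(4, 1))   # hidden -> output

for _ in range(10000):
    # Forward: each layer is a weighted sum pushed through a sigmoid.
    h = sigmoid(X @ W1)
    y = sigmoid(h @ W2)

    # Backward: the output error is propagated back through the same
    # weights to assign blame to the hidden layer -- the step with no
    # obvious anatomical counterpart in real brains.
    dy = (y - t) * y * (1 - y)
    dh = (dy @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ dy
    W1 -= 0.5 * X.T @ dh

print(np.round(y.ravel(), 2))   # approaches [0, 1, 1, 0]
```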

As for the second claim: well, this was something my machine learning lecturer told the class back when I was an undergraduate. I’ll try searching through my course notes to see whether they mention it (actually, I might just e-mail him).

What is your link supposed to prove?

OK, I e-mailed him. Let’s see if I remember correctly.