Any reason we couldn't build a machine that "understands" everything?

Stephen Hawking, amongst others, makes the case that the laws of the universe may turn out to be so complex or, well, strange, that we would have no chance of understanding them.

This seems intuitively possible to me, on the same basis that my dog doesn’t understand relativity.

However, is there any theoretical reason we could not build a machine that “understands” the laws of the universe?

Say we built one, how would we know?


There already is such a machine… it’s called “The Universe”.

For a serious answer: because even the limited, human level of understanding requires a number of connections and a processing speed several orders of magnitude beyond what we can currently build.

“Understanding” is just data compression. It’s a way of describing a system – any system – that is simpler than the system itself but still predicts how the system will behave.

So it’s impossible to “understand everything” because no matter how accurately any model predicts the behavior of the thing it’s modeling, it still leaves something out.
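To make the compression-as-model picture concrete, here’s a minimal sketch (the data and numbers are my own invention): two fitted parameters “describe” a hundred noisy points well enough to predict them, but the residual noise is precisely the “something left out”.

```python
# Two numbers standing in for a hundred: a least-squares line as a toy
# "understanding" of noisy data. Uses only the standard library.
data = [(x, 2 * x + 1 + (0.1 if x % 2 else -0.1)) for x in range(100)]

# Fit y = a*x + b by the closed-form least-squares solution.
n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * y for x, y in data)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# Two parameters now predict 100 points -- a huge compression -- but the
# prediction is never exact: the alternating noise is left out of the model.
max_err = max(abs(y - (a * x + b)) for x, y in data)
```

The model recovers roughly a ≈ 2 and b ≈ 1, and no amount of refitting a straight line will drive `max_err` to zero: that gap is what the Hamster King means by the model leaving something out.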

Connections, perhaps, but the “processing speed,” as it were, of the human brain is actually pretty slow. Your neurons have to go to the trouble of manufacturing and releasing chemical neurotransmitters to send pulses flying across dendrites to get things done. Comparatively, the electrons whizzing around the tiny, tiny spaces of a microprocessor are millions of times faster.

Also, there’s ‘understanding’ and there’s understanding.

For example, we know all the rules of chess, yet I don’t think we understand the game in the sense of being able to predict, say, whether there’s always a forced win for white (or maybe even black ;)).

Likewise, the rules of the game of Life are exceptionally easy to list. Yet, I know we don’t understand it.
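For anyone who hasn’t seen them, the complete rules really do fit in a few lines. A minimal sketch (the helper name `step` and the glider coordinates are my own choices): a live cell survives with two or three live neighbours, a dead cell comes alive with exactly three, and that’s it; yet gliders, and eventually whole Turing machines, emerge.

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life; `live` is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
# Four generations later the glider has copied itself one cell down and to
# the right -- behaviour the two rule clauses above never mention.
```

That a self-propelling pattern falls out of two clauses is exactly the point: listing the rules and understanding the system are very different things.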

So with something like nuclear physics: just because we know, or may one day know, the ‘rules’, i.e. the laws of quantum physics, perhaps even including quantum gravity, does not mean we’ll understand it.

The chicken and egg problem. First we’d have to build a machine that could tell us how to build such a machine.

Then we have to wait for Magrathea to come out of dormancy. Given the current economic situation, that might not be for quite a while.

What does it mean to understand something?

No-one has yet demonstrated, to any general satisfaction, that a machine can understand anything. Once someone creates a machine that can pass a moderately rigorous Turing test, then maybe this will be a question worth asking.

No it isn’t. If this were the case, my digital camera would be understanding a visual image every time it compressed the image data into a jpeg file. A data compression algorithm run on the text of Moby Dick does not in any sense or degree understand Moby Dick.
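The point is easy to demonstrate. In the sketch below (the sample text is my own stand-in, not the actual novel), a general-purpose compressor shrinks repetitive text many times over while knowing nothing whatsoever about whales, God, or anything else.

```python
import zlib

# A stand-in for a long, repetitive text (not the real Moby Dick).
text = ("Call me Ishmael. " * 200).encode()

packed = zlib.compress(text)

# The compressor exploits statistical regularity in the byte stream; it
# achieves a large compression ratio with no grasp of what the symbols mean.
ratio = len(text) / len(packed)

# And it is lossless: the original is recovered bit for bit.
assert zlib.decompress(packed) == text
```

The ratio here is well over ten to one, which, on the compression-equals-understanding view, would make `zlib` a better reader of the passage than most of us.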

Indeed, understanding is not a “way of describing” at all. Sometimes we can demonstrate that we have understanding of something by describing it in certain ways, but the understanding is not the describing.

A human being can pass a Turing test, and the human brain is a machine (note that “machine” is not the same thing as “computer”).

Yes, I know by machine you probably meant a synthetic machine but the point is, we know, a priori, that it’s possible to make a machine that understands, since if all else fails we can simply (!) copy the human brain.

(And for the purpose of the OP, it doesn’t matter whether it will take a million years for this to be a practical possibility.)

None that we know of, although, as always, it depends somewhat on definitions.

For instance, The Hamster King is right that it’s impossible to understand everything with 100% accuracy as a model is always a simplification.
But many would not consider 100% accuracy necessary for understanding. The ability to make good predictions, together with an appreciation of why the predictions cannot be 100% accurate, would be considered understanding, at least by us in the present day.

Cite? (With actual evidence rather than mere confident assertion.)

Look, I do not really disagree with the view that the brain is, in a certain sense, a machine, but that is a working hypothesis, not an established fact. To use it as the premise of an argument to the effect that it must be possible to make a machine that can understand is to beg all the important questions. Indeed, one of the main motivations for AI research, of trying to build machines that can understand, has been to establish the possibility that the human mind might be the product of a machine. It has not yet succeeded in that aim (although it has perhaps made it look a bit more plausible than it did before).

Furthermore, I would also question the implicit premise that brains can understand anything. People understand stuff, and, indeed, people are the only entities we know of that are indisputably capable of understanding anything. (Maybe some animals can too, but there is room for doubt.) People’s understanding certainly involves their brains in an essential and extensive way, but it does not follow that the brains themselves, considered in isolation, understand anything. Indeed, many AI researchers have now reached the conclusion that they do not, and that if we are to achieve actual artificial intelligence (with understanding) it will be done by building robots, capable of rich, reciprocal interaction with their environment, rather than merely by programming computers.

There is huge fun to be had here. Well-trodden ground.

The issue of machine consciousness and understanding will probably never be fully resolved. Whilst many (including me) subscribe to the brain-as-machine view, many do not (Roger Penrose, for a rather famous example), and there is really not a great deal we can do to refute or prove either viewpoint. The Turing test is all well and good, but doesn’t help us all that much. It has been observed that a lot of real people would fail the Turing test. :slight_smile: It doesn’t prove that a machine thinks, just that we can’t tell that it doesn’t.

Another touchstone for understanding. My cat knows that he can come and harass me for food, and he will get fed. But does he know that he knows? Our machine should probably pass this criterion too. Something we currently have no idea how to test.

The OP asks something that can include some deeper ideas. For instance, we humans seem to be very much bound into our 3D world. (Chronos perhaps excepted a little.) So, assuming we could actually build a conscious, thinking, understanding machine, why should we not design one that could think intrinsically in arbitrary dimensions? A being for which the 10 (or whatever it is this week) dimensions of possible reality are no stranger than a drawing on a sheet of paper is for us. Design a being for which the logic of space and time is intrinsically probabilistic, and for which violations of common sense are just ordinary fare. From our current understanding of the possible nature of the universe, this being might be said to understand the nature of “everything”.

Of course that is the usual limited view that physics is all that matters. “Understanding the mind of God” being limited to the laws of physics. Something that is perhaps shooting just a little low.

Usually when we talk about “understanding” a work of fiction we’re talking about understanding the experience it produces in a reader. Moby Dick, considered apart from a reader, is merely a string of symbols, so the compression algorithm “understands” it as well as it can be understood in isolation. It’s only when we expand the system to include the book AND the reader (who brings with him a vast body of cultural knowledge that the book plays off) that the story of man pitting himself against God emerges.

When you “understand” a book you possess a concise model of the experience it will generate for a typical reader from a particular social context. Your compression algorithm is unable to “understand” Moby Dick the way we normally understand books, because it doesn’t have access to the entire system. It doesn’t know anything about whales, or God, or human beings.

Yes. We don’t know how. Is that a good enough reason?

Because if we created such a machine, this would happen:

Or perhaps this:

Well, one definition of machine is simply

Clearly the human brain is a machine under this definition.
Although in fairness, a more likely definition intended in this thread is “A system or device, such as a computer, that performs or assists in the performance of a human task” (same source), which again would include the human brain.

Again, perhaps you are confusing machine with computer?

I disagree. This is not really a goal of AI. For one thing, making a conscious program, say, would not demonstrate that the human brain functions the same way.

Erm, OK, brains don’t understand, people do.
Again though this would come down to definitions. Most definitions of understanding trivially give the human brain this property.

True enough, the brain probably does need rich interaction with the world to understand anything, but that doesn’t take the property of understanding away from the brain.
If your brain were disconnected from all sensory input, would you immediately lose all understanding of the world?

What justification is there for calling that understanding at all?

I entirely agree. What I do not see is why you think compression algorithms have anything at all to do with understanding (in anything like the normal sense of the word).

Assuming a computer can be called a machine, can anybody cite some process used by the brain, or the rest of the human body, that cannot be replicated or simulated with a computer or other type of machine? AI researchers have made numerous assertions and implications that human brain functions like thought and understanding are different from things done by computers, without ever demonstrating a single process or mechanism that would make them so. Unless the brain, or some other part of a human being, has a supernatural ability not limited by the ‘laws of physics’, brains are machines, which could in theory be replicated by other machines.