Any reason we couldn't build a machine that "understands" everything?

I guess the whole thing about whether a machine could enjoy chocolate is a tangent, as it hasn’t been demonstrated that subjective awareness is necessary for understanding (unless of course, by understanding everything we mean understand all subjective states too).

But anyway my last post was not well phrased. It implied that the mind may be epiphenomenal, something that I certainly wouldn’t argue (though others might).
What I simply meant was that simulating the mind may not be the same thing as creating a mind. e.g. I create a program that simulates your will to avoid self-harm. Such a program may behave just like you (for argument’s sake), without having any negative pain “experience”.

As for your conclusion, even if the mind were epiphenomenal, I don’t see how that necessarily means it does not exist.

Well, that is still not a sentence, but, as far as I can make a reasonable guess at what it is intended to mean, I cannot see how it differs from “If I am right, everyone who disagrees with me is wrong.” That is true, of course, but it is not much of an argument.

No, it is your failure to distinguish between what is asserted as fact and what is asserted as possibility that is faulty. Most of the claims that you accuse me of asserting as true, I was in fact very careful to say were mere possibilities (which, in some cases, I said I did not think very likely to be actually the case).

[quote=“ed malin, post:51, topic:540960”]

That last paragraph is full of opinion and lacking facts.[/QUOTE]
I think there are some of each, and I indicated which were which. Can you tell which are which?

[quote=“ed malin, post:51, topic:540960”]

The paradox arrives, like all others, from faulty premises.[/QUOTE]
That is what I said. (But I expect you are talking about a different paradox, that probably isn’t one. :()

[quote=“ed malin, post:51, topic:540960”]

Show some evidence somewhere of the necessity for a rich interaction with the environment as a necessary part of the process of understanding,[/QUOTE]

Well, I could give you a reading list (but the books might have hard words in them, like philosophers and scientists use, and you might need to be able to distinguish assertions of fact from assertions of possibility), or I could point out yet again that it is a fact that the only beings we know to be able to understand anything are human beings, and human beings do engage in such rich interaction. I know that is not a proof, and I don’t claim to have a proof, but, then, I have not asserted the need for rich interaction as a fact. I propose it as a hypothesis, which in my judgment, there are strong (not decisive) reasons to believe may be true and scientifically fruitful to pursue. I have given one of those reasons - the one that can be stated succinctly - if you want the rest you will need the reading list (and perhaps a course or two in philosophy and cognitive science to prepare you).

I do not believe, and have nowhere said, that a machine cannot have a rich interaction with the environment.

All that is true. Why would you imagine that I think otherwise? I never said or implied that any of those things are false, and no parts of my arguments have depended upon them.

Well, perhaps you are not interested in the scientific study of the mind. I am. (Incidentally, I think we are in agreement that “understanding” should not be defined such that it could only be accomplished by a human. I suspect that you think that I think it should be defined that way, after all, you have got most of my other views backwards.)

That’s wonderful. I would be very interested to see it, and your Nobel will be in the bag now. Wow, it would be in the bag if you could even write a program that achieves your level of understanding of the subject!

No, probably not my best.

Anyway, I never said it wasn’t feasible (my guess, but it can only be an educated guess, is that it is feasible). What I said was that nobody (including you) can know that it actually is feasible before it has actually been done (and, I will add, at a time when we have very little understanding of the problems that might be involved in doing it). Do you see the difference?

Again, I said nothing of the sort. I expect that you, and even people very much stupider than you, are quite capable of enjoying chocolate. What I did say was that it would take a brilliant insight into the nature of what it is to enjoy chocolate, in order for it to be possible (regardless of the work put in) to build (and/or program) a machine capable of enjoying chocolate in the near future. Again, do you see the difference? (It was a long complicated sentence, I know. Take your time.)

I suppose it is possible that a long series of fairly trivial insights might get us there over a longer period of time, but I was paying you the compliment of suggesting that you yourself might be capable of having the sort of brilliant insight that would get us there relatively quickly. There is no sign that you have had it yet, though.

I did not call you a liar, and I did not claim that there is proof of supernatural forces. I said that there is evidence for them, which is not the same thing. (I think I also made clear that I am not actually inclined to believe in the reality of supernatural forces. Despite the evidence in their favor, which is extensive but shaky, there are also very good reasons to think that they are not possible.)

I also said that if anyone were to claim that there is not any evidence for supernatural forces, i.e., that no-one has ever claimed to see a ghost, or witness a miracle, or has reported an experiment that they interpret as the demonstration of psychic powers, then that would be a lie. (Perhaps I should have qualified that by saying that it might, alternatively, be evidence of a deep ignorance of the topic.)

Also, I said that if anyone claimed to know for sure that it is possible to construct an artificial machine capable of enjoying chocolate, then that would be a lie (although, I dare say, more likely a product of hubris rather than mendacity).

Do you wish to assert either of those claims that I have asserted to be false in the previous two paragraphs? Before you commit yourself, I suggest you read them again, with particular attention to words like any, not, if, possible, and for sure, and noticing the absence of words like proof. I do not think that, once you have made a successful effort to understand those claims, you will want to maintain that they are true.

I’m not sure that the experience of pain is separable from the symbolic processing that gives rise to the behavior of avoiding pain. The way I see it, the pain is just an aspect of the process. If the simulation of me behaves in the same way I would, over all inputs, then I have as good a reason to suppose that it experiences such things as pain as I have to suppose that I myself experience those things. All experience, in other words, is just an aspect of the behavior of a physical system.

Subjective awareness does not really exist – we just think it does. :wink:

What I mean is that we seem to have constructed a framing of the situation in which the physical processes of the brain are entirely responsible for all observed behaviors. But what is the mind, then? If it’s an abstraction – a purely informational structure – the concept of a thing that thinks, that reasons symbolically, then I think we must admit that simulating a brain creates a mind, in the same way that starting Excel creates a spreadsheet.

But taking the position that simulating the physical behavior of the brain does not instantiate a mind necessitates that the mind is not just an abstraction. But if it’s not an abstraction, and it is not a concrete thing, what else is left but that it does not exist?

Not sure of your intention, but your program proved my point.

I agree with that. Did I assert anywhere that a machine could understand everything? Maybe you are responding to others, but I would agree such a machine would use a really cool laser type device.

Not sure what you are getting at. There is certainly an immense amount of data out there, and no human now, and possibly no human ever, could know all of it. But a computer could hold all that knowledge using current technology. I doubt it would understand that knowledge currently, though.

It’s a way of understanding “understanding”. And like all ways of understanding, it’s provisional, useful for certain contexts and useless for others.

It’s a mistake to think that there’s “one true theory” of ANYTHING. The best we can do is construct models that accurately predict future behavior. No matter how useful or interesting our current model is, there’s no guarantee that there isn’t another more useful or interesting model lying out there waiting to be discovered. Or even that our current model correctly reflects the underlying structure of reality. Who knows if there really are such things as particles and forces? They’re merely convenient ways of organizing our thinking to make useful predictions.

I would say, rather, that it’s a very bad idea to think of a definition as something that is “true”. Definitions are a way of imposing a framework on reality to make it more tractable. There is no “tableness” inherent in the particles that make up a table. We arbitrarily define a certain arrangement of matter as a table for our convenience. In that way the definition of what a table IS, is, at the same time, a theory about what a table DOES. It reduces the inherent complexity of billions of particles to a very simple prediction of properties. The point of equating definitions with theories is not to strengthen the “truthiness” of theory, but to weaken the “truthiness” of definition, to emphasize that when we define what something IS, we’re making a provisional claim of uncertain truth value.

Yes, that is why I included the words “and understand” in the sentence you quoted.

To that last question, I don’t know, or at least, I do not have the sort of precise, operationalizable explication that would be of much immediate help to you in building an understanding machine. (Though I could point you towards a large - and very sophisticated, abstruse, and fractious - philosophical literature that attempts to grapple with this issue.) But that is precisely the point. The problem of creating machine understanding is very hard largely because we do not clearly understand what the problem is, what understanding is, or what success would look like. I think it is clear enough, however, that it would not look like success if you invented some arbitrary definition of “understanding” of your own, unrelated to that one, and declared that you had made a machine that understands when you have made one that does what you have chosen to call understanding.

I rather think it is splitting hairs, but it does not matter because it is not my position that no machine is (or, rather, will ever be) capable of understanding.

Well, that would be nearly all the people who have been generally acknowledged to be the smartest around (and most of the dumb ones too) over the whole course of human history up until about 150 years ago (and many people widely acknowledged to be pretty darn smart since). You have just written off not only Aristotle, Plato, Augustine, Aquinas and Ockham, but also da Vinci, Copernicus, Kepler, Galileo, Leibniz, Descartes (a very important scientist and mathematician, for those who do not know), Newton, Lavoisier, Faraday, Lyell, Darwin (for most of his life, and maybe all) and countless other acknowledged geniuses in all fields of human endeavor, as not so smart. You must be pretty sure of yourself.

Those beliefs are not comparable. Do you know of any rational (never mind smart) adult who has held, and maintained in face of skepticism from others, that Superman really exists?

No, that is both facetious and not valid, and it is why I was careful to use the word “build” rather than “make” to try to forestall this diversion. First of all, we make other human beings without knowing in detail how they are constructed, what they consist of, and how they come to be what they are. Indeed, we can, and some do, make them without any clue about these things. By contrast, to make an understanding machine, you would have to know exactly what you were doing.

More importantly, however, as I have already argued in this thread, to take it as a premise that a human being is a machine is to beg the very question that is at issue. (And, incidentally, if components somehow just fell together by luck to create an understanding machine, without you knowing how they have done so, you could not know for certain that that object was a mere machine either.)

I think you probably could create a chocoholic that way, but I do not know it, and neither do you. I will agree that if we built something like this, duplicating an actual natural born chocoholic at the atomic level (and perhaps even the subatomic - after all, such things as where certain ions and electrons are in the brain are surely going to matter to this being’s mentality), and if we were able to make sure that nothing spiritual could have slipped in whilst our backs were turned, and if it did turn out to love chocolate, then that would be evidence that machines can enjoy chocolate (and if they can do that, I will not fight the inference they can understand stuff too). The trouble is that we have not done that, and we do not know that we really could.

Well, this is a different argument, as you have moved from machines to computer programs. The problem with computer programs (I assume that you mean programs actually running on computers), is not that they are limited in complexity (they are not, so far as I know), but that programs as such have very limited causal powers, whereas human beings, and indeed, brains, have much greater ones. Hook your running program up to a good set of receptors and effectors and I will concede that (if you got the program right, and hooked it up the right way) you may (note may) have the makings of a machine that is capable of understanding, or even enjoying chocolate. Indeed, although I do not think anyone can be sure, I am inclined to believe that it is probable that this could be done, and that it is a very worthwhile project to try to do so. It ain’t going to be easy, though.

No, I can’t accept that. There is no question that humans (well me, anyway, but I am confident of you too), can understand at least some things. If that were not the case, we could not be having this conversation, and the words and letters we have typed would not be words and letters, but meaningless patterns of light and dark, and you could not even aspire to build an understanding machine. It would be impossible to even raise the question of whether humans (or machines) have understanding. If it can be proved that no machine can ever understand anything, then it will follow that human beings are not machines. (Do not worry, however, because I strongly suspect that, even if it is true, it would be even harder to prove that no machine could ever understand anything than it would be to prove that some machines could have understanding. At least you could prove the latter by actually building such a machine.)

I am inclined to agree with that (so long as “all” really means all).

As I have already indicated, I do not have anything much better to offer at the moment than the dictionary definition you already quoted. I am not saying it will do; it needs to be refined and clarified a lot, but it is a good, non-question begging place to start from.

That can’t be right. You can’t fool beings without minds.

Anyway, a computer, as such, can’t simulate the brain’s every behavior. My brain can do things such as causing my arms to move. A computer can’t do anything like that unless it is hooked up to some mechanical arms, at which point it has become a robot.

Otherwise smart people can believe dumb things. This doesn’t diminish the greatness of their accomplishments; but neither does the greatness of their accomplishments indicate any reason to assign credibility to whatever irrational beliefs they had.

No. But a large number of people adhering to a belief does not constitute any kind of support of the belief’s validity. It’s as irrational to believe in God as it is to believe in Superman.

Come now; you’re just playing at the edges of all practical certainty. How could we ever know that the Supernal Arbiter of Sapients, Rational Beings, and Assorted Dinkum-Thinkums had not witnessed our creation of a suitable mechanical shell for consciousness and slipped in a dose of spirit at just the moment when the switch was turned? We can’t know that, of course. You might as well say that we cannot completely prove anything. Again, you’d be technically correct, but guilty of taking an intellectually perverse position.

If the universe is computable, there is absolutely no difference. :wink:

I will grant that we don’t know whether the universe is computable, and we may never be able to prove it either way. But so long as we’re talking about what we can attempt to accomplish with symbolic computation, why not assuage your objection by simulating the environment as well as the agent? Would you admit the possibility of a digital agent enjoying digital chocolate?

This is the equivocation I was talking about. You deny that we can be sure that machines are capable of understanding, and you even say that you are not sure what “understanding” would mean in that sense. But then here you claim as unquestioned fact that humans are capable of this undefined power.

You seem to be defining “understanding” as it suits you. Sometimes you seem content with a behavioral interpretation: otherwise why would you say that, without understanding, we could not be having this conversation? But the rest of the time, you seem to insist that we have to acknowledge the possibility that there is something para-physical to the complete package of understanding, something that goes beyond the simple process of responding to stimuli in a way that another intelligent being would regard as intelligent.

Well, permit me to disagree. This conversation is a physical (and probably deterministic) process of matter and energy. It would be what it is whether or not some observer calls the involved processes “understanding,” “semiosis,” “communication,” and so on. If there could ever be any justification for denying that the same transfer between two robots constitutes “understanding,” I deny it of our conversation right now, and would respectfully ask that you provide, at the very least, some suggestive evidence that there is any kind of understanding occurring, or even any subjective experience at all.

After this conversation, I am not so sure of that power of proof. What’s to stop you or anyone else from looking at it, acknowledging that it performs exactly as an understanding being would be expected to perform, and then insisting that it’s not really understanding anything, just going through the motions? :dubious:
(Also, should this thread be moved to GD yet? :smiley: )

It is a sentence. You refer to it independently of the previous sentence which
it references. I have not taken your comments out of context. If that is how
you understand the way to interpret the meaning of words, then all is lost in
communicating with you. Let me state more clearly:

“I understand what Searle claimed, and if it were true, then Searle proved
that he doesn’t understand anything”

  1. Searle contends that the Chinese Room does not understand Chinese.
  2. Searle claims there is no way to distinguish the difference between the Chinese Room and a human who understands Chinese through reasoning.
  3. Searle does not limit his theory to the understanding of the Chinese
    Language, instead, he contends this applies to the understanding of anything.
  4. I assert Searle has a brain.
  5. I assert the human brain is a machine, unless it has some supernatural
    nature.
  6. Searle is a Chinese Room, a machine which appears to understand something,
    but doesn’t according to Searle.
  7. If Searle is correct, he doesn’t understand anything.

You could refute this simply by demonstrating, or showing scientific evidence of, the supernatural. You haven’t. You claim there are books that do. Show me the scientific evidence from those books. You claimed that there is evidence of the supernatural because people have said they experienced it. If that is the sum of your evidence, and you believe it can be used to prove anything other than itself, then stand up clearly and say so. You can also say that Searle demonstrated there is something different between the Chinese Room and a human, but he didn’t. He kept saying it was not the same thing, without offering evidence of anything that made a difference in the result.

"Also proven that he doesn’t understand this subject if my contention that it is false, is true’.

  1. ‘Also’ makes the connection of this sentence to the first one.
  2. ‘proven that he doesn’t understand this subject’ is a conjecture qualified by the word ‘if’ that follows.
  3. ‘my contention that it is false’ refers to Searle’s claim referenced in the first sentence, and is the qualification of the phrase, ‘proven that he doesn’t understand this subject’.
  4. ‘is true’ qualifies the logical conjoined assertions in 2 and 3.
  5. If Searle’s contention is false, I assert he doesn’t understand this subject. One does not necessarily follow from the other, and I don’t know that Searle hasn’t changed his mind. But if he believes his assertion is true, and it is actually false, then I feel justified in saying he doesn’t understand this subject.

If you believe that a person who makes assertions about a subject,
understands the subject, even though his assertions are false, in the manner
described here, stand up and say so.

  1. You asserted as fact that there is evidence of the supernatural.
  2. The assertion of something as fact requires scientific evidence in order to use that assertion as part of a logical conclusion. The possibility of something in a logical conclusion cannot be based on the idea that ‘anything is possible’.
  3. You have not provided one shred of scientific evidence of the existence of the supernatural. If you have some, present it here. Don’t give me a reading list; I do not have to lift a finger or click a mouse to help you prove your point. Stand up and show it to us.
  4. You repeatedly cite philosophy to support your conclusions. Philosophy cannot be used to prove anything. When philosophers did prove things, they did it with Science, not by Philosophy. To argue otherwise, you would have to show that there are right and wrong philosophies. I can dream up a philosophy that says 1 does not equal 1. I could use that to disprove anything you, I, or anyone else says. You can prove that the Philosophy of ‘1 does not equal 1’ is demonstrably incorrect, but only by using Science, not another Philosophy.
  1. “I think that what both sides of the Chinese Room debate often miss is that
    to construct a Chinese Room as Searle envisages it (i.e. a computer with
    entirely symbolic inputs and outputs that can pass a rigorous Turing test, in
    Chinese or any other language) may simply be impossible.”
    That is an opinion. Do you claim it is a fact?

  2. “If I (and the many cognitive scientists who think likewise) am correct to
    think that understanding depends on the capacity to have rich interaction with
    the environment, then no such device (which is envisaged as having an
    extremely impoverished interaction with its environment) can understand
    anything, and will not even be able to fake an understanding very
    convincingly.” ‘understanding depends on the capacity to have rich interaction with the environment’ is opinion. You showed no evidence or even a rational argument to demonstrate this (unless you think a claim that some unnamed person agrees with you is evidence). ‘then no such device (which is envisaged as having an extremely impoverished interaction with its environment) can understand anything’ is an attempt to draw a logical conclusion from an opinion. This does not even rise to the level of opinion. It is a worthless succession of words. ‘and will not even be able to fake an understanding very convincingly’ is another attempt to make a fallacious conclusion. If the previous statements had been valid, this would have been, but they were not.

Here is a logical equivalent of your statement:
My mother thinks that I am the best person on earth, so nobody can be better
than me, and they will fail if they try.
Are you saying I can validate an assertion using that statement?

In the middle of all that, you threw in this:
‘which is envisaged as having an extremely impoverished interaction with its
environment’. I’ll concede that you may have envisaged that. Is that the fact that I missed?

You said, ‘The paradoxes (or irreconcilable intuitions) that the argument seems to lead us to, arise from the fact that we have accepted the incoherent premise that
such a system could be built.’
I assume you are referencing this - ‘to construct a Chinese Room as Searle envisages it (i.e. a computer with entirely symbolic inputs and outputs that can pass a rigorous Turing test, in Chinese or any other language) may simply
be impossible’. Another opinion, without a rational argument or evidence for
support.

All paradoxes are reconcilable and evidence of faulty reasoning. If you find a
paradox, you have faulty reasoning. I find no paradox in your statements, just the faulty reasoning.

Aside from your ‘vision’ which I will assume you had for the sake of this argument, what facts have you presented?

[quote=“ed malin, post:51, topic:540960”]

Show some evidence somewhere of the necessity for a rich interaction with the environment as a necessary part of the process of
understanding,[/QUOTE]

Give me one piece of evidence or a rational argument. Stop talking about
reading lists.

‘it is a fact that the only beings we know to be able to understand anything are human beings’ is a supposition until you present the coherent definition for ‘understanding’.

I define ‘understanding’ via the Turing Test standard, and would argue that it
has shown non-humans can understand some things. I do not claim that it has
demonstrated that a non-human can understand all that a human can. But I have not seen a rational argument or evidence that it could not. If you have a different definition of ‘understanding’, present it, and I will address that.

I have created ‘machines’ which perform complex operations not understood
by most humans, but by some, and could pass a Turing Test that would
demonstrate a small level of ‘understanding’. This is not proof. But I pose,
as a rational argument, that if a small level of ‘understanding’ can be
achieved by a machine, a higher level of understanding could also be achieved
in that manner. That seems to me to be the process used by humans to develop their own higher level ‘understandings’. I cannot prove that all higher level ‘understandings’ of humans are composed of combinations of lower level ‘understandings’, or that it is the actual process used by humans, but I have not as yet found that contradicted by logic and evidence. I would refute my own statement in a moment, if such contradiction were presented.
Stand up and show me the evidence to refute my argument. If you cannot, then my argument shows a rational reason why your argument is false. I have seen many opinions that contradict it, but they present no evidence. All of them make an assumption about some process or component of the human brain that is not understood, but do not provide logic and evidence to show that such a thing exists. Just because you, or I, or anybody else doesn’t understand how the human brain achieves ‘understanding’, that does not point to the existence of something more than the combination of known processes and components. There are many unknown and unknowable things in the universe, and possibly outside the universe, and I don’t know that any of those are not a part of ‘understanding’, but if they are, they would be the first case where such a thing happened.

You did say it. You qualified it with an ‘if’. But you used it as the basis for one of your false conclusions. Why did you do that if you didn’t believe it? If you have a good reason, say so. You did not pose it as part of any counter-argument, or for any other reason that you stated, except as an opinion which you attempted to use as the basis for an argument that a Chinese Room cannot be built. Are you now saying that a Chinese Room can be built?

I imagine you think otherwise because you used that as a logical basis to validate your otherwise baseless opinion that a Chinese Room could not be built. Your argument depended on exactly that, and you said you had a ‘vision’ of that. I am glad you do not believe your ‘visions’ are true.

I am interested in the scientific study of the mind. How on earth is that relevant to the claim that non-human ‘understanding’ does not, could not, or will not exist? I haven’t got your views backwards. Your views are logically ‘backwards’, as I have demonstrated repeatedly, and you have yet to stand up and show any evidence to refute that. I haven’t made claims about anything you conclude, only about what you said, and how those things you said can be demonstrated to be false. I have done so because you have not even bothered to get one piece of evidence or use a rational argument to disprove what I have said. I should just make up things and assert them, since you don’t even bother to refute things.

Feasibility is an educated guess. After something is done, it is proven. You clearly have very little understanding of the problem. I don’t. The problem is simple. We have to examine the process of human ‘understanding’ in much greater detail, instead of speculating about supernatural powers. The complexity of a problem affects the feasibility of its solution only if it exceeds the capacity to carry out that solution. This is why those who claimed that landing on the moon was not feasible turned out to be wrong. You don’t seem to understand the capacity of humans and their ‘understanding’. If you have evidence or a rational argument to show that the solution to the problem of understanding ‘understanding’ exceeds the ability of humans, stand up and show it.

More nonsense. That implies that there is a unique process for each thing the brain does. I would only need to find the process by which the brain enjoys anything. Without going into it all, I have the OPINION that understanding the joy of chocolate is one of the simpler parts of ‘understanding’. I could easily be wrong though. One trouble is that insight is an inefficient way to improve understanding of ‘understanding’, or anything else. Newton did not get a brilliant ‘insight’ into gravity by watching an apple fall. He used that observation as a small part of the process of combining simpler principles proven out previously. Einstein didn’t have a brilliant ‘insight’ into relativity; rather the opposite. He painstakingly combined simpler but known processes to arrive at the conclusion. Refuting you has taken nothing near the ability of those men.

Thank you. ‘Insight’ is a concept of magic - the idea that the brain doesn’t internally use logic to ‘understand’ things. If you would stop looking for ‘insight’ and train yourself to use the logical ability that your brain already has, you would have understood everything I am trying to explain to you already, and not wasted your time with whatever it is you have done.

You said ‘By contrast, there is currently no evidence whatsoever that anyone could ever build a machine that could enjoy chocolate, and to claim that you know that you could do it if only you had the time and resources is simply a lie’
You clearly implied I was lying, but your statements are false. I have provided plenty of rational arguments and evidence that a machine could enjoy chocolate. You haven’t disproved an iota of it. Disprove that a human brain is a machine based on the laws of physics, and then you could claim that a machine could not be built to replicate its capabilities, or even that it might not. If you can’t do that, then my argument is correct. I didn’t say I had the time and resources to do it, but you haven’t shown any evidence that there is not enough time or resources to do it. My claim is logically consistent, not disproven by anybody.

(continued in the next post)

[QUOTE]
Do you wish to assert either of those claims that I have asserted to be false in the previous two paragraphs? Before you commit yourself, I suggest you read them again, with particular attention to words like any, not, if, possible, and for sure, and noticing the absence of words like proof. I do not think that, once you have made a successful effort to understand those claims, you will want to maintain that they are true.
[/QUOTE]

I cannot prove your motives - I am stating an OPINION here - but I believe you carefully crafted the way you conflated your false claims (including your false prediction that I would misrepresent your own words) with my correct statements in order to call me a liar. I do not rely on opinions or speculation. I have explained to you that logic can determine what ‘understanding’ is and how it can be achieved by ‘non-humans’. I have readily asserted the only things necessary to refute my conclusions, and that I would admit my mistakes if shown those things. I provided all that was necessary to refute your false claims and conclusions, without telling you to read something else. You have failed at every opportunity to provide any evidence to the contrary, only opinion and false conclusions, and insults.

You have been presented with clear, irrefutable logic refuting your basic premise that ‘understanding’ is not understandable. You did claim that there is scientific evidence of supernatural powers, when you used your non-scientific evidence to claim a logical conclusion. Your implication of dishonesty on my part will, I believe, be obvious to those who read your words.

Ho hum, well things are going about as well as I would expect.

Just to muddy the waters a little.

We have some very limited understanding of how our brains represent some aspects of perceived reality from a neurophysiological point of view. There is at least some good reason to think that the circuits are partially hardwired to represent and manipulate a symbolic analogue. So extend that idea.

Imagine we have pretty much worked out how all our DNA encodes our being, and how it programs the initial state of the brain. Now imagine we have a pretty good understanding of the mechanisms by which the brain constructs its representations of physical reality. Further, imagine we had finally worked out a generally agreed grand unified theory of everything. Could we construct a set of patches to our DNA that coded for additional brain circuits? Circuits that are able to create internal representations of the needed additional dimensional realities, and able to operate within them in the same manner as we simple humans do in 3D space. Also, add whatever weird stuff we might need to process whatever additional mathematics are needed to manage the dynamics of those dimensions. Then we gestate and give birth to this new being. Could they understand the nature of everything? (Where everything is the fundamentals of the universe’s makeup.)

Well, what is the necessity for the additional hardwired mechanisms? We already do a pretty good job of understanding everything, well, everything we know about, which is self-defining. So it sounds like you’re working from the presumption that there are things we don’t know about, based on limitations in our perception. That could be, but then there still might be things we don’t know or understand due to the limitations of that new hardwired ability to perceive. So I’d say that any additional ability added to the human brain is no more likely to allow the ‘understanding of everything’ than no such additional ability. Of course there might just be one additional ability necessary. From a cost/benefit angle, I’d say we might as well try one level deep to see what happens. Do you have any ideas how to go about that?

Even AI specialists can’t agree on what “understanding” means. There was a time when it was thought that mathematics was the pinnacle of human achievement. For a computer to be able to prove a non-trivial theorem was surely a sign that the computer was in some sense intelligent, and had a deep understanding of mathematics. Then EQP automatically proved the Robbins conjecture, an open problem for decades that had stumped the likes of Tarski.

Thereafter, it was collectively decided that the Robbins conjecture was too symbolic, that all EQP was doing was symbol pushing (how does that differ from real mathematicians, in essence?), etc., so the achievement doesn’t count.

Computer chess was the same.

I just looked at the Robbins conjecture, and it looks to me like I could prove it incredibly easily using boolean algebra*, and that’s something I haven’t done for decades.

Is there a reason I couldn’t?

*Or, fuck it, truth tables - seriously there are only 4 cases I need to consider!

Edit2: am I looking at the right thing: Robbins Conjecture -- from Wolfram MathWorld ?

Edit3: I’m a fuckwit, I was trying to prove the Robbins Axiom. Now you see that I haven’t done this for decades :smiley:

That depends on the properties of the universe in question. An infinite universe containing nothing but Lego bricks would be fairly easy to understand, as long as understanding isn’t defined obtusely as needing to know all possible assembly permutations of an infinite assortment of Lego bricks.

Our universe may be infinite in size (although I think it’s currently understood to be finite, but boundless), but I don’t believe it’s infinitely complex. I’m not even sure that infinite complexity is logically possible.

I think it is possible to describe an infinitely large, but fairly simple system within a finite space.

Yes, you’re looking at the right thing. The Robbins conjecture was to prove that the following two sets of axioms:

Huntington:
x OR y = y OR x
(x OR y) OR z = x OR (y OR z)
!(!x OR y) OR !(!x OR !y) = x

Robbins:
x OR y = y OR x
(x OR y) OR z = x OR (y OR z)
!(!(x OR y) OR !(x OR !y)) = x

are equivalent. As the Huntington axioms were already known to be a basis for Boolean algebra, this would imply that a Robbins algebra is also a Boolean algebra. This problem had been open for decades until EQP found a proof using paramodulation.

You can read the full paper in the JAR here.
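For what it’s worth, the truth-table check mentioned upthread really is trivial. Here is a minimal Python sketch (my own illustration; it has nothing to do with EQP’s proof) confirming that both equations hold in the ordinary two-element Boolean algebra. Bear in mind this is only the easy direction - the hard part of the Robbins problem was showing that the Robbins axioms imply all of Boolean algebra for arbitrary algebras, which is what EQP’s paramodulation proof established.

from itertools import product

def huntington(x, y):
    # !(!x OR y) OR !(!x OR !y)
    return (not ((not x) or y)) or (not ((not x) or (not y)))

def robbins(x, y):
    # !(!(x OR y) OR !(x OR !y))
    return not ((not (x or y)) or (not (x or (not y))))

# Only four truth-table cases to check, as noted above.
for x, y in product([False, True], repeat=2):
    assert huntington(x, y) == x
    assert robbins(x, y) == x

print("Both equations hold in the two-element Boolean algebra.")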

The point being, the history of “AI” research has been a history of moving goal posts. Skeptics claim computers will never be able to do “X”, as it obviously requires some deep understanding of the subject domain and real intelligence. Eventually, computer scientists build a machine that can do “X”, only to find that everybody agrees that “X” doesn’t require any sort of machine intelligence at all.

This is why “strong AI” (i.e. building a machine with real human intelligence) is impossible, IMO. Not for any technical reason, but for social ones: nobody will agree that the machine is intelligent, even if one is produced.

I agree with your statements, except for one word in the conclusion, ‘nobody’. I’m sure there will be somebody (many somebodies, probably) who insist that a machine is not intelligent, even if they are not able to distinguish between the machine and a human. But many people will accept it. There may be much confusion with people who can’t distinguish the terms ‘intelligence’ and ‘human’, and the types who are unhappy about a machine being more intelligent than themselves. The proof may come from future ‘Luddites’, who will (attempt to) destroy these machines, because they do consider them intelligent, and a threat to humans. Ironic, because I consider Luddites to be a threat to humans.

I was trying to second guess some of the motivation of the question.

There is an assumption in the question that we are unable to understand, but a machine might. A common thread discussing issues with our inability to grok a lot of modern physics seems to hinge upon our limited ability to wrap our heads around greater than three dimensions. Sure we might manipulate higher dimensions, but “understand”? Well that seems to be another semantic argument.

Hard to disagree here. I have absolutely zero idea how we do it. The question was intended to try to side step the AI argument, by building upon a being we already believe has “understanding”. Ourselves. Either we do (or can) “understand” the nature of everything, or we don’t (or intrinsically can’t). If we don’t - can we be modified so we do?

So, suppose we believe we have a mathematical representation of the fundamentals of the universe - say, some hideous ten dimensional mess that we simply cannot visualise. Can we augment our 3D processing to the point where it can cope with a 10D world? Could that enable us to have an innate “understanding” of the physics, in much the same way as we ordinarily understand the 3D world of Newtonian physics? I’m deliberately making all sorts of limitations and cutting corners here - but doing so might make for a more productive argument than going down the usual tired AI route. (I remember when Roger Penrose visited promoting The Emperor’s New Mind. Some of my colleagues - one in AI, another a physicist turned philosopher - tried to give him a hard time. But it all gets a bit stale. Roger is a nice guy, and very smart. But he needs to stick to physics.)

In that there is always something to learn, we never ‘understand’ anything. But if there is a level of ‘understanding’ that we can’t achieve, then we’ll never know about it anyway, unless some unknown entity (in the scientific sense of unknown), or a machine that we make, tells us.
I should point out that I think the concept ‘understand’ is overrated. I’d say our ability to manipulate, in order to derive the ‘understanding’ of things we can’t wrap our head around, is more ‘understanding’ than simple ‘visualizing’. We could discuss this more if you like.

I’m not sure I want to alter my ability. I kind of like it, warts and all. But IMHO, the best route to that end would be simulation of the human brain (and whatever other parts of our bodies are necessary). When it’s done, if you want to ask the machine how to modify yourself, feel free!
I hope you are young. I’ll consider myself lucky to get another 20 years, and it doesn’t look that promising. But if you have 50 years to wait, or more, who knows?
I think the concept of ‘understanding’ the universe within our ability to comprehend ‘understanding’, can be defined by a small set of suppositions. More along the lines of ‘understanding’ being no big deal.

As for AI, it’s interesting, but since we have the roadmap in the brain, I’m guessing that will get us there faster than inventing a new form of intelligence and ‘understanding’. But I may be overly pragmatic in that regard. More we can discuss if you like.

Sorry I haven’t replied to this yet.

I’ll respond to this at the end because I think it’s your most important point, and will take the longest to reply to.

That’s not the argument. It is supposed to be intuitively obvious that the person in the Chinese room doesn’t understand Chinese–even though the person is executing the purportedly correct program, whatever it might be. From this, it follows that program execution is not sufficient for understanding.

This is not true. We know that it enjoys chocolate because it exhibits certain behaviors and it belongs to a certain natural kind–a kind that we know to have evolved to enjoy and not enjoy on the basis of a sense of taste. There’s a lot of background knowledge that goes into our inference about the thing enjoying chocolate. People are not behaviorists about this–they make assumptions about natural kinds. But when we’re talking about a machine that some human being programmed, all bets are off. We no longer have a basis to think that the behavior exhibits enjoyment rather than something else. It may look just like enjoyment so far, but since this thing isn’t of a kind with animals, we don’t have any reason to think its behavior will continue to resemble enjoyment.

X can imply Y without X having to be defined in terms of Y. Several trivial examples immediately come to mind. So I’m not sure where you’re coming from here.

When we predicate anything of anything we’re not just characterizing its past behavior but also projecting future behavior. I’m not sure how to argue for this, since it seems obvious to me. Can you think of a counterexample?

When it comes to human beings, to our observations of behavior we add in a lot of background knowledge (or, more probably for most people, instincts and traditions) about the human being’s place in the biological and social scheme of things. All of that is inapplicable to a machine exhibiting behavior programmed into it by someone. We can’t use any of our biological and social assumptions when evaluating machine behavior.

But it is relevant to whether we are licensed to make predictions about future behavior. And, as I said, every time we predicate any attribute to any object we’re not just describing what we’ve seen but also predicting what it will do.

Now to your first point:

What you’re suggesting has been characterized as the “virtual person” reply to Searle. You’re saying it’s a mistake to insist that the human being who has internalized the Chinese room is the only candidate for understanding. By virtue of the human’s executing the program, a sort of “virtual person” is created, and that virtual person understands even if the human being does not.

My reply to that is to point out that the virtual person does not exhibit understanding-like behavior. The most salient illustration of this is the fact that the virtual person’s execution is fraught throughout with “halt and return control to the controlling system (i.e., the human being)” states. This is extremely atypical of any system we know of that understands things, and gives us reason to suspect that the virtual person isn’t an understander.

I doubt you’ll buy it when I put it that baldly. To work up to it a little more slowly:

Imagine two Turing machines. One simply takes two strings and adds them together. The other does something else (whatever you like) and occasionally goes into a state which simulates the adding machine. The simulated adding machine adds two strings, then returns the output to the instantiating machine.

It is typical to say that the simulated adding machine is computationally equivalent to the “real” one, and so there is no important ontological distinction between them.

Of course I have to grant they’re computationally equivalent–that’s comp sci 101. But I don’t grant the inference. Just because they’re computationally equivalent, this doesn’t mean there’s no significant ontological distinction between them. To explain why, I’ll note that while they are computationally equivalent, they’re behaviorally distinct. At the end of the process, the “real” adding machine halts. But the “simulated” adding machine doesn’t halt–it returns control to the instantiating machine. Those are different behaviors. So there’s an ontological distinction between them. I doubt anyone would deny this, but I’m just highlighting it.

But in this example, it seems clear there’s a coherent system which mostly exhibits the same behaviors as a real adding machine, and it would seem stubborn to say that it’s not an adding machine of some sort, even if it is “simulated”.

Now imagine a different example. As before, there’s a machine which sometimes needs to add numbers. So it goes into a routine which often adds numbers together. But it doesn’t run the routine all at once. Rather, it runs a step of it, then runs steps in a bunch of other routines, then returns to run a step of the adding routine, then runs steps from a bunch of other routines, and then returns to this one, and so on. And at any time during this, the instantiating machine may “decide” not to return to the adding routine, leaving it unfinished.

There is something happening here that’s computationally equivalent to an adding machine.

But behaviorally speaking, the parts of the machine that actually execute the adding operation (when they’re allowed to complete it!) exhibit behavior widely divergent from adding behavior. If they get all the way through the routine, what comes out is equivalent to addition. But what comes out is only a small part of the description of this simulated machine’s behavior. Also included in a complete description of its behavior is the way it constantly returns control to the instantiating machine, and the way that at any moment it may cease the adding process due to factors completely external to its (the simulated adding machine’s) own internal description. The simulated machine behaves extremely differently from a real adding machine. They’re computationally equivalent, but ontologically quite distinct on account of the very different behaviors they exhibit.
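To make the contrast concrete, here is a toy sketch in Python (generators standing in for Turing-machine states - purely my own illustration, so take the details loosely). The “real” adder just runs and halts with its result; the “simulated” adder hands control back to the instantiating routine after every step, and that routine is free to interleave other work or abandon the addition altogether.

def real_adder(a, b):
    """The 'real' adding machine: does its work, produces the sum, and halts."""
    return a + b

def simulated_adder(a, b):
    """The 'simulated' adding machine: one step at a time, returning control
    to whatever machine instantiates it after every step."""
    total = a
    for _ in range(b):
        total += 1      # one step of the adding routine...
        yield None      # ...then hand control back to the instantiating machine
    yield total         # final step: deliver the result

def instantiating_machine():
    """The controlling machine: interleaves the adder's steps with its own work,
    and could 'decide' to abandon the adding routine at any point."""
    adder = simulated_adder(2, 3)
    result = None
    for value in adder:
        # ...steps of any number of other routines could run here...
        if value is not None:
            result = value
    return result

print(real_adder(2, 3))           # 5 - and then the 'real' machine halts
print(instantiating_machine())    # 5 - same output, very different behavior

Same answer either way, which is the computational equivalence; the difference is entirely in the behavior surrounding that answer.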

The same applies to a virtual person. It may be computationally equivalent to a real person when allowed to proceed all the way through its routines. But importantly, however you physically mark out what constitutes that virtual system, that physical network of objects behaves very differently from a person. Real people, for example, don’t return control to instantiating systems. (If it turns out we’re all in the Matrix, then of course, all bets are off. But we have to assume we’re not if we want to talk intelligibly about this.) Real people behave in ways that can be explained by their interests and epistemological state. Virtual people behave in a way that must be explained using a lot of reference to some other machine’s states–some other person’s interests and epistemology if it’s a person instantiating the virtual person.

The two kinds–real people and virtual people–require very different kinds of explanations because they exhibit very different kinds of behavior. They are ontologically distinct.

This doesn’t show that the virtual person fails to understand, but it does show that we can’t simply assume it does understand based on its behavior-if-it’s-allowed-to-continue-its-operations. For X to understand is, in part, to behave by and large in ways best explained by reference to X’s interests. But a virtual person doesn’t behave by and large in a way that is explicable in terms of the virtual person’s interests. Its behavior can be thus described only if we abstract away from a lot of its actual behavior. That is “as-if” behavior, abstracted from actual behavior. This should lead us to suspect that it has “as-if” understanding, not real understanding.

That’s me speaking on behalf of Searle. To clarify something–I actually think a computer running the correct program can thereby understand Chinese. I think that a computer running a program and a human being “running a program” are doing very different things. What the human is doing does not subvene genuine understanding. What the computer is doing, though, can. It’s not computation that’s sufficient for understanding, it’s being controlled in the right way. Too quick, I know. I just want to make sure people know I’m not actually a Searlean about this. I just think he can give better answers than people generally realize.

The explanation of a real adding machine’s behavior can be given in terms of its inputs and our understanding of addition. The explanation of a simulated adding machine’s behavior relies on a lot more than this.

Not trying to horn in on your conversation with Stealth Potato, but I’d like to understand your points.

It follows if you limit program execution to its parts, but not the result. So, OK if that’s how you define it.

Here you veer off with this concept of a natural type. You seem to be using the definition that only humans can achieve ‘human understanding’ because if anything else achieves something indistinguishable from ‘human understanding’, it’s still different because it’s not human. Define it that way if you want, but what is the point if two things are indistinguishable?

Didn’t understand what Stealth Potato was getting at either.

Ditto

Why not? We are talking about a machine that is indistinguishable from a human. It would have to have that same set of background information to work with. And the biological and social assumptions are the way we would try to determine if the machine is human or not.

Here you assume that there is something distinguishable between the human and the machine. You can separate ‘understanding’ and ‘human understanding’ based on that, but how does that apply to ‘understanding’ as a general concept? (I contend there is no difference, since obviously humans ‘understand’ through different methodologies already).

You are saying that the Chinese room works through a series of steps. Are you saying humans don’t? And how does the observer detect that this is how the Chinese room is working internally? Sounds like an assumption that the Chinese room process is defective. I don’t think this ties to your next point, but maybe I’m missing something.

All good up to this point.

You’re adding ‘mostly’ into the argument. This is about two things exhibiting ‘all’ the same observable behavior.

All of that is more of the ‘same results - different means’ argument. The observer has no idea what the means are, otherwise he wouldn’t have any trouble telling the difference between the human and the Chinese Room. If you can’t examine the mechanism, only the results, and the results are indistinguishable, then the mechanism doesn’t matter to ‘understanding’.

Sorry, I don’t understand that at all. What makes a simulated adding machine any different, unless it uses a means humans don’t understand to derive the output from the input? And in that case, it means humans can’t achieve ‘machine understanding’. I think Searle made the mistake of assuming that a human understands Chinese by way of some means or mechanism substantially different from the process used by the guy in the room. That’s where everybody seems to go wrong (IMHO) in this argument. What rational argument or evidence is there to show human ‘brains’ work differently than machines? They both need input and processes. I can’t prove there isn’t something else, given the time and resources I have available, but I’ve never seen anything that would indicate otherwise, except for the conjecture that there ‘might’ be something else.

I’m not slamming you here; I’d like to see if there is something I didn’t understand in your argument. Personally, I don’t see where ‘understanding’ has much to do with the means or the mechanism. ‘Understanding’ is something that can only be defined by the result (again, IMHO).