What does the Incompleteness Theorem imply?

I assume that you refer to this:

I said “prove it” because I simply cannot tell what you mean by that statement, and I think that if you have to set out, in semi-rigorous fashion, a demonstration of what you intend, it might clarify some things that seem horribly fuzzy under your present wording (“emulated by basic physics (which we know from experience is more than capable of generating arithmetic)”, for instance). I don’t really know whether I agree with the statement (though I suspect not), because I cannot resolve the references. To me, “physics” is a system, and systems are used to evaluate statements. Emulations are models built either for a practical purpose or to assist in an examination of a system. Now, I am by no means declaring these usages to be canonical. That’s just how I understand the terms.

So, I have no idea what you mean by a system that can be considered to be emulated by [a system called physics] which we know from experience . . . And, to be perfectly frank, I am afraid that you might not know precisely what you mean either, since the consequences that you assert for GIT simply are not supported by the actual theorem. Fuzziness in reference and/or implication is nobody’s friend once Gödel’s name enters the conversation.

Well, here’s the thing: I didn’t really ask that question, but I will now. What, exactly, do you mean by “model arithmetic”? I usually take the construction of models to be a product of an intelligence. A model is only a “model” because a conscious mind interprets it as a representation of something else. Otherwise, it’s just a thing unto itself.

So, in what manner are you saying that the physical world can “model” anything?

I think you are drawing some unwarranted conclusions about complexity. The reality underlying a von Neumann architecture is a series of binary electric potentials, yet I can use such a machine to create very complex models.

I see no reason to conclude that the “underlying reality” must always be more complex than the “overlying objects”. Do you have some argument why complexity must always increase as we approach “underlying reality”?

Simple non-linear equations, when iterated, can yield outputs of arbitrary complexity – hence the fields of chaos theory and fractals.
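To make that concrete, here is a minimal sketch in Python of the logistic map, x → r·x·(1 − x): one multiplication and one subtraction, iterated. The function name and the particular values of r and the starting points are just illustrative choices, not anything from the discussion above.

```python
# Logistic map: a one-line nonlinear rule that behaves chaotically when
# iterated with r near 4. Two starting points differing by one part in a
# billion end up on completely different trajectories.

def logistic_trajectory(x0, r=3.9, steps=50):
    """Iterate x -> r * x * (1 - x) and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)   # perturbed in the ninth decimal place

for n in (0, 10, 30, 50):
    print(n, abs(a[n] - b[n]))          # the gap grows from 1e-9 to order 1
```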

Okay, this is going to take a while.

begbert: Limited memory doesn’t decrease the vulnerability of computation devices; it increases their weaknesses. The whole point of Turing showing that the Halting Problem has no general solution was that it demonstrated that some problems cannot be solved even if we presuppose infinite memory and infinite processing time. Limiting memory and processing time shrinks the set of solvable problems even further, although I’m not aware of any general method for determining precisely which problems are solvable in a given amount of time.

And does it matter that Turing machines must use a specific set of symbols? How, precisely, are those symbols represented? For example, normal “computers” use pulses of electric current to represent the states 1 and 0. What do the Turing machines use?

Regarding the first point: the machine must then, of course, have a means of counting how many processing steps it has gone through, yes? How can such a subsystem be made so that it cannot be affected or disrupted? Can we determine whether such a system will necessarily stop for all input?

Regarding the second point: if the computer becomes trapped in an infinite loop of logic, that is disrupting the system. More to the point, if it’s always possible to present a representative system with a problem that it can’t solve, what happens when you present the system emulating the device with a problem? It will necessarily be the case that the system will be forced out of its bounds – the result will not be within the category that defined that system. In English, the system is destroyed.

TVAA, I apologize for taking a short tone with you earlier, but your methods of debate leave a lot to be desired. A lot of the claims you’re making are mathematical, and these require mathematical proofs. Simply saying that something is (obviously) true isn’t enough: you must prove that it’s true.

Actually, infinite processing time is disallowed, because you could solve the halting problem then. Although it’s generally not discussed because it’s terribly complicated, there is a theory of infinite-length programs. An infinitely long program can solve the halting problem for finite length programs, but not for infinite-length programs.

You need to appreciate the difference between a physical entity and an abstract model. The two are not the same: one can be destroyed, the other cannot. If you don’t believe this, please describe precisely a Turing machine that can be crashed–infinite loops are not sufficient, because we have all agreed that some Turing machines won’t halt on some inputs. You seem to be claiming something above and beyond that.

**

Nothing more than 1, 0, and B (the blank symbol) is necessary.

**

Replace each state of the Turing machine with n different states, each corresponding to one step of the execution, and adjust the transitions accordingly. From the nth copy of every state, transition to a trap state, from which no further computation is possible. This Turing machine will halt after n steps, and that is provable.
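One way to realize that construction in code is to carry a countdown alongside the machine’s state, which is equivalent to making n copies of each state. A minimal sketch in Python follows; the encoding of the transition table and the sample machine are my own illustrative assumptions.

```python
# The "built-in clock" construction: simulate any machine, but pair its
# state with a step counter and halt unconditionally after n steps.
# `delta` maps (state, symbol) -> (new_state, written_symbol, move).

def run_bounded(delta, start, tape, n):
    """Run the machine for at most n steps; always terminates."""
    cells = dict(enumerate(tape))
    state, head = start, 0
    for step in range(n):                    # the clock
        symbol = cells.get(head, 'B')
        if (state, symbol) not in delta:     # no rule: the machine halted on its own
            return state, step
        state, symbol, move = delta[(state, symbol)]
        cells[head] = symbol
        head += 1 if move == 'R' else -1
    return 'TRAP', n                         # forced halt, provably within n steps

# A machine that would otherwise run forever (it just keeps moving right):
forever = {('go', s): ('go', s, 'R') for s in ('0', '1', 'B')}
print(run_bounded(forever, 'go', '101', n=12))   # ('TRAP', 12)
```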

Interestingly, something exactly like this can be done to detect infinite loops that use a finite amount of memory.
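For the bounded-memory case, the reachable configurations form a finite set, so a repeated configuration proves the machine is looping. A sketch, again in Python, where step() and the configuration encoding are stand-ins I am assuming for illustration:

```python
# Loop detection for a machine with finitely many configurations
# (state + bounded memory + head position). If a configuration ever
# repeats, the machine is provably in an infinite loop.
# Convention assumed here: step(config) returns None when the machine halts.

def halts_with_bounded_memory(step, config):
    """Return True if the machine halts, False if it revisits a configuration."""
    seen = set()
    while config is not None:
        if config in seen:
            return False               # repeated configuration => infinite loop
        seen.add(config)
        config = step(config)
    return True

print(halts_with_bounded_memory(lambda c: (c + 1) % 8, 0))               # False: cycles forever
print(halts_with_bounded_memory(lambda c: c + 1 if c < 5 else None, 0))  # True: halts
```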

**

Unless it is supposed to behave that way. Not all programs are meant to run once and stop. Operating systems and servers are two notable examples of programs that should run in an infinite loop. Please quote this paragraph and say that you have read it, so that I know I’m not just wasting words here.

** If you use a system to represent another, you have to use a subset of the relationships within the first system for the representation. Interestingly enough, if we postulate that a system can actually emulate another more complex than itself, the amount of information within that system becomes undefinable. (After all, the relationships within the emulated system must then be more complex than the emulating system, so we can then ask “What is the emulated system emulating?” and so on.)

No, they are related – they’re both systems of interactions. Any physical system can be considered a representation of an abstract construction, and (in theory) an abstract construction can be considered a physical system.

Just because you don’t perceive a relationship doesn’t mean there isn’t one. You accuse me of merely repeating a statement without making a genuine point, but it seems to me that’s exactly what you’re doing. How do you support your contention that mathematics and physics are fundamentally different? A single example of physics being unable to generate mathematics would suffice…

Additionally, both the coin and yourself can be considered configurations of a matter-energy system. Input of particular sorts can change the configuration of that system so that it falls outside the bounds of the definition of “you”.

Okay, that was a bit confusing in English. Look at it this way: if you break a glass, what’s being destroyed? All of the matter-energy present in the glass is still there. Isn’t it only the configuration of the stuff making the glass that’s changed? Some changes result in a configuration that fits within your concept of “the glass”, while others result in configurations that are no longer compatible with “the glass”. It’s the same with you: the nuclear blast is merely one example of an input to the system of “you” that the system can’t handle.

The physics underlying the circuits is sufficiently complicated to model arithmetic.

The reality that lies “beneath” the calculator (for example, currently known subatomic particles in interesting configurations) involves certain interactions. These interactions also underlie systems that you’ve acknowledged do model arithmetic. Thus, GIT limits the behavior of the physical system, and thus the calculator.

GIT only limits systems from being able to make all truth assertions. When subatomic particles start relying on their ability to make truth assertions, then I might get worried.

Yes, it’s true that if you go about interpreting the arrangements/behaviors/properties of subatomic particles as self-referential assertions of truth, you’ll probably run across some arrangements which, under the selected interpretation, spell out statements without determinable truth value. So what? It just means that your interpretation system is subject to GIT. The subatomic particles don’t care.

It really seems that the only way to make this work will be to take things in very small chunks. TVAA, you keep making “big picture” pronouncements that seem to be extrapolations from a faulty understanding of GIT. There’s no way to be absolutely certain (though I am very certain) because you refuse to actually present the structured argument. So, without intending offense, I am going to ignore any more broad pronouncements from you and try to get you to answer some very specific questions with precise detail. (Because precision and detail are absolutely required when applying any mathematical theorem.) I simply don’t see any point in engaging in a continued round of
[ul][li]“GIT doesn’t apply to X”[/li][li]“Yes it does, X uses/can model/has an interpretive similarity to arithmetic.”[/li][/ul]
So, in what manner are you saying that the physical world can “model” anything?

Please be specific. Exactly what meaning do you invest in the English symbol “model” that applies to the physical world absent any interpretive consciousness?

I’m with Spiritus Mundi at this point. Your statement, above, is an example of your tautological thinking that keeps going around in illogical circles.

For any form of mathematics that we could conceive of, since you accept that we humans are part of physics, you would say that physics generated that math as soon as we described it to you. If we can think of it, then it is part of physics. You’ve defined an axiom, a=b, and said “using only this axiom, prove it wrong”.

You say a proof would be this: if we (physical entities) could describe a mathematics that physics couldn’t generate. To have a logical argument you need a linked series of different ideas that demonstrate a conclusion. You are not doing this.

You remind me of a story. A man in ancient India is arguing to a friend that there is no reality outside. Everything is in the mind, everything is a perception, even “danger” and fear are just false shams.

A snarling tiger suddenly leaps out of a clump of bushes! The man races up a tree and clings to a branch, his teeth chattering and his body shaking. “What were you saying about there being no danger or fear?” says the friend. “Who’s afraid?” says the man in the tree.

You are mistaking the map for the territory.
Get thee to a logic class.

I’m not sure I know how to make this clearer…

First, I should note that anything that can be said in a mathematical proof can be said in English (with sufficient care for consistency). It doesn’t matter whether I say “the acceleration of an object is directly proportional to the force applied and inversely proportional to its mass” or “F/m = a, where ‘F’ is force, ‘m’ is mass, and ‘a’ is acceleration”. They’re fundamentally the same – only the appearance is different.

Generally, mathematics is much more convenient. However, there are some concepts that mathematics has not yet developed methods to handle, so English will have to suffice.

Bear with me for a moment. When I was in fourth grade, my science class was discussing how energy could be stored in the “tension” between atoms. My teacher suggested that we think of the interatomic forces as being like springs – since we’d had extensive real-life experience with springs, it was easy to conceptualize the interaction of the atoms by comparison. When I spoke to her after class, she pointed out that the properties of actual springs are a direct result of the interatomic forces in the metal. We had “understood” the behavior of the atoms by comparing them to something we were familiar with, but ultimately we were trying to compare the phenomenon with itself. “Things are what they do,” she told me – we could only ever understand what the atoms were doing by describing them directly, not by comparing them with something else.

You’re familiar with the Game of Life, I take it? Each of the squares in the grid turns ‘on’ or ‘off’, depending on the states of the squares around them; this creates patterns that evolve and change with simulated “time”.

The classic Game is as follows: if an ‘on’ square is surrounded by two or three ‘on’ squares, it will remain on in the next iteration of the pattern. If an ‘off’ square is surrounded by exactly three ‘on’ squares, it will turn on in the next iteration. Otherwise, the squares are set to ‘off’.
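Those two rules translate almost word for word into code. Here is a minimal sketch in Python; storing the grid as a set of ‘on’ coordinates is just my own choice of representation, not anything taken from the applet mentioned below.

```python
# One generation of Conway's Game of Life. The grid is the set of (x, y)
# coordinates of 'on' cells; every other cell is 'off'.

def neighbours(cell):
    x, y = cell
    return {(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def next_generation(on_cells):
    candidates = on_cells | {n for c in on_cells for n in neighbours(c)}
    new_on = set()
    for cell in candidates:
        count = len(neighbours(cell) & on_cells)
        if cell in on_cells and count in (2, 3):    # 'on' with 2 or 3 'on' neighbours survives
            new_on.add(cell)
        elif cell not in on_cells and count == 3:   # 'off' with exactly 3 'on' neighbours turns on
            new_on.add(cell)
    return new_on

# A "blinker": three cells in a row that oscillate between horizontal and vertical.
grid = {(0, 0), (1, 0), (2, 0)}
for _ in range(4):
    print(sorted(grid))
    grid = next_generation(grid)
```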

For anyone who’s not familiar with it, take a look at this site. It has an Applet that runs Life and several other rule sets; there are a wide variety of interesting patterns that have been discovered.

Mathematicians found that certain configurations of ‘on’ squares could manifest behavior equivalent to the rules that determined the behavior of the squares. You can see one such configuration by opening the Unit Cell demo within the Applet; there’s a short explanation.

This complicated structure can be considered to have two possible states: “meta-on” and “meta-off”. When placed in a grid with copies of itself, this pattern reproduces the Game of Life. The patterns of meta-on squares evolve and change just as the patterns of on squares do. In short, the cellular automaton is modeling itself.

With this meta-system, we could go on to construct any of the patterns we can make in the basic system. The only real difference is that the meta-system is far less complex than the system that it’s being simulated or emulated in. Simulating one meta-square requires a grid of – well, I’m not sure precisely how many, but I think it’s on the order of several thousand squares.

With enough meta-squares, we could repeat this whole process: we could construct a system of meta-meta-squares out of the meta-squares. Each system would be able to do the same thing the more basic system could do – the only difference is that the memory requirements of each meta-system become greater and greater.

Given a finite number of basic squares, the meta-systems that can be created with those squares can include only a much smaller number of meta-squares.

Are you following me so far?

FranticMad:

My statements are not tautological. While perhaps less than coherent, they have clear meanings in ordinary English – meanings that your responses show you aren’t picking up on. I suspect that you are unable to consider certain types of arguments if they are not couched in a particular notation.

The problem is therefore twofold: I don’t know enough mathematical notation to be able to express some of these concepts in it, so I’m forced to resort to English; and you’re so dependent on systems of notation that you can’t accept arguments not presented in them.

We’ll both just have to adapt.

Oh, and one more thing – I enjoyed the story about the Indian gentleman and the tiger very much, but you’re missing its point.

I follow you and I am familiar with the game of life. I do not, however, see an answer to my question forthcoming. Please note that I specified a lack of an interpretive consciousness.

If you do not mean something that occurs independent of an interpretive consciousness, then please specify why you see the consequences of GIT in that which is interpreted rather than in that which is doing the interpreting.

As a side issue: are you actually intending to argue that complexity is a measure of the number of elements in a system?

Holy cow! What a thread!

It’s really late, so I’m just going to chime in with some clarification of some of the terms and concepts I see getting bandied about, then go to bed.

Turing Machine
A symbol manipulation device. Strings of symbols go in, and other strings of symbols come out. Internally, the TM consists of a set of states with rules for having the machine transition from one state to another (and what sort of output to create) depending on the current state and the symbol present at the input of the machine. A TM also has a (theoretically) infinite tape on which to store symbols as it does its thing. Certain states within the machine are called “stop” or “halt” or “accept” states, because each TM is created to recognize certain strings as valid input for it. The full set of strings accepted by a TM is known as the “language” of the machine. If a string in the machine’s language is fed to it, then the machine will halt in an accepting state some time after the point where the end of the input string is reached. If a string is fed in, and the machine stops in a non-accepting state at the end of input, it is said to crash. If, at the end of input, the TM is stuck transitioning continuously around a subset of its states without accepting or crashing (yes it can happen), it is in an infinite loop.
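For concreteness, here is a bare-bones sketch of such a machine in Python. The encoding of the transition table and the sample machine (which accepts binary strings ending in 0) are my own illustrative assumptions, not a canonical definition.

```python
# A minimal Turing machine simulator. `delta` maps (state, symbol) to
# (new_state, written_symbol, head_move). A missing rule means the machine
# stops; stopping in a non-accepting state is the "crash" described above.

def run_tm(delta, tape, start='q0', accept='acc', max_steps=10_000):
    cells = dict(enumerate(tape))            # the (conceptually infinite) tape
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            return 'accept'
        key = (state, cells.get(head, 'B'))  # 'B' is the blank symbol
        if key not in delta:
            return 'crash'                   # halted in a non-accepting state
        state, symbol, move = delta[key]
        cells[head] = symbol
        head += 1 if move == 'R' else -1
    return 'possible loop'                   # step budget exhausted

# Example machine: accept binary strings whose last digit is '0'.
delta = {
    ('q0', '0'): ('q0', '0', 'R'),           # scan right to the end of the input
    ('q0', '1'): ('q0', '1', 'R'),
    ('q0', 'B'): ('q1', 'B', 'L'),           # step back onto the last digit
    ('q1', '0'): ('acc', '0', 'R'),          # last digit 0: accept
}                                            # last digit 1: no rule, so it crashes

print(run_tm(delta, '10110'))                # accept
print(run_tm(delta, '1011'))                 # crash
```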

Universal Turing Machine
A UTM is a Turing machine with the special property that it can simulate the operation of ANY Turing Machine, even other UTMs. The computer to which you have been glued for all this time is a real-world approximation of a UTM (it lacks the infinity of tape, otherwise it’s there in the flesh). It can simulate a TM that is a word processor, or a TM that is a pocket calculator, etc. It will behave exactly as the TM would, halting, crashing or looping on various input.

Any computer program is essentially creating a TM on a UTM, and running some appropriate input through it to see what happens.

The Halting Problem and its Undecidability
There is no upper limit on how many transitions a TM may go through before halting on (accepting) a particular input string. So if I wish to see whether a machine halts on a string by feeding the string into it, how do I know whether the machine is in an infinite loop or just needs to make 864 trillion transitions before halting on this string? I wonder if there’s a way to DECIDE, without having to run it through, whether a particular string will cause a particular TM to loop, crash, or halt? Turns out the mathematically proven answer is: NO.

Translated into real-world terms: “Can I write a program that can tell absolutely, by some magic means other than tediously tracing through the code, whether my other new program will never crash or hang my machine?” Again: NO. This is why the Halting Problem is also known as Programmer’s Employment Insurance.
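The proof is a short diagonal argument. Here it is sketched in Python-flavoured form; halts() is the hypothetical universal decider that the argument rules out, not a function anyone can actually write.

```python
# The diagonal argument behind the undecidability of the Halting Problem.
# `halts` is the *assumed* oracle; the point is that no correct
# implementation of it can exist.

def halts(program_source, input_data):
    """Hypothetical oracle: True iff the program halts on the input.
    No such function can actually be written."""
    raise NotImplementedError

def contrary(program_source):
    """Do the opposite of whatever the oracle predicts about running
    `program_source` on its own source."""
    if halts(program_source, program_source):
        while True:            # oracle said "halts" -> loop forever instead
            pass
    else:
        return                 # oracle said "loops" -> halt immediately

# Ask the oracle about `contrary` run on its own source code: whichever
# answer it gives, `contrary` does the opposite, so the oracle is wrong on
# at least one input. Hence no such oracle can exist.
```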

What computers and TMs have to do with Goedel
Absolutely everything. UTMs existed as a mathematical construct before anyone ever tried to build one. They were used to take strings of symbols representing statements of number theory and produce other statements of number theory from them.

A trivial example: the fact that 3 + 5 = 8 means that I can construct a TM, put in the string 3 + 5 (or some string of symbols that represents this) and the machine will simultaneously halt on the input, and output a symbol that means “8”. I can also construct it to take ANY two numbers and give the correct sum as an output. Whether I construct it on paper and trace the state transitions with a pencil or build a pocket calculator, it works essentially the same way.

A better example: I could even build machines to do trig identities – for example, one that takes sin(x) / cos(x) and outputs tan(x).

Once TMs had begun to show how useful they were for generating statements of number theory, the great mathematician Hilbert called upon the world math community to use these wonderful concepts to prove once and for all that mathematics was COMPLETE (all true mathematical statements could be generated from the starting axioms by using the standard symbol-replacement techniques) and CONSISTENT (by applying all applicable replacement techniques to the initial axioms and their successors, the only statements generated, known as “provable” statements, would in fact be true statements, i.e. no falsehoods).

Goedel screwed everything up for everybody by creating a statement of number theory that, if it were provable, would be false, but if it were unprovable, would be true, showing that a (sufficiently powerful) axiomatic system could be either complete OR consistent, BUT NOT BOTH.

Good night.

** The behavior of the world does not depend on human perceptions, and it’s the behavior of the world that’s mathematical. Electrons do not need people to write equations that tell them how to move.

If by “elements” you mean relationships, yes. The number of relationships within the system determines how much information it contains.

scotandrsn

A good definition of terms, but this statement is simply wrong. As I mentioned earlier:
[ul][li]The halting problem is a general statement about Turing Machines. It applies to any general attempt to test decidability of a given algorithm and input pair. Thus, it applies to computers/TMs as a class. It has no particular implication for any specific algorithm/TM. In relation to Hilbert, the halting problem proved that number theory is not complete+consistent+decidable. (Which was also part of the Hilbert Programme, though you failed to mention it.)[/li][li]Gödel’s First Incompleteness Theorem says that no number theory that includes a specific axiomatic base can be both complete and consistent. In relation to Hilbert, GIT proves that number theory is not complete+consistent. This places absolutely no restrictions on the behaviors of computers/TMs except for TMs that are explicitly simulating a representation of axiomatic number theory (i.e. generating statements of number theory).[/li][/ul]That’s it. GIT places absolutely no restrictions on the behavior of any program that is not generating statements of number theory, including those that add 3 + 5 and return 8.

TVAA

I agree. So present a case of “modeling” that does not require an interpretive consciousness. In what sense does “meta-life” model “life”? Does one falling rock model another?

We use mathematics to describe the behavior of the world. My statement is unambiguously true. I am not certain about yours. Do you mean to imply something other than “mathematics can be used to describe some behaviors of the world?” If so, precisely what additional implication do you see?

Correct. Nor do they fail to move in certain manners because electrons can be harnessed to encode a message that humans interpret as GIT.

No. By elements I meant elements.

You argue that “meta-life” is far less complex than the life grid on which it is instantiated, since your point about modeling seems to hinge upon “one thing behaving exactly like another thing, at a certain level of analysis”. This strikes me as an odd usage of “complexity”. It is certainly at odds with standard complexity theory in computer science, which would seem to be the appropriate context from which to discuss complexity in the Game of Life (and in TMs [or non-deterministic computers] in general).

Complexity theory in computer science addresses the resources required to solve a problem. It has nothing to do with how much information is contained within a system. The standard notation for this is big-O notation. Life and “meta-life” have identical complexity under standard usage.

If your point all along has been that the elements comprising any system must be able to encode at least as much information as (and almost always more than) the system itself, then I agree with you. (The proof is trivial, since any configuration of the system that encodes information is necessarily a configuration of the elements that encodes information. The degenerate case is one in which all possible configurations of the elements are valid states for the system.) This has nothing to do with complexity in the mathematical/comp. sci. use of the term. It also has nothing to do with GIT.

That’s not standard usage. The machine is said to reject the input, not crash.

**

This isn’t clear. Are you saying that by tracing through the code you can determine whether a given program will halt for a given input? That’s not correct.

**

Calculating sums is not the same as generating statements of number theory. Why is this so hard for people to understand?

By the way, you need to check your history. Hilbert made his call for theorems long before Gödel or Turing published anything, and Gödel’s paper was published five years before Turing’s famous paper on computability.

**

I already have. What I’m trying to do now is understand why you’re not realizing this. Does the algorithm for the Game of Life have a consciousness? If it doesn’t, haven’t I just presented an example of a non-conscious system modeling another? The squares don’t “know” that their collective behavior is implementing the meta-squares: they simply are.

** Mathematics is ultimately derived from our experiences of the way the world works. It’s like explaining the interatomic forces by thinking about the spring – ultimately, the explanation is self-referential.

If we found something in the world that couldn’t be described by one set of mathematical principles, it could be described by another. To be indescribable, it would have to lack all properties.

But human interpretation isn’t required. The movements of electrons are sufficiently complex that they “contain” arithmetic – as a result, GIT shows that there are configurations of electrons that can never be reached. They do fail to move in certain manners because of GIT.

Define your term, please.

It’s a mathematical fact. Now, I’m not sure about an infinite life grid emulating a meta-life grid…

No – it has everything to do with both.

These statements (along with many others you’ve made) all require proof. Please prove them, or stop asserting them.

Even if we assume the above statement is true, the succeeding statement:

does not follow, for reasons that I’ve already explained. GIT does not constrain the configurations of electrons any more than it constrains the configuration of a pencil and paper; all GIT does is constrain the meanings which we can attribute to those configurations.

A question I posed myself once: does GIT say something about math or does GIT say something about the way we think about math?

Definitely the former.