What does the Incompleteness Theorem imply?

Beg pardon? Are you suggesting that a human mind (or at least a consciousness) is required for a model of mathematics to exist? How does the brain “know what it means” if rule-governed systems aren’t sufficient?

All implementations require models. For example, the Game of Life can’t work unless the rules that govern the cells are established. It doesn’t matter if these rules are encoded in the physical structure of a computer or if they’re encoded in the human mind: the entirety of the possible patterns within the Game and their evolution is defined.
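To make that concrete, the Life rules fit in a few lines of Python (this encoding is just one arbitrary choice of substrate; a circuit or a pencil-and-paper bookkeeper would do the same job):

```python
# Conway's Game of Life, cells stored as a set of (x, y) coordinates.
# These few rules fully determine the evolution of every possible pattern.
from itertools import product

def step(alive):
    """Apply the standard B3/S23 rules for one generation."""
    counts = {}
    for x, y in alive:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in alive)}

# A "blinker" oscillates forever with period 2.
blinker = {(0, 0), (1, 0), (2, 0)}
```

Once those few lines are fixed, the fate of every starting pattern is fully determined, whoever or whatever executes the rules.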

Additionally, all languages necessarily have implementations. If they didn’t, there’d be no way to use the rules of those languages to reach conclusions or generate “new” statements.

No, there are no uncrashable computers, because there’s no way to absolutely control the interaction the computer has with the external world and hence restrict its input. Computers can be made so that they can’t be crashed by a finite, pre-determined set of inputs, but no computer can ever be made that can’t be crashed.

I agree that, if we accept that we can completely control the input received (as in abstract mental/logical models of computational systems), then uncrashable systems can be made. This assumption is necessarily false in reality – it’s possible that the universe is configured in just such a way that some particular computational system within it is never confronted with the critical input, but we can’t tell ahead of time whether that’s the case.

A human mind is necessary to understand such a model. Whether it has independent existence is one for the philosophers.

So? These models are not the same as the theories in GIT. If you don’t know what those are (and I don’t think you do), you need to go read a book. I recommend Godel’s Proof, although the author escapes me at the moment.

**

No, they don’t. We can write the specifications of a language, generate new statements, and prove things about it without ever implementing it. If you want to call writing the specifications on paper an implementation, well, you’re being disingenuous, IMO.

**

Your second paragraph is exactly what I, and everyone else, have been talking about. Yes, real-world computers can be made to crash (though generally only by hardware fault), but who cares? None of the mathematicians or logicians, that’s for sure.

I doubt very much this is the case. Whether a physical system models arithmetic or not is an empirical matter, not merely philosophical. In any case, consciousness is not required: computers can apply semantic rules and reach conclusions in logic too.

People used to think that consciousness was required to perform arithmetic… when it’s now quite obvious that it isn’t.

GIT applies not only to the theories, but also to the models those theories describe.

So you’re suggesting that computational systems are incapable of modeling arithmetic? How does the human mind do it, then? Aren’t the languages in which arithmetic is described ordered by rules that can be mathematically reproduced?

But how is it being implemented in your mind? The paper is just a record-keeping device; the implementation is being done within your brain. How does the brain implement those rules?

**

Incorrect. Mathematicians and logicians can’t even imagine an uncrashable computer that accepts all input. They can only imagine devices that won’t crash for restricted and limited types of input.

It’s about now that you should be providing serious citations for your claimed “facts”.

I find the general idea quite appealing, and even believable – that for a given computer of sufficient complexity there exists a sequence of inputs that will cause it to fail catastrophically (like the record players in Godel, Escher, Bach (maybe you’ve read it?)). Could be true, might not be, but the onus is on you to provide proof of your assertions, rather than to keep on monotonously asserting them like some surreal mantra.

But it is clear that your assertion as stated is plainly wrong. I have a little device: when you click the button, a counter is incremented, and when it gets to 999 it rolls around to 000. It is a computer, of sorts. I’ve tried, but I can’t crash it.

To answer a question you asked me earlier: the reason why I think that we (human beings) are not susceptible to GIT is that our brains are not formal systems of arithmetic.

Interesting. Is there a series of actions you can take with your counter to subtract? (I bet there actually is a set of steps you could take to implement subtraction [and the rest of arithmetic], although I have no idea what they’d be.)
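On reflection, there is at least one such series, assuming the counter really rolls over from 999 to 000: clicking the button 1000 − k times subtracts k, modulo 1000 – the old ten’s-complement trick. A sketch, with a hypothetical Counter class standing in for the physical gadget:

```python
# Hypothetical stand-in for the physical gadget: a three-digit click counter
# whose only operation is "increment, rolling over from 999 to 000".
class Counter:
    def __init__(self):
        self.value = 0

    def click(self):
        self.value = (self.value + 1) % 1000

def subtract(counter, k):
    """Subtract k (mod 1000) using nothing but the button: click 1000 - k times."""
    for _ in range(1000 - k % 1000):
        counter.click()
```

So a device that can only count up can, with the right sequence of button presses, be made to subtract – the operation is there in the roll-over, whether or not anyone built it in on purpose.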

It’s part of GIT: no sufficiently complex system can evaluate all statements. What happens when you give such a system one of the statements it can’t evaluate?

It “crashes”: it’s forced either to enter an infinite loop or to enter a configuration it isn’t capable of sustaining (depending on the nature of the system and the statement in question).
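The “infinite loop” half of that claim can at least be illustrated with a toy system: Hofstadter’s MIU puzzle from Godel, Escher, Bach (already mentioned in this thread). The string “MU” can never be derived from “MI”, so a naive proof search for it would run forever; the sketch below cuts the search off at a finite depth instead:

```python
# Hofstadter's MIU system: start from "MI" and apply four rewriting rules.
# "MU" is famously underivable, so an unbounded search for it never halts.
def successors(s):
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                      # rule 1: xI -> xIU
    if s.startswith("M"):
        out.add(s + s[1:])                    # rule 2: Mx -> Mxx
    for i in range(len(s) - 2):
        if s[i:i+3] == "III":
            out.add(s[:i] + "U" + s[i+3:])    # rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i+2] == "UU":
            out.add(s[:i] + s[i+2:])          # rule 4: UU -> (deleted)
    return out

def derivable_within(target, depth):
    """Breadth-first proof search, cut off at a finite depth."""
    frontier = {"MI"}
    seen = set(frontier)
    for _ in range(depth):
        if target in seen:
            return True
        frontier = {t for s in frontier for t in successors(s)} - seen
        seen |= frontier
    return target in seen
```

This is only an analogy, of course – a search procedure that never terminates is not the same thing as a machine entering a state it can’t sustain.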

Frantic-Goedel Incompleteness Theorem
If an argument about the implications of GIT and human intelligence is sound, then someone will say something that is true but unprovable.

Frantic-Goedel-Heisenberg-Fourier-Tinkerbell Principle
You can make an accurate statement about GIT, and you can make an accurate statement about the human mind, but you can’t do both at the same time – even if you really, really wish that you could.

This is circular reasoning. You’ve assumed that physics is “fundamentally mathematical”, and then try to make other assertions based on that.

But physics and mathematics are both abstractions. Just because some people use mathematics to model physics does not mean that all of physics is “fundamentally mathematical”, whatever that means. It’s a Newtonian kind of belief, not a demonstrable fact.

So what are you suggesting physics is?

Mathematics is a language; it’s just a very specific and well-defined language. We’ve found that we can use it to describe the world quite adequately. More to the point, our concepts about math are ultimately derived from the universe we live in.

Ultimately, the only thing we can say about the world is what it appears to do, and our description will inevitably be mathematical (or expressible in a way that can be reduced to math).

How can you model physics without using a language of some kind?

I’ll tell you what: show me any system that doesn’t rely on mathematics in some way or form, and I’ll agree that my reasoning is circular.

Lovely, FranticMad!

Way to be helpful, erislover. :stuck_out_tongue:

Seriously, why does everyone have such problems with the idea that physical events are a form of computation? Haven’t they read Greg Egan’s Luminous? :smiley:

As long as we know the way bits of the world evolve, we can use them as symbols and arrange them so that their interactions result in output that represents the solution. That’s how computers work! That’s how our brain works. Even if we posit that the “true physics” is fundamentally different from the rules we currently believe it to follow, this would still be true!

Okay, now you’re just making stuff up. Please quote from the thread I gave earlier and explain exactly how this follows, and what exactly evaluation is in this context, and what it has to do with GIT (hint: nothing).

Cite? I doubt you’ll find one, because you’re wrong.

To answer the two questions in this paragraph, it isn’t, and it doesn’t.

Bullshit. I can imagine one, that just echoes its input as its output.

Because they are not. Computation is an interpretive process. The beads are just beads until a human mind interprets them. The bits are just voltage variations until a human mind interprets them. The planets are merely in motion; it takes a mind to see the dance.

As ultrafilter has pointed out, you can’t just say “Godel” and settle the nature of the Universe. If you want to show that GIT has some macroscopic consequences, then you need to dot your “i’s” and cross your “t’s”. You might start by defining how a physical state of the Universe can be said to be “true” or “false”. Or, as ultrafilter asked, how the Universe “evaluates” input. Or even what “input” means in relation to the Universe as a computational engine.

So far, you have offered nothing but broad statements based upon a theory you have ripped out of its valid context.

This statement is simply wrong. Consider arithmetic, since that is one area where we all agree GIT holds. The Godel statement has a particular Godel number. Do you think that something about that number would “crash” an engine that performed arithmetic? If so, you are wrong. I can multiply with the Godel number. I can add it to things. I can even have it appear as the result of other operations.

I just can’t prove the statement to which the number corresponds in my axiomatic system. That’s it. A consequence for the system, not for the model that the system describes.
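For what it’s worth, this is trivial to check: pick any astronomically large natural number as a stand-in for a Godel number and feed it to an arithmetic engine. Nothing special happens. (The number below is arbitrary – it is not the Godel number of any actual statement.)

```python
# A Godel number is just a (very large) natural number, and an arithmetic
# engine treats it like any other operand.  The value below is an arbitrary
# large stand-in, not the coding of any actual statement.
g = 3**1000 + 7             # 478 digits, but still just a number

assert g + g == 2 * g       # it adds...
assert (g * g) % g == 0     # ...and multiplies, like any other integer
assert g % 2 == 0           # 3**1000 is odd, so adding 7 makes g even
```

The unprovability lives in the interpretation of the number as a statement, not in the number itself.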

I apologize for hammering in a point that Spiritus Mundi just made, but it seems it needs to be hammered. Going back to an earlier post:

Until a conscious entity examines the output of the computer, the computer has not “reached a conclusion” any more than the proverbial tree falling in the forest has made a sound when no one is there to hear it. All a computer is capable of doing before some conscious entity views its output is transitioning into some physical state. The meaning of that state is only imposed by a conscious observer.

Here’s another analogy: consider the physical system consisting of a pencil and paper. With such a system I can write out all of the axioms and definitions of arithmetic, derive new theorems, and whatnot. So what does GIT imply about this physical system?

Precisely this: that no matter what state the paper is in, there is no way to interpret that state in such a way that it corresponds to a valid number-theoretic proof of certain number-theoretic statements. Note that GIT does not prohibit the system from entering any particular state. It instead tells us, the people who interpret those states and give them meaning, that certain meanings cannot be attributed to any of those states.

Points:

  1. The universe, by definition, can have no input or output. Therefore its configuration cannot be interpreted as a “solution” of anything.

  2. Your views of meaning require that human consciousness be fundamentally non-computational. While this is a possibility that needs to be considered, simply assuming it is unjustified.

  3. Orbifold: you’re wrong about the paper-and-pencil system. If you actually are making new markings according to the rules, you’ll find that there are configurations that can be written that can never be derived from your axioms. In other words, no matter how long you slave, you’ll never write down certain patterns unless you violate the rules of the system.

You mean, like a wire that transmits electric current? Okay. I’ll send more current through the wire than it can handle. One burnt-out wire, next!

If the computational system doesn’t interact with the signal at all, it’s not there in the first place. If it carries out a series of steps to transmit the signal, it can be given an input that lies beyond what it can handle, if those steps are sufficiently complex.

Mathematics is derived from the world in which we exist. We can construct devices that represent certain interactions (such as those of arithmetic) only because the basic building blocks that make up those devices are consistent with those principles. Pocket calculators can perform addition, subtraction, and division only because the behavior of matter and energy includes and is more complex than that particular system.

Some physicist once said that, in order to imagine a universe containing only a single electron, we must first accept all of mathematics, since that’s what’s required to describe the behavior of that electron. Quantum electrodynamics requires advanced math, which can be derived from basic principles, from which all of math can be derived.

C’mon, people! We have no grounds to assume that the brain is a magical device that operates on mystical principles forever unable to be described. “Interpretation” is a function of computation, too.

I can write down any pattern I like. There’s simply no way to interpret those patterns so that they correspond to a valid proof of certain statements. Which is precisely my point: GIT imposes no constraints on the possible states of the system. It merely imposes constraints on our interpretation of those states.

No one here is denying that you can “crash” a computer with a hammer, either. But that’s hardly a consequence of GIT. If you honestly think this quip somehow supports your thesis then I don’t see any hope for this debate.

No one is assuming that, and that’s not the point besides. The point is that a universal Turing machine does not interpret the meaning of arithmetic symbols, nor is it somehow a model of arithmetic, merely because it adds and subtracts. Nor is it a model of arithmetic because it operates on mathematical principles.

You have stated that GIT implies that any computer can be crashed. But to paraphrase you for a moment, we have no grounds to assume that GIT has any implications on the physical operation of a computer at all. Arithmetic is modelled mathematically in one way; computation in another. If you want us to believe that GIT, a mathematical theorem about models of arithmetic, has any bearing on computation, then you need to demonstrate a mathematical connection between those two mathematical models. So far, you haven’t done so.

Waving your arms and saying that physical computers operate under mathematical principles doesn’t cut it. If I were to claim that, say, “GIT implies the Poincare conjecture can never be proven”, I would rightfully be subjected to an enormous burden of proof. If I just said, “well the Poincare conjecture is a conjecture about topology, and topology operates on mathematical principles” I would be laughed at. Your burden of proof is at least as high.

** You’re missing my point. If you write down symbols according to specific rules (for example, using the mathematical notation currently in use to describe logic), there will always be collections of symbols you’ll never write down. Those statements aren’t the same for all sets of rules, but there will always be some.

Saying that you can write down any set of symbols you like is true… but to do so you’ll have to set aside the rules you were arbitrarily following. However, the system of your mind – the you that determines “what you want” – places limits on what you can write, nevertheless.

If you can write down whatever you choose, can you choose what you choose, or is that choice the result – the output – of a determinate system? If you choose what you choose, how do you do that?

It doesn’t have to be that obvious or that violent. The point is that there will always be possible interactions with the system that the system can’t handle. Do you think that the rules that govern the input devices are somehow different than the rules that govern the hammer?

All mathematics can be reduced to basic axioms. Humans are finite, and are necessarily unable to comprehend everything implied by the axioms. This is why mathematicians are necessary. Mathematical proofs need to be worked at because we’re not smart enough to understand them instantly – if we were sufficiently intelligent, we’d regard them as being as obvious as “A equals B, B equals C, so C equals A”.

Arithmetic relies on very simple axioms – concepts like sequence. As far as I’m aware, computational systems have a slight tendency to rely on those concepts.

The “physical” and “abstract” worlds are ultimately the same – restrictions that apply to one apply to the other.

Consider this: we can use electrical pulses to represent numbers in binary notation. We can construct systems of electrical pulses – logic gates in certain combinations – that manipulate those pulses in ways that duplicate the concept of mathematical operations. Those configurations are those operations for that system – they’re the operations made manifest!
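Here’s a sketch of exactly that, simulating the gates in Python: a full adder is nothing but XOR, AND, and OR gates, and a chain of them is addition for any pair of (bounded) binary numbers.

```python
# Arithmetic as logic gates: a full adder is two XORs, two ANDs, and an OR;
# chaining them bit by bit gives a ripple-carry adder.
def full_adder(a, b, carry_in):
    s = a ^ b ^ carry_in                          # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))    # carry bit
    return s, carry_out

def add(x, y, width=8):
    """Add two non-negative integers as a chain of full adders (mod 2**width)."""
    carry, result = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result
```

Whether you call that configuration a “representation” of addition or addition itself is, of course, the very point under dispute in this thread.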

We’re used to thinking of symbols as being “empty” – as really having nothing to do with what they’re supposed to symbolize. But when symbols are used to accurately represent those things, they become them – at least, to the degree that they’re accurate and complete.

A perfect model of the weather, for example, would allow absolute prediction of the weather: the model and the reality would be indistinguishable. Fortunately we can never construct such a model, but that isn’t the point.

Yes. That’s the first true thing you’ve said in a long time.

**

For the last time, we are not talking about physical realizations of systems. You can talk about that all you want, but it’s not relevant. A Turing machine can handle any input. If you want to disprove that, you absolutely must start from the definition of a Turing machine. Do you know that?
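To spell out what the definition actually says: a Turing machine’s transition table is total over its alphabet, so on any input it either halts or runs forever – the definition leaves no third, “crashed” outcome. A minimal simulator (the particular machine and encoding are my own toy choices):

```python
# A Turing machine is a finite transition table over (state, symbol); on any
# input it either halts or runs forever -- there is no undefined "crash" state.
def run(delta, tape, state="q0", blank="_", max_steps=10000):
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        sym = tape.get(head, blank)
        write, move, state = delta[(state, sym)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# A machine that flips every bit and halts at the first blank cell.
flip = {
    ("q0", "0"): ("1", "R", "q0"),
    ("q0", "1"): ("0", "R", "q0"),
    ("q0", "_"): ("_", "R", "halt"),
}
```

Feed this machine any string over {0, 1} – including the empty string – and it behaves perfectly well; there is simply no input for which its behavior is undefined.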

**

Yes, they do. However, that’s a far cry from saying that a computational system is a model for arithmetic. The two couldn’t be more different.

**

This is honestly one of the stupidest things I’ve ever read here. Do you really believe this, or are you just saying anything that you think will support your position?

**

No. Statement S implies statement T only if the implication is true no matter what meanings we assign to the symbols.

**

I don’t need an umbrella to protect me from computer output.

Look, you’re floundering badly. You’re making bizarre assertions on a topic that you clearly don’t know much about. What Orbifold, Spiritus Mundi, and I have said is actually true. This debate is hopeless unless you actually listen, admit that you’re wrong about some things, and learn.

** Really? Give it the Halting Problem. We’ll see how well it can handle that input. :wally
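To be precise about what “giving a machine the Halting Problem” means: by the classical diagonal argument, any claimed halting decider fails on a program built from it – though the machine in question runs forever rather than “crashing”. A sketch, with a deliberately trivial stand-in decider:

```python
# The diagonal construction: given any claimed halting decider halts(f, x),
# build a program that does the opposite of whatever the decider predicts
# about running that program on itself.
def make_trouble(halts):
    def trouble(f):
        if halts(f, f):      # decider says f(f) halts...
            while True:      # ...so loop forever;
                pass
        return "halted"      # ...otherwise, halt.
    return trouble

# A deliberately trivial candidate decider that always answers "no":
trouble = make_trouble(lambda f, x: False)
```

Here trouble(trouble) halts, so the always-“no” decider was wrong about it – and the same construction defeats any candidate decider, trivial or not.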

** You don’t find this obvious? Very well – I’ll start a new thread about it.

**

** When we have a language in which we can express statements, applying the grammar of the language to the statements allows us to derive consequences. Stating that something is the case is only possible with a language! Without one, how can something be asserted?

How do we define addition? By describing a set of relationships between numbers. If a circuit can manipulate two representations of numbers and return the representation of their sum, that circuit is addition! Addition is what addition does – the concept is operationally defined. Since the circuit carries out the operation…
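That operational characterization is short enough to write out: the Peano recursion, a + 0 = a and a + S(b) = S(a + b). Any device realizing this relation is, for its inputs, doing addition (a toy transcription, assuming ordinary non-negative integers):

```python
# Addition pinned down operationally by the Peano recursion:
#   a + 0 = a    and    a + S(b) = S(a + b)
def succ(n):
    return n + 1

def peano_add(a, b):
    """Addition of non-negative integers via the recursive definition."""
    return a if b == 0 else succ(peano_add(a, b - 1))
```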

**

** If you were being simulated along with the weather, you would. The simulated weather would interact with the simulated you in the same way that “real” weather interacts with the “real” you.