A proof of the existence of God

now with repaired quoting:

The problem is that we implicitly assume that our informal rules of logic (I’m a mathematician, I can call them informal)
apply to every sentence of natural language. But some of these sentences are just pathological.
The only way out of this, if we want a consistent logic, is to restrict our language.

Not at all. Godel’s Theorem belongs to a strictly formalized logic with a formal language that does not allow this kind of sentence.
And Godel’s Theorem definitely doesn’t produce paradoxes, inconsistencies, proofs of God, etc.

wait a second… When would we get infinite regression?
What we get is infinitely many levels of meta languages.
And every sentence belongs to exactly one of them.

This may be uncomfortable, but surely not as bad as logical inconsistencies.

Anyway: restrict the language you must.

Ah-ha. There’s definitely a flaw:

Let T be the statement “If T is true, then TMcA’s argument contains a flaw”

:smiley:

Seriously, it… looks pretty bloody convincing. I’m betting on Godel’s thingy being the problem (i.e. there’s no precise way of stating S), since replacing “God exists” with “S is false” produces a paradox.

Just in case, I’ll quickly prove I’m rich…

(Whoops, double post, simulpost…)

Probably yes, but…

I can’t remember quite how he did it, but didn’t Godel find a way of expressing S = “S is unprovable in X” in a formal system X? Why can’t we replace “is unprovable in X”, which is a very complicated predicate, with “implies G” for some statement G in X?

I think that the major problem with this argument is the way it is set up. Namely this:

“Let S be the sentence…”

The argument sets up initial conditions that have nothing to do with reality. For example, let’s replace S with E=mc2 and tweak the argument a bit.

Let E=mc2 be the sentence

If E=mc2 is true then we can go faster than the speed of light.

Wow, I just found out how to break the speed of light! Where is my Nobel prize? I WANT MY PRIZE!

Or another variation.

Let Tyrrell McAllister be the sentence

If Tyrrell McAllister is true then the logic of the OP doesn’t work

In other words, logic without a reference point is simply worthless. A good set of definitions also helps.

Slee

jetz is correct. The problem comes from using a non-rigorous language to express logical propositions. Russell, Tarski, et al. dealt with these issues in depth. Allowing a proposition to reference itself allows for these types of paradoxes. Godel’s first Incompleteness Theorem does not apply. For one thing, allowing such self-referential propositions makes the logic resulting from said language inconsistent (in this case proving both B and B’ for all B), and Godel’s Theorem applies only to consistent systems.

jetz is not correct.

However, if jeltz is correct, then my typing is attrocious.

jeltz is right. This is an empty self-referential sentence. It’s a totally closed loop, not a proof. Essentially, this sentence in standard English simply reads:

If this sentence is right, then this sentence is right.

Paradoxes like “this sentence is false” are at least interesting. This is just an empty meaningless phrase. Are we provided with any criteria to judge the TRUTH VALUE of S? No. So what is so curious about this S? IMO, zilch. S is neither true nor false. But not because S being TRUE would imply just the opposite (i.e. being FALSE).

The complexity of the S can be formally reduced to the assumed PREMISE itself that it attempts to prove: “God exists”. Still, we haven’t proven that God exists, because it didn’t logically follow or ensue from any previous statements. It’s not even a non sequitur, because there’s no logical sequence! It’s still up to us to decide whether “God exists” is TRUE or FALSE.

Summary:

S = S => GOD
S = GOD

…and not…

S = S => GOD
S => GOD
Let’s not get the EQUALS with the IMPLIES mixed up!!!

I haven’t studied formal logic at a level where Tarski’s and Godel’s works are rigorously discussed. In modern logic, are self-referential sentences just not allowed, or have consistent schemes been developed for studying some self-referential sentences? In other words, do we have to disallow all of them, or can we just be very conservative about which ones we do allow, the same way that the Zermelo-Fraenkel axioms get around set paradoxes by just being very careful about what we allow to be a set? (I wonder if this thread should be moved to GQ?)

Why do you say that? If you admit to S being a sentence at all (which jeltz and Spiritus Mundi evidently do not), then it must be true. december and I have provided two separate proofs of this. In fact, the sentence you provide,

[ul]If this sentence is right, then this sentence is right.[/ul]
is also true. For, to use december’s reasoning, suppose that it is false. Then it is a conditional with a false antecedent, and therefore true. Unless a contradiction can be shown to follow from its truth, we must conclude that it is true (provided that we accept that it is a sentence).
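For anyone who wants to check this mechanically rather than take december’s reasoning on faith, here’s a quick Python sketch (my own illustration, not anything from the thread). Read “If this sentence is right, then this sentence is right” as the fixed-point condition X == (X -> X) and see which truth values are consistent:

```python
# Read "If this sentence is right, then this sentence is right" as the
# fixed-point condition X == (X -> X). Since X -> X is always True,
# the only consistent truth value for the sentence is True.
def implies(a, b):
    # Material implication: a -> b
    return (not a) or b

consistent = [x for x in (False, True) if x == implies(x, x)]
print(consistent)  # [True]
```

Only True survives, which is exactly december’s point: a false conditional with a false antecedent would be true, a contradiction.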

How did we ever do this? I think that a more accurate summary is as follows:

Let S = S --> B.

Lemma: S.
Proof: Suppose S. Since S = S --> B, this entails that S --> B. From this, together with the hypothesis, it follows that S & S --> B. Hence, B.
Thus, by assuming S, we derived B, so S --> B. That is, S. QED.

Theorem: B.
Proof: By the lemma, S. Therefore, since S = S --> B, we have that S --> B. Thus, since S and S --> B both hold, we have B. QED.

Where in this are equivalence and implication confused?
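As a sanity check on the lemma and theorem (my own quick sketch, not part of the argument itself): at the purely propositional level, if we read the self-referential definition as the biconditional S <--> (S --> B), a brute-force truth table confirms that B is forced:

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b
    return (not a) or b

# Read the self-referential definition S = (S --> B) as a biconditional
# and find every truth assignment that satisfies S <-> (S -> B).
solutions = [(s, b) for s, b in product([False, True], repeat=2)
             if s == implies(s, b)]
print(solutions)  # only (True, True) survives, so B is forced
```

The only consistent assignment makes both S and B true, matching the derivation above.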

Oh, hey! Happy fiftieth post to me :).

Actually, ZF set theory includes Russell’s idea of types in the Axiom of Foundation.

There are many forms of modern logic, and not all of them exclude self-reference in all forms (Quine’s NF set theory comes to mind), but I think it is fair to say that most logicians feel that a structural language which allows for self-referential propositions is unlikely to yield a consistent system. And most logicians really like consistency in their systems.

Infinite regression, infinitely many levels… you say Godel, I say Gödel…

It’s not particularly a problem for a mathematical or logical language, but Tarski was trying to articulate the logic behind truth in natural languages. In trying to avoid such a paradox, he ended up positing an infinite number of levels to natural language, when he should have just said that natural languages are messy and allow such paradoxical statements to occur.

Here’s the problem. You’re using (semi) formal logic on a sentence which is not well formed:

A well formed sentence is any which satisfies the following:

[ul]
[li]The ‘atoms’ of your language, basic sentences which can’t be expressed in terms of others (e.g. “God exists”), are well formed.[/li]
[li]If A is a well formed sentence then not A is a well formed sentence.[/li]
[li]If A and B are well formed sentences then A => B (A implies B) is a well formed sentence.[/li]
[/ul]

Note that you can only apply these constructions a finite number of times.
And, Or, iff are all constructed from implication and not (e.g. A or B = Not B => A), so they’re all well formed.

You’ve got an atomic sentence T, and wish to construct the string S = (S => T).

Assume the string is well-formed, so it consists of finitely many logical symbols. Let’s take a look at the number of logical symbols used on each side. Say there are n symbols in S; then the left hand side of the equality has n symbols, and the right hand side has n+2 (or n+4 if you count the brackets, but they’re just there for clarity). So your string isn’t well-formed.

You’re trying to apply rules of a particular type of logic outside of where they’re valid. Sure, if you want self reference you can use a form of logic where it’s ok to do so, but first order logic isn’t one.
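To illustrate the counting argument (my own sketch, with an arbitrary wrapping format): if you try to build S by repeatedly substituting the definition into itself, the string grows by a fixed amount on every substitution, so no finite string can be its own fixed point:

```python
# Illustrative sketch: substituting S = (S => T) into itself never terminates,
# because each substitution makes the string strictly longer.
s = "S"
for level in range(5):
    s = "(" + s + " => T)"
    print(level, len(s), s)

# A fixed point would need a string s with len(s) == len(s) + 7
# (the extra characters "(" and " => T)"), which is impossible.
```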

The problem is that S isn’t a construct where there’s a separate component A and B. S is equivalent to the entire construct itself, including GOD EXISTS. Therefore to phenomenologically judge it from the perspective where an antecedent yields some given value would be erroneous. When you set the TRUTH VALUE of S, you set the TRUTH VALUE for S as a whole, not some given A with implication B. You can not elegantly separate S from itself in thought space. So if S is FALSE, the entire construct is FALSE!

This is why I say that the complexity of S can be reduced not to a conclusion, but to the given STATEMENT “God exists”.

So when S is TRUE, “God exists” is true. When S is false, “God exists” is false. It’s not that S isn’t a sentence. It’s just that it’s an overly redundant sentence. You’re misinterpreting my version “If this sentence is right, then this sentence is right” because you switch the meaning of “this” to mean first one thing and then another. THIS refers to the entire sentence itself (just like our formally defined S).

Ergo:

S <--> S

No, if you want to use first order logic it actually isn’t a sentence. You see, the problem is that people take shortcuts. Most of the time this is a good thing: it gets damned inconvenient to spell everything out in terms of implications, negations and atomic variables, so you instead write letters to represent a sentence, else proofs could involve ridiculously long strings of characters. Unfortunately it leads to confusion. Here the problem is that you’re trying to insert a finite sentence inside itself, which can’t be done.

S <=> (S => (S => B)) may be a sentence (S if and only if S => (S => B)) but it’s not the same thing. I don’t feel like struggling through it at the moment to see what that would give, but I think it would reduce down to S <=> (S => B), which is what ethnicallynot is saying. The point is, though, that two strings being equal isn’t the same as their statements being equivalent.
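For what it’s worth, the suspected reduction can be checked by brute force: S => (S => B) and S => B have identical truth tables. A quick check in Python (my own, not from the thread):

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b
    return (not a) or b

# Verify that S => (S => B) is logically equivalent to S => B
# under every truth assignment.
equivalent = all(
    implies(s, implies(s, b)) == implies(s, b)
    for s, b in product([False, True], repeat=2)
)
print(equivalent)  # True
```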

Since the term is recursive, let’s try expanding it out to various levels and see what happens. (My logic may be flawed in the following steps - please watch it carefully for me!)

To get a clear look at this, let’s call the overall statement S’, and then only consider the lines of the truth tables of the expanded statement where S has the same truth value as S’.

Level 0:
S’ = S -> G

Truth Table for Level 0:
S G S’

T T T
T F F
F T T
F F T

At level 0, it appears that G must be true, since the only line where S <=> S’ is one where G is true.

Level 1:
S’ = (S -> G) -> G

Truth Table for Level 1:
S G S’

T T T
T F T
F T T
F F F

At level 1, it appears that G can be either true or false, since there are lines in the truth table where S <=> S’ where G is true, and others where G is false.

Level 2:
S’ = ((S -> G) -> G) -> G

Truth Table for Level 2:
S G S’

T T T
T F F
F T T
F F T

It appears that at level 2 G can only be true, since that is the only line in the truth table where S’ <=> S.

If we continue doing this, we find that it alternates between G having to be true and G not having to be true. I’m not entirely sure what to make of this, but I have some suggestions.

We can say that since all of these are valid deductions from the initial statement, then if we allow S’ to be equivalent to S we are creating a statement with some form of contradiction. This contradiction doesn’t appear to be at the propositional level - that is, we aren’t deriving G to be T & F. The contradiction appears to be at the modal level. That is, this proposition (if we want to call it that) is both necessarily true and not necessarily true.

How logicians deal with this sort of thing I have no idea. I suspect they would treat it like any other self-contradictory statement.

Another way of looking at it is to say that we must expand it infinitely in order to determine whether G is necessarily true or not. This leads to the question, “Would an infinite number of expansions be odd or even?” This appears to me to be about the same as asking how many angels can disco on the head of a pin. :slight_smile:
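The alternation described above can be checked mechanically. Here’s a quick Python sketch (my own; level 0 is S -> G, and each further level wraps the previous statement in one more “-> G”). For each level it collects the values of G on the rows where S and S’ agree:

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b
    return (not a) or b

def expand(s, g, level):
    # Level 0: S' = S -> G. Each further level wraps once more: S' = (...) -> G.
    val = implies(s, g)
    for _ in range(level):
        val = implies(val, g)
    return val

for level in range(6):
    allowed_g = sorted({g for s, g in product([False, True], repeat=2)
                        if expand(s, g, level) == s})
    print(level, allowed_g)
# Even levels force G to be True; odd levels allow both values,
# matching the alternation observed in the truth tables above.
```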

I think the problem is infinite regression. When you say that:

(S is true) => (God Exists)

where S is the statement:

(S is true) => (God exists)

You get the statement that:

(((S is true) => (God exists)) is true) => (God exists)

You then try to determine the validity of the part in red, and assume the antecedent of it, i.e. that:

(S is true)

Where S is the statement:

(S is true) => (God exists)

This statement does not imply that:

(God exists)

Unless you assume that:

(S is true) <=> (S is true)

Which I don’t think is valid. Sure, the rule of assuming the antecedent does not specifically prohibit it, but I think that is due to the rule not being made for self-referential statements. When you test a statement (A => B) by assuming A, it is generally taken for granted that the truth of A is irrelevant to the original statement. As a matter of fact, the rule of assuming the antecedent exists precisely because A can be assumed to be either true or false without changing its relationship to B.

I think it is simply a classic equivocation that reduces to an identity. A proposition of the form “if A, then B” may also be expressed as “A implies B”.

So, let C = “God exists”. Then S = (S => C).

I can’t remember my philosophy studies right now, but a little voice at the back of my mind is shouting “Necessary” and “Contingent” and “you can’t prove one from the other”, but quite what that means I don’t know.

Am I barking up the right tree?