Can you do without conditional proof if you have indirect proof?

It’s a logic question. In standard propositional logic, is everything provable by CP provable by IP without using CP?

Maybe.

What exactly do you mean by those two terms? (And what exactly are the other rules to evaluate this question in the context of? Everything will be quite sensitive to that…)

For example:

Everything in standard propositional logic can be deduced from the following rules:

Sequence: From a string of entailments A |- B |- C |- … |- D, you can deduce A |- D. [If you like, you can reduce this to the 0-ary case (A |- A always) and the 2-ary case (from A |- B and B |- C, deduce A |- C)]
And: A |- B & C & … & D if and only if A |- B and A |- C and … and A |- D. [If you like, you can reduce this to the 0-ary case (A |- True always) and the 2-ary case (from A |- B and A |- C, deduce A |- B & C)]
Or: B v C v … v D |- A if and only if B |- A and C |- A and … and D |- A. [If you like, you can reduce this to the 0-ary case (False |- A always) and the 2-ary case (from B |- A and C |- A, deduce B v C |- A)]
Not 1: A |- C v ~B if and only if A & B |- C [If you like, you can reduce this to the case where C is the 0-ary disjunction]
Not 2: C & ~B |- A if and only if C |- A v B [If you like, you can reduce this to the case where C is the 0-ary conjunction]
Implication: “A -> B” is shorthand for “~(A & ~B)” [Of course, this convention has no effect if you have no desire to discuss the -> connective]
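(Incidentally, each “|-” claim above can be spot-checked semantically by brute force. Here is a minimal Python sketch; the tuple representation of formulas is my own invention, nothing canonical:

    from itertools import product

    # Formulas: an atom is a string; compounds are tuples such as
    # ('&', X, Y, ...), ('v', X, ...), ('~', X).  The & and v cases are
    # n-ary, including 0-ary: ('&',) is True and ('v',) is False.
    def ev(f, val):
        if isinstance(f, str):
            return val[f]
        op, *args = f
        if op == '~':
            return not ev(args[0], val)
        if op == '&':
            return all(ev(a, val) for a in args)
        if op == 'v':
            return any(ev(a, val) for a in args)
        raise ValueError(op)

    # 'a |- b' holds iff every assignment making a true makes b true.
    def entails(a, b, atoms=('A', 'B', 'C')):
        vals = [dict(zip(atoms, bits))
                for bits in product([False, True], repeat=len(atoms))]
        return all(ev(b, v) for v in vals if ev(a, v))

    # Spot-check an instance of Not 1: A |- C v ~B iff A & B |- C.
    assert entails('A', ('v', 'C', ('~', 'B'))) == entails(('&', 'A', 'B'), 'C')
    # And the 0-ary cases come out as stated:
    assert entails('A', ('&',))   # A |- True, always
    assert entails(('v',), 'A')   # False |- A, always

)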

I don’t know if you would call any of those rules CP. If not, then in this setup, you can get by without a primitive CP rule (though, of course, a CP rule is in some sense composable from the others).

Well, I’ve found references to these definitions for the terms mentioned in the OP:

Conditional proof: By taking A as hypothesis, you are able to deduce B. Therefore A -> B

Indirect proof: By taking A as hypothesis, you are able to deduce ~A. Therefore A

I still feel that the question is unanswerable without enumerating the other deductive rules, tricks, and shorthands you are able to use along with Indirect Proof. (You can’t prove anything with IP alone, and there are certainly shorthands which would implicitly contain CP)

(Should the “Or” rule you listed have "or"s on the right hand side instead of "and"s? If not I must not be reading what you’ve said correctly.)

By “conditional proof” I mean a rule of inference which says “If some premises allow you to derive B from A, then the premises allow you to derive A->B.”

So it looks like you’re saying you don’t need either CP or IP. I didn’t know that–for some reason I had the idea you needed at least one or the other.

It looks like you get CP (as defined above) from the “Sequence” rule plus a proof that if A entails B, then {null} entails ~(A & ~B).
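One way such a proof could go, writing True for the empty antecedent (this is just a sketch; there may be a slicker route): suppose A |- B. Then:

1. False v B |- False v B (Sequence, 0-ary case)
2. B |- False v B (forward direction of the Or rule, from 1)
3. A |- False v B (Sequence, from A |- B and 2)
4. A & ~B |- False (Not 2, read right to left, from 3)
5. True & (A & ~B) |- A & ~B (forward direction of the And rule, applied to the Sequence-given True & (A & ~B) |- True & (A & ~B))
6. True & (A & ~B) |- False (Sequence, from 5 and 4)
7. True |- ~(A & ~B), i.e. True |- A -> B (Not 1 with C the 0-ary disjunction, from 6)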

What I’m doing, for absolutely no good reason other than my own entertainment, is trying to construct an extension of term logic which proves everything proven by propositional logic while still recognizably looking like term logic (albeit with the addition of singular terms–but that’s something almost every basic logic textbook has a method for, so it would still be recognizably “term logic”). The basic trick is to have a special set of terms as follows:

cases
cases[A]
cases[B] <----- Oddly, this is capital when I type it, but lower case when it is displayed in the post!
cases[C]

and so on, where the variables range over wffs in the extended term logic. Embedding further case terms inside instances of these variables allows for the expression of any propositional statement given certain basic proposition-to-extended-term-logic translation rules. Meanwhile, I’m trying to validate a set of propositional logic rules of inference using the rules of inference for term logic plus some minimal set of additional rules (for example, Unpacking: from ‘All cases are cases[X]’ you can derive ‘X’).

Anyway, I want to make sure I’ve validated a set of propositional rules sufficient to recover all of propositional logic. Hence, my question about CP and IP.

The set of propositional rules I’m taking to be complete (but am wondering if some can be derived from the validity of others) are:

Derive A&B from A and B
Derive A from A&B
Derive AvB from A
Derive C from AvB, A->C, and B->C
Derive B from A->B and A
Derive A if you can derive a contradiction from -A
Derive A->B if you can derive B from A

(These might not suffice to make & and v commutative, but that is taken care of by Conversion plus the basic rules for translating propositional statements into extended term logic statements. The above rules may also not suffice to make double negation work the right way, but even if they don’t, this is taken care of by facts about how contradiction works in term logic, together with the rules for how propositional expressions are translated into extended term logic expressions.)

That’s right. That problem is corrected in my previous post.

As stated, this makes no sense, so I’ve almost certainly got it wrong. The end is probably ‘Therefore ~A’

Crazily enough, I just spent a few minutes listing the Symlog propositional logic rules I remember from university, and most of these match. You’ve got:
& introduction
& elimination
v introduction (you should also specify that you can derive A v B from B, to show that it’s symmetrical)
v elimination
-> elimination

- elimination
-> introduction

There are rules in the list for = introduction or elimination, but you probably don’t need to use that operator as it can be expressed in terms of -> and & (or in other ways.)

To get the double negation, one thing you’ll need is the - introduction:
Derive -A if you can derive a contradiction from A
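(With that rule, double negation goes through in both directions, if I’ve got it right: for A |- --A, hypothesize -A, note that the premise A and the hypothesis -A are already a contradiction, and conclude -(-A) by - introduction; for --A |- A, hypothesize -A, get the contradiction -A and --A, and conclude A by your contradiction rule.)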

Other than that, it looks reasonably tight. If I understand right, your list includes conditional proof but not indirect proof, yes?

Without the spaces, [ B ] is the bold tag. If you happened to have a [ /B ] later, you’d have a bold section of text. Similarly, [ I ] is the italic tag, so:
cases[I] <----- Capital I as typed.

I see–I can use either capital or lowercase as a bold tag, which suggests the interpreter takes both capital and lowercase B inside brackets to be the same thing, and so when it displays them back within the post, it fails to distinguish between them.

What are the structural rules you are putting on top of these? (You mention Conversion, but again, others may be working in other background systems. It would be best for this conversation to leave nothing implicit.)

If I may make the background hypotheses clear, these look like the rules:
Gamma |- A & B from Gamma |- A and Gamma |- B
Gamma |- A from Gamma |- A & B
Gamma |- A v B from Gamma |- A
Gamma |- C from Gamma |- A v B, Gamma |- A -> C, and Gamma |- B -> C
Gamma |- B from Gamma |- A -> B and Gamma |- A
Gamma |- A from Gamma, ~A |- B & ~B
Gamma |- A -> B from Gamma, A |- B

You probably also have the structural rules:
Gamma |- A if A is in Gamma
Gamma |- A from Delta |- A and Gamma |- each thing in Delta

If you add the mirror image rules “Gamma |- B from Gamma |- A & B” and “Gamma |- A v B from Gamma |- B”, then the above rules are indeed complete for Boolean tautologies built from binary &, binary v, ~, and -> [though I note, to my displeasure, that you’ve left out the nullary cases of & and v…]. And you could also get by, in that case, if you replaced “Gamma |- A -> B from Gamma, A |- B” with “Gamma |- A -> B from Gamma, A, ~B |- C & ~C”, which I assume is what you mean by indirect proof. (In that case, you could recover CP as follows: Suppose Gamma, A |- B. Then Gamma, A, ~B |- B as well (by monotonicity of all the rules with respect to the hypotheses), and also Gamma, A, ~B |- ~B by the first structural rule, and so Gamma, A, ~B |- B & ~B by the first & rule, and thus, by IP, Gamma |- A -> B)
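Spelled out as a little derivation, that recovery of CP runs:

1. Gamma, A |- B (supposed)
2. Gamma, A, ~B |- B (from 1, by monotonicity in the hypotheses)
3. Gamma, A, ~B |- ~B (first structural rule)
4. Gamma, A, ~B |- B & ~B (first & rule, from 2 and 3)
5. Gamma |- A -> B (IP, from 4)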

What’s that? Are you working with versions of ‘and’ and ‘or’ that aren’t binary? :confused:

I mentioned the Symlog system a little earlier, and that doesn’t include any Gamma or the rules you mentioned. To describe a little more, a Symlog deduction is a numbered series of propositional statements, with a justification for each statement, and possibly nested indents or similar markup to indicate unproven hypotheses.

PREMISE: Most deductions will start with several statements with the justification ‘premise’; if your deduction is a class assignment then you can’t add to these. :smiley: I think this might be the equivalent of your ‘Gamma’
RESTATEMENT: You can copy over any earlier statement to a lower line in the proof, as long as it wasn’t in a different, now-closed hypothesis. This is useful when you have to deduce something inside a hypothesis that you’ve already deduced outside it.

The following rules work without hypotheses. Where I say ‘Given’, the components mentioned must be earlier statements in the deduction, not in a now-closed hypothesis.
& Introduction:
given
A
B
deduce
A & B

& Elimination:
given
A & B
deduce
A

given
A & B
deduce
B

v Introduction:
given
A
deduce
A v B (B can be any possible statement)

given
B
deduce
A v B (A can be any possible statement)

-> Elimination:
given
A
A -> B
deduce
B

= Elimination:
given
A
A = B
deduce
B

given
B
A = B
deduce
A

The other rules require you to form hypotheses and deduce within them:
-> Introduction:
Start a new hypothesis and write A at the start with ‘hypothesis’ as the justification. Within this hypothesis, you can use all of the same deductive rules, with A now available in addition to everything outside the hypothesis. Deduce B within the hypothesis. This justifies deducing A -> B outside the hypothesis.

- Introduction:
Start a new hypothesis for A. Within the hypothesis, deduce some statement C, and also -C. This justifies deducing -A outside the hypothesis.

- Elimination:
Like ‘- Introduction’, except you take -A as your hypothesis to deduce A. You still need to deduce C and -C within the hypothesis.

= Introduction:
Like ‘-> Introduction’, except you need two hypotheses: One hypothesizing A to deduce B, and the other hypothesizing B to deduce A. This justifies deducing A = B outside both hypotheses.

v Elimination:
The other double-hypothesis rule. Given A v B, start two hypotheses, one hypothesizing A, the other hypothesizing B. Deduce C in both hypotheses, in order to deduce it outside.
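For illustration (my own toy example, not one from the course), a deduction of B -> (A v B) would look like this, with ‘|’ marking lines inside the hypothesis:

1. | B (hypothesis)
2. | A v B (v Introduction, from 1)
3. B -> (A v B) (-> Introduction, from the hypothesis 1-2)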

This is also logically complete, I believe, though some intuitively obvious proofs were awkward to construct, as I recall. In the same course, we also used an expanded system with substitution rules built in (substituting -A & -B for -(A v B), for example) and simple predicate logic involving rules for the ‘All x’ and ‘Exists x’ operators.

The Gamma is the equivalent of indenting with new hypotheses. It keeps track of all the hypotheses which have been introduced in the current context. (And all the premises, yes. There’s really no difference between the two.)

And of course I’m working with versions of & and v that aren’t exclusively binary. We all are, except when we’ve chosen to ignore it. & and v are n-ary in general, for any n. You can build the results for all finite positive n from just the 2-ary versions, but there’s still the 0-ary cases: the 0-ary case of & is the constant True, and the 0-ary case of v is the constant False. (I should say, having the constant False around also makes the idea of “contradiction” cleaner to express than “C & ~C, for some arbitrary, irrelevant C”).
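The same convention is baked into the n-ary conjunction and disjunction operations of many programming languages; Python’s all and any, for instance:

    assert all([]) == True    # 0-ary &: the empty conjunction is True
    assert any([]) == False   # 0-ary v: the empty disjunction is False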

See below, but at this point in the post, I’ll just say that the mirror image rules you mentioned do turn out to be valid in the system I’m working on (I’ll explain why below). Regarding the two structural rules you described:

Are they necessary? Regarding the first one (which looks like what I was taught as “repetition”), it seems like if A is in Gamma, then by supposing -A, I can derive a contradiction, thereby proving A. Regarding the second one (which doesn’t match anything I was taught but which is of course recognizably valid) it seems like, again, if I suppose -A, I’ll be able to derive a contradiction by deriving A from each thing in Gamma, and I will thereby have proven A.

I will be happy to learn what this means. :wink:

What I meant by indirect proof wouldn’t always yield a conditional as a conclusion as the above-quoted rule does. But maybe the above rule together with modus ponens ends up letting you extract B out as a conclusion in all the appropriate cases? I unfortunately don’t have the time to think it through right at this second.

Anyway, hereinafter follows the part I referred to above when I said “see below.” I’m going to set forth the modified term logic I was talking about before, so you can see how the mirror image rules are validated in it.

[QUOTE=Case Term Logic]

WFFS
The wffs are just the four categorical statement forms we all learn in our baby logic or critical thinking class, except I put parentheses around the strings usually taken to represent categories, for reasons having to do with disambiguation. Here are four schemata for wffs:

all (x) are (y)
no (x) are (y)
some (x) are (y)
some (x) are not (y)

I refer to these, in order, as A, E, I and O statements just as is usually done. The variables x and y here range over strings of characters. (Typically letters, numbers, brackets, and parentheses.) Almost any string is permissible, with one stricture: The string can contain parentheses only if the parentheses fall inside a string of the form ‘cases[wff]’ (where that ‘wff’ stands for any well formed formula).

The rules of inference are as follows, in schematic form:

The Four Syllogisms:
S1. From ‘all (A) are (B)’ and ‘all (B) are (C)’, you can derive ‘all (A) are (C)’
S2. From ‘some (A) are (B)’ and ‘all (B) are (C)’, you can derive ‘some (A) are (C)’
S3. From ‘all (A) are (B)’ and ‘no (B) are (C)’, you can derive ‘no (A) are (C)’
S4. From ‘some (A) are (B)’ and ‘no (B) are (C)’, you can derive ‘some (A) are not (C)’

The Simple Inferences
Conversion: From ‘[some/no] (A) are (B)’, you can derive ‘[some/no] (B) are (A)’
Contraposition: From ‘[All/Some] (A) [are/are not] (B)’, you can derive ‘[all/some] (non-A) [are/are not] (non-B)’
Obversion: From ‘[all/no/some/some] (A) [are/are/are/are not] (B)’, you can derive ‘[no/all/some/some] (A) [are/are/are not/are] (non-B)’.

(Above, ‘(non-X)’ means an open paren, followed by the string ‘non-’, followed by the string represented by X, followed by a close paren.)

The Extended Rules of Inference

Packing/Unpacking: From “All cases are cases[X]” you can derive “X” and vice versa.

Case Universalization/Particularization: From “Some cases are cases[X]” you can derive “All cases are cases[X]” and vice versa.

Case Existence: From “some cases[X] {are/are-not} cases[Y]” you can derive “some cases are cases[X]”.

Proof by Contradictories: If X and not-X can be derived from Y and the premises of an argument, then not-Y can be derived from the premises of the argument.

Proof by Case Supposition: If “All cases are cases[Y]” can be derived from “All cases are cases[X]” together with an argument’s premises, then “All cases[X] are cases[Y]” can be derived from the argument’s premises.

Rules of Replacement

Not to Non replacement: “cases[not-X]” can be replaced with “non-cases[X]” and vice versa

Not to Con Replacement: “not-X” can be replaced with the contradictory of X, and vice versa, according to the following:

The contradictory of ‘all (x) are (y)’ is ‘some (x) are not (y)’ and vice versa.
The contradictory of ‘some (x) are (y)’ is ‘no (x) are (y)’ and vice versa.
Interpretation of Propositional Strings as Strings of Case Term Logic

Any simple proposition can be imported directly into case term logic, using the methods for translating natural language into statements of categorical logic found in any basic logic or critical thinking text book. Singular terms should be translated using the technique that turns them into categories that consist of a single member. For example, Socrates is treated as being the sole member of the category “things identical to Socrates”.

As for compound propositions, their interpretation into case term logic is given below. I use ‘=>’ to mean the string to the left, in propositional logic, can be interpreted as the string on the right in case term logic.

‘X & Y’ => ‘Some cases[X] are cases[Y]’
‘X -> Y’ => ‘All cases[X] are cases[Y]’
‘X v Y’ => ‘All cases[not-X] are cases[Y]’
‘-X’ => ‘not-X’

These interpretational schemes should be applied to the compound proposition, then applied to any compound propositions that remain inside strings of the form ‘cases[Z]’, then applied again to any remaining such strings, and so on until there remain no more compound propositions. For example:

(P & Q) & Y
=> some cases[P & Q] are cases[Y]
=> some cases[some cases[P] are cases[Q]] are cases[Y]

[/quote]
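To make the interpretation scheme mechanical, here is a quick Python sketch of it (the tuple representation of propositional formulas and the function name are mine, and I’ve glossed ‘-X’ as just prefixing ‘not-’ to the interpreted statement):

    # Atoms are strings; compounds are tuples like ('&', X, Y).
    def interpret(f):
        if isinstance(f, str):      # simple proposition: import directly
            return f
        op = f[0]
        if op == '-':               # '-X' => 'not-X'
            return 'not-' + interpret(f[1])
        x, y = interpret(f[1]), interpret(f[2])
        if op == '&':               # 'X & Y' => 'some cases[X] are cases[Y]'
            return 'some cases[' + x + '] are cases[' + y + ']'
        if op == '->':              # 'X -> Y' => 'all cases[X] are cases[Y]'
            return 'all cases[' + x + '] are cases[' + y + ']'
        if op == 'v':               # 'X v Y' => 'all cases[not-X] are cases[Y]'
            return 'all cases[not-' + x + '] are cases[' + y + ']'
        raise ValueError(op)

    # The worked example from above:
    assert interpret(('&', ('&', 'P', 'Q'), 'Y')) == \
        'some cases[some cases[P] are cases[Q]] are cases[Y]'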

Now, as for how the mirror image rules turn out to be validated, first here is the demonstration that, as interpreted, A&B |- A is valid:

  1. Some cases[X] are cases[Y] (premise)
  2. Some cases are cases[X] (Case Existence, from 1)
  3. All cases are cases[X] (Case Universalization, from 2)
  4. X (Unpacking from 3)

To get the mirror (A&B |- B), you can use a tweaked version of the above:

  1. Some cases[X] are cases[Y] (premise)
    1.5 Some cases[Y] are cases[X] (Conversion, from 1)
  2. Some cases are cases[Y] (Case Existence, from 1.5)
  3. All cases are cases[Y] (Case Universalization, from 2)
  4. Y (Unpacking from 3)

This is what I meant when I said you get the mirror image rules by conversion together with the rules for interpreting propositional logic statements into this new system.

So anyway, the above is pretty messy but there it is. I don’t know for sure that all of the extended ruleset is necessary to validate the PL rules I listed in a previous post, but they are definitely sufficient. I also haven’t shown to my own satisfaction that the extended ruleset is consistent, but intuitively it certainly looks that way. (To me…)

The resulting system proves everything in propositional logic and more, but of course not as much as predicate logic.

To be clear, in case there’s any misunderstanding: I don’t take any of the above to be objectively interesting or important. It’s just a little game I’m playing with myself. It started with me wondering if there was some way to tweak categorical logic to make it more powerful but still tractable to beginning students who are not going into philosophy, so that it would be easier to use logic on “real arguments.” Things got out of hand really fast.

Oh. So, just to make sure I understand correctly: your underlying question isn’t really about CP and IP, or even about the rules in post #6? If I understand correctly, your question is actually whether the rules in post #14 are complete for Boolean propositional logic?

In the particular system of rules I gave, yes, the first one is necessary… How do you propose to derive a contradiction after supposing ~A on top of a context that has already supposed A? I.e., how do you propose to derive A, ~A |- a contradiction? Presumably the contradiction you mean to derive is A & ~A; you presumably mean to derive A, ~A |- A & ~A from A, ~A |- A and A, ~A |- ~A. But these latter two, you will only have because of the rule that Gamma |- anything that’s in Gamma.

I’m sure the second rule was implicit in the system you were taught, if not explicit. (Probably in the form “Gamma |- A from Gamma |- B and Gamma, B |- A” instead, but if you’re not explicitly writing out the context of hypotheses, all these rules would go unnoticed). It says that anything you derive from some hypothetical context, you can use en route to further derivations from the same context. In the system I outlined, it would be necessary for showing that, say, (A & B) & C |- A; first you would get (A & B) & C |- A & B and A & B |- A, then you would combine those. Again, you can’t just use contradiction: How do you get a contradiction from (A & B) & C, ~A? The contradiction you want is A & ~A, and the way you want to get A & ~A is to get A and to get ~A, and your path to getting A is through the chain (A & B) & C, ~A |- (A & B) & C |- A & B |- A, and combining that chain together to get (A & B) & C |- A is precisely what will call for this rule.
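Spelled out in the same notation, that (A & B) & C |- A derivation is:

1. (A & B) & C |- (A & B) & C (first structural rule)
2. (A & B) & C |- A & B (the & elimination rule, from 1)
3. A & B |- A & B (first structural rule)
4. A & B |- A (the & elimination rule, from 3)
5. (A & B) & C |- A (second structural rule, from 2 and 4)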

Actually, I’m not sure anymore, since the rules of post #14 are apparently supposed to include the structural rules which were implicitly to be part of the system you were outlining in post #6… Is your question about… combining the rules of post #6 and post #14? I’m confused again.

(For what it’s worth, none of the logic books or classes I was weaned on in my formative years bothered with Aristotelian term logic… I don’t think it’s really a ubiquitous part of modern logic or critical thinking education (though I understand that you’re just interested in this game of seeing what can be done with some augmentation of it).)

No, my question really was about CP and IP. The information in the later post was to give the background as to why I was asking the original question, and to satisfy a request that nothing be left implicit. Basically, I was trying to figure out whether I needed to make sure that CP is validated as interpreted in the system described in the above post. (At the time I asked this question, I was not sure whether the new system needed to have the “proof by supposition” rule it currently has.)

My original question is, to put it more clearly than I originally did:

Concerning the list of rules I posted earlier (the seven ending with “Derive A->B if you can derive B from A”), does the final rule allow me to prove anything that I couldn’t prove if that rule weren’t in the list?

And just to be super-duper clear, by “derive a contradiction”, you mean “derive a statement of the form P & ~P”?

Then yes. For example, the final rule is necessary in order to derive A -> A from nothing.

Proof: Consider the interpretation of &, v, and ~ as usual, but of “A -> B” as Falsehood, regardless of A or B. Then all the rules except the last one take you from tautologies to tautologies, but A -> A will not be a tautology. (It would also work to interpret “A -> B” as meaning “B”, regardless of A)
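To spot-check that with a machine (the formula representation is mine, as before): give -> the constantly-false interpretation and confirm both that modus ponens stays (vacuously) truth-preserving and that A -> A fails to be a tautology:

    from itertools import product

    # Deviant semantics: &, v, ~ as usual, but 'A -> B' is always False.
    def ev(f, val):
        if isinstance(f, str):
            return val[f]
        op = f[0]
        if op == '~':
            return not ev(f[1], val)
        if op == '&':
            return ev(f[1], val) and ev(f[2], val)
        if op == 'v':
            return ev(f[1], val) or ev(f[2], val)
        if op == '->':
            return False               # the deviant clause
        raise ValueError(op)

    vals = [dict(zip(('A', 'B'), bits))
            for bits in product([False, True], repeat=2)]
    # Modus ponens is still truth-preserving (vacuously: A -> B is never true)...
    assert all(ev('B', v) for v in vals
               if ev(('->', 'A', 'B'), v) and ev('A', v))
    # ...but A -> A is not a tautology under this interpretation:
    assert not all(ev(('->', 'A', 'A'), v) for v in vals)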

I get it now. I was thinking erroneously something like this:

  1. A (premise)
  2. -A (premise)
  3. A & -A (from 1 and 2, by a rule X, Y |- X&Y)

and I had confused the rule named in line 3 with your rule Gamma |- A & B from Gamma |- A and Gamma |- B.