Yes: Once you know that P AND (Q AND R) = (P AND Q) AND R, the parentheses are no longer necessary, since P AND Q AND R has the same value regardless of how you interpret/evaluate it.
I hope that answers your question; I’m not sure whether I understood it correctly.
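If you want to see that concretely, here's a quick brute-force check, sketched in Python (the loop and names are mine, not from any textbook), running over all eight truth assignments:

[code]
from itertools import product

# Check that the two parenthesizations of P AND Q AND R agree
# on every one of the 2^3 = 8 truth assignments.
for P, Q, R in product([True, False], repeat=3):
    assert (P and (Q and R)) == ((P and Q) and R)
print("P AND (Q AND R) == (P AND Q) AND R for every assignment")
[/code]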
I would answer this as follows. The AND connective (let me write it as &) is defined as a binary connective. Therefore, at the beginning, P & Q & R is undefined. However, once you have that (P & Q) & R = P & (Q & R), it is customary to extend the binary operation to a ternary, quaternary, …, n-ary operation for all finite n. This cannot, of course, be done for non-associative operations. For example, if × stands for the vector cross product in three dimensions, u × v × w is meaningless. You must specify (u × v) × w or u × (v × w).
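Both halves of that are easy to demonstrate in code. Here's a sketch in Python (the helper name conj and the use of numpy for the cross product are my own choices, not anything standard):

[code]
from functools import reduce
import numpy as np

# Extending the binary AND to an n-ary operation: associativity
# guarantees that any grouping of the fold gives the same result.
def conj(*props):
    return reduce(lambda a, b: a and b, props, True)

print(conj(True, True, False))  # False, regardless of grouping

# The cross product, by contrast, is not associative, so
# "u x v x w" has no single value:
u, v, w = np.array([1, 0, 0]), np.array([0, 1, 0]), np.array([0, 1, 1])
print(np.cross(np.cross(u, v), w))  # (u x v) x w -> [-1  0  0]
print(np.cross(u, np.cross(v, w)))  # u x (v x w) -> [ 0  0  0], different!
[/code]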
The statement “P AND Q AND R” is operationally ambiguous. You don’t know how to compute the value.
The statements “P AND (Q AND R)” and separately “(P AND Q) AND R” are each operationally unambiguous. You *do* know how to compute their values.
Conveniently, the associative principle applies, and therefore we know
P AND (Q AND R) = (P AND Q) AND R
Which implies that when working with “P AND Q AND R” the *answer* is well-defined, but the *process to get to the answer* is not. That is a dangerously sloppy foundation to build math on.
The typical solution is we define a rule which says “inadequately parenthesized expressions are evaluated in this <blah blah … blah> order of operator precedences, and between operators of equal precedence in [left-to-right | right-to-left] order.”
Once you’ve decided what rule you or your book is using, then, and only then, can you compute the value of any expression. Before that you’re just guessing.
Sometimes, as with idealized realizations of simple associative operators, all the possible guesses turn out to give the same answer, which also happens to be the right answer. But that’s luck, not skill. IOW, don’t count on it.
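To see why you need such a rule at all, here's a trivial sketch in Python, using subtraction (a non-associative operator) as the stand-in:

[code]
# Subtraction is not associative, so "1 - 2 - 3" only has a value
# once you pick an evaluation rule. Python's rule is left-to-right:
print((1 - 2) - 3)  # -4, what Python computes for 1 - 2 - 3
print(1 - (2 - 3))  #  2, what a right-to-left rule would give
print(1 - 2 - 3)    # -4
[/code]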
We just did a thread I can’t now quickly find on some stupid arithmetic “puzzle” making the rounds of Facebook which mixes + - * / with no parentheses. All the controversy about the correct answer comes from folks ignoring, forgetting, or never having learned the ambiguity resolution rule I mentioned above.
The answer is undefined when there’s no agreement on what the rule is. Anybody with any arithmetic skills knows what rule *ought to be* applied. All the problems arise when *ought to be* just isn’t.
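For a concrete example (I can’t vouch that this is the one from that thread; it’s just the classic of the genre): 6 ÷ 2(1+2). Under the usual left-to-right rule for the equal-precedence ÷ and ×, it’s 6 ÷ 2 × 3 = 3 × 3 = 9. Under the also-common convention that implied multiplication binds tighter than ÷, it’s 6 ÷ (2 × 3) = 1. Neither camp is computing wrong; they’re computing under different rules.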
With respect and apologies, I think all the answers given so far are either wrong or unhelpful.
OP, for your purposes the answer is “no, because in the version of logic you’re learning, ‘P AND Q AND R’ is not a valid expression.” It can’t be equal to the other two expressions, because it can’t be equal to anything: in the language of propositional logic (that you’re learning in your class) it doesn’t mean anything.
(I am making assumptions about what the propositional logic you’re learning is like, but based on my experience with lots of textbooks etc., I think the assumption is really safe.)
There are definitely ways, later on, to formalize the idea of, as was mentioned above, a ternary (and 4-ary, and so on) conjunction. But in your course, you’re learning the binary one. And with the binary one, you have to have those parentheses.
On Edit: I missed LSLGuy’s response above. He’s got the right idea IMO.
Well, some editions of that book do say so explicitly (from page 21):
Strictly speaking, P ∧ (Q ∧ R) usually won’t be a well-formed formula, because it is a conjunction that lacks outer parentheses. The convention is, I guess, “drop parentheses when doing so is unambiguous, or if you are in a class, don’t drop them, unless they are the outer parentheses or we tell you that you can”. I don’t think Velleman actually defines “well-formed formula”, in any case.
Probably because Velleman isn’t presenting an in-depth study of formal logic for its own sake, being more concerned for its use in mathematical reasoning and proofs (I assume, not having read that particular book.)
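For what it’s worth, the inductive definition most logic texts use (I’m paraphrasing a typical one, not quoting Velleman) looks something like:

[code]
wff ::= P | Q | R | ...                  (sentence letters)
      | ¬wff                             (negation)
      | (wff ∧ wff) | (wff ∨ wff)        (conjunction, disjunction)
      | (wff → wff) | (wff ↔ wff)        (conditional, biconditional)
[/code]

On that definition P ∧ Q ∧ R really isn’t a wff, and even (P ∧ Q) ∧ R technically needs its outer parentheses; “drop the outermost pair” is a reading convention layered on top.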
To be sure, everything said in all the above posts about the use of parentheses and order of operations in propositional logic happens to apply exactly the same way to ordinary arithmetic, where the operations of addition and multiplication are involved. Thus, for example, + and * (using * for multiply) are binary operators, and a+b+c or abc are well-defined only by the conventions we have about when parentheses can be omitted and implied. Even the above remark about the custom of omitting the outermost parentheses applies similarly.
I like to call this the “crosswalk rule”. California law (and maybe that of other states) explicitly provides that a crosswalk exists, whether marked or not, wherever two streets meet at right angles. Thus, the explicit crosswalk markings may be omitted at such intersections, but pedestrians and drivers are expected to know that there is a crosswalk there.
Similarly, in an expression like 3a[sup]2[/sup] + a + 7, the student should understand that every term has a coefficient even if there is no coefficient written. We have the understanding that a coefficient of 1 may be left implied, but there is still a coefficient there!
The same with parentheses. We should have an understanding that, in an expression like a+b+c, the parentheses ((a+b)+c) are there even if they aren’t written. Our convention allows us to leave them implied in certain cases, but the parentheses are there whether you see them or not.
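You can even watch a real parser insert those invisible parentheses. A sketch in Python using the standard ast module (the output comment is abbreviated):

[code]
import ast

# Python parses a + b + c left-associatively: the tree it builds
# is BinOp(BinOp(a, +, b), +, c), i.e. ((a+b)+c), exactly as described.
tree = ast.parse("a + b + c", mode="eval").body
print(ast.dump(tree))
# -> BinOp(left=BinOp(left=Name(id='a', ...), op=Add(),
#          right=Name(id='b', ...)), op=Add(), right=Name(id='c', ...))
[/code]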
@D18: Once you get through the associative laws for ∧ and ∨, do you then discuss the distributive laws? (Or do you jump straight into De Morgan’s various Rules?)
I wondered once how far you could go with the analogies between conventional arithmetic and propositional logic. In arithmetic, we have commutative and associative laws for both addition and multiplication. In propositional logic, we have commutative and associative laws for both ∧ and ∨.
In arithmetic, we then have a distributive law of multiplication over addition which tells us:
a * ( b + c ) = ( a * b ) + ( a * c )
Is there a similar distributive law in propositional logic? I don’t remember ever seeing such a thing mentioned in any class or textbook.
There is some tendency to see disjunction ( ∨ ) as analogous to addition, and conjunction ( ∧ ) as analogous to multiplication. So is conjunction distributive over disjunction? I.e., is
a ∧ ( b ∨ c ) = ( a ∧ b ) ∨ ( a ∧ c ) ?
So I played with it a bit, figured it out, and discovered the surprise answer. See if you can work it out for yourself!
[spoiler]To my surprise, I found that conjunction and disjunction are each distributive over the other (suitable emoticon not found)! Thus,
a ∧ ( b ∨ c ) = ( a ∧ b ) ∨ ( a ∧ c )
and also
a ∨ ( b ∧ c ) = ( a ∨ b ) ∧ ( a ∨ c )[/spoiler]
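For anyone who has peeked at the spoiler and wants a machine check instead of grinding out truth tables by hand, here’s a brute-force sketch in Python (names mine):

[code]
from itertools import product

for a, b, c in product([True, False], repeat=3):
    # conjunction distributes over disjunction
    assert (a and (b or c)) == ((a and b) or (a and c))
    # and disjunction distributes over conjunction
    assert (a or (b and c)) == ((a or b) and (a or c))
print("Both distributive laws hold on all 8 assignments")
[/code]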
Interesting discussion, guys! Thank you. Sorry I didn’t get back to the thread till now.
TATG :smack: As Thudlow said, good catch! Apologies for asking a question I could have answered with some more careful reading, but hopefully y’all enjoyed weighing in on the topic regardless.
The specific context that led to the question is that I got a line like:
(x ∈ A ∧ x ∈ B) ∧ (x ∈ A ∧ x ∈ C)
So I wasn’t clear on whether I could reduce that to:
(x ∈ A ∧ x ∈ B ∧ x ∈ C) using the associative law and the idempotent law. So the answer is yes, I think!
Senegoid, yes, Velleman does invoke the distributive law for logical connectives and presents it the way you did; one of the exercises is to demonstrate its validity using truth tables. I’ll have to try to figure out how it is derived, as you suggested.
I’m not sure there is any such thing as “how to derive” such a fundamental law, other than simply using truth tables to enumerate all the cases. In arithmetic, the various commutative, associative, and distributive laws are fundamental axioms, meaning that they are not derived from more fundamental rules. I think the same must be true for the commutative, associative, and distributive laws in propositional logic too.
ETA: What about De Morgan’s various Rules? Are they fundamental (that is, derivable only by using truth tables to enumerate all possible cases), or are they algebraically derivable from the more fundamental rules?
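They certainly pass the truth-table test; here’s the same sort of brute-force check as above, sketched in Python (a verification, not a derivation):

[code]
from itertools import product

for a, b in product([True, False], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))
    assert (not (a or b)) == ((not a) and (not b))
print("Both De Morgan laws hold on all 4 assignments")
[/code]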
I don’t think that is enough (unless you play fast and loose with the laws). You can do it if you also use the commutative law.
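To spell that out (writing P, Q, R for x ∈ A, x ∈ B, x ∈ C; this reconstruction of the steps is mine, not Velleman’s):

[code]
(P ∧ Q) ∧ (P ∧ R)
  = P ∧ (Q ∧ (P ∧ R))    [associative]
  = P ∧ ((Q ∧ P) ∧ R)    [associative]
  = P ∧ ((P ∧ Q) ∧ R)    [commutative]
  = P ∧ (P ∧ (Q ∧ R))    [associative]
  = (P ∧ P) ∧ (Q ∧ R)    [associative]
  = P ∧ (Q ∧ R)          [idempotent]
[/code]

which is P ∧ Q ∧ R once the convention lets you drop the parentheses. Note the commutative step in the middle; without it you’re stuck.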
One can derive them from other fundamental laws. (Well, we haven’t said what a fundamental law is. But if you want all the laws of classical propositional logic, you only need a finite set of laws. And there will be many such sets (and sets that are non-redundant, in the sense that they have no unnecessary laws).)
One can derive associativity (of either multiplication or addition) from a few axioms of PA (none of which are associativity, of course).
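For the curious, the standard derivation uses the PA axioms x + 0 = x and x + S(y) = S(x + y), plus induction on c (my sketch, not a formalized proof):

[code]
Base:  (a + b) + 0 = a + b = a + (b + 0)
Step:  assume (a + b) + n = a + (b + n); then
       (a + b) + S(n) = S((a + b) + n)     [x + S(y) = S(x + y)]
                      = S(a + (b + n))     [induction hypothesis]
                      = a + S(b + n)       [x + S(y) = S(x + y)]
                      = a + (b + S(n))     [x + S(y) = S(x + y)]
[/code]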
This is a different sense of fundamental. On this definition I’d guess either every tautology is fundamental, or none is.