Something’s being a wave does not imply its not being a particle. Quantum Physics doesn’t give us any reason to think LNC is false.
It’s true that there are robust formal systems of logic in which LNC isn’t true. I’m not sure why you think the Liar shows that logic isn’t bivalent. On your account, isn’t it simply up to us whether our logic is bivalent or not?
Which doesn’t mean, all by itself, that there are truth values other than True and False. It might be that “This sentence is a lie” is simply not truth-evaluable. (I’m not telling you what I think here, I’m just explaining why it’s not a slam-dunk that “logic isn’t bivalent” just because there exist sentences like the Liar.)
I’m not supposing that.
I’m asking whether the universe conforms to LNC. I’m not asking about any reasons why it might do so.
See previous comment. I wasn’t asking about why the universe conforms to LNC, if it does. Rather, I simply asked whether it does.
Every time I ask myself whether my observations contradict each other, the answer is no. This isn’t too surprising about observations involving medium-sized objects on Earth. It’s more surprising about observations involving objects on other scales in distant locations.
It’s as though every extra-terrestrial sentient species also played Chess. Or as though quantum particles were found to follow rules formally equivalent to those of Chess.
If LNC is just a feature of my own psychology, it’s surprising to me that it applies universally–if it applies universally–unless it’s a feature of my psychology because it applies universally (and hence, locally as well).
I was wondering about something like the following.
It is a feature of a measurement instrument that at any given time, with respect to a single measurement capacity, the instrument is yielding a single measured value. (Otherwise it’s failed to measure anything.) I am an organism, with several measurement instruments onboard. Some of them are quite complex, involving the formation and use of patterns of activity we might call “concepts” and “conceptual judgments.” Because the use of these patterns is a way of yielding measurements about the world, at any given time, any given one of these “instruments” can yield only a single value. But this precludes the possibility of conflicting measurements. That includes a preclusion of the possibility of conflicting judgments (since judgments are a kind of measurement). For if I make conflicting judgments, I’ve failed thereby to measure anything, and whatever is going on in my head is therefore useless for the development of any kind of action in the world.
In point of fact, we can make conflicting judgments. But when we do so, we become bad measurers, and less effective actors. So it is in our interest (even our biological interest, hence natural selection will have something to “say” here about how we actually end up behaving w.r.t. conflicting measurements) to avoid conflicting judgments.
It’s in our interest, not because the world obeys LNC, but rather because if our measurements conflict, we cannot use them as measurements; if our measurements conflict, we have no measurements at all.
That’s too fast, but maybe something like this could be developed.
??
So what is light?
Yes it does - fuzzy logic isn’t bivalent, since it is probabilistic.
So LNC is not universal. If that’s the case, then in what sense could it govern the universe?
Also, the liar’s paradox shows that logic isn’t bivalent, because it’s neither true nor false. In bivalent logic, those are the only choices you have.
I don’t understand what you mean by ‘my account’.
I’m not saying it’s a slam dunk, but it does seem to be circling around the rim.
So if it’s not truth-evaluable, then what does that mean in relation to the idea that the universe is governed by the LNC?
Seems to me that’s the same question; I’m asking what you mean by ‘conforms’. If you mean that we observe that things are what they are, then I agree with you.
The idea that things are what they are is the law of identity, which is a language law that helps us communicate. It is not a law of the universe.
Well, let me ask you this, if the universe didn’t exist, would the LNC still exist?
I contend it wouldn’t, since there would be no minds to house it in. What do you think?
Well, if they did contradict each other, then they wouldn’t make sense - I don’t see how you could ask yourself that, as in, how it would be coherent.
What you are actually asking (it seems to me) is why things are the way they are - why do we observe things acting according to their nature.
But it doesn’t apply universally - light is an example. I would contend that the liar’s paradox is another. Further, quantum physics is a third, which relies on fuzzy logic - probabilities, not definite positions and such, which the LNC would require.
I would agree with this, as in, it is in our interests to accept the foundational laws of logic, since doing so is pragmatically justified.
One of my math professors in graduate school was famous for having developed KK-theory, which as best anyone could tell was not useful for anything and could only be understood by five or ten people on the whole planet. I once asked a professor about the uses of cobordism theory and could only get a vague answer about it being useful for other branches of mathematics. Similarly, our department chair worked on group-theoretic approaches to the homology of manifold boundaries. I never quite figured out what that was supposed to be good for either.
Ultimately it’s possible that an expert would be able to build a tenuous link between each of these things and some branch of the sciences, but I’d still take that with a grain of salt. Just because abracadabra theory is useful for hocus pocus theory and hocus pocus theory is useful for banzai theory and banzai theory is useful for physics, it doesn’t make abracadabra theory actually useful for physics.
Light has both wavelike properties and particle-like properties.
How is the existence of something with both A-like properties and B-like properties supposed to show that LNC is not valid?
If something is a bird, it’s not a lizard. But Archaeopteryx has both birdlike properties and lizardlike properties. Does this show LNC is not valid?
Can you explain the difference to me between using fuzzy logic and doing probabilistic math? Doing probabilistic math doesn’t involve a rejection of LNC, but what you’re calling “fuzzy logic” in Quantum Theory I thought was nothing more than the use of probabilistic math. What is it you’re calling fuzzy logic?
Something that is in a superposition of states is in that superposition of states, and is not not (two nots) in that superposition of states. LNC non-disconfirmed.
You’ve been arguing for a view (an “account” as I put it) which says that LNC is a feature of our language or our psychological capacities, but is not valid over all domains in the physical universe. It seems to me that on such a view, the Liar’s paradox doesn’t “show” that logic isn’t bivalent, because on such a view, logic isn’t something that is or isn’t bivalent in itself–it’s whichever we make it by our use of it.
If it’s not truth-evaluable, then it doesn’t evaluate to either true or false (much as my dog does not evaluate to either true or false) and so cannot yield a counterexample to LNC–since a counterexample to LNC must involve things that are true and false.
The question seems ill-formed. I haven’t intended to imply by anything I’ve said that LNC “exists.”
Found this paper over at philpapers.org:
The Metaphysical Status of Logic
I haven’t read it yet so apologies if it’s worthless.
I think you are equivocating here - it seems to me (a layman) that light is both a wave and a particle. Not simply that it has both properties. I’m not sure what that is supposed to mean, actually.
It’s also related to quantum physics - is an electron here? Well, we don’t know; it has a probability of being there. A probability is not a true or false value, it is a third value.
That is why the LNC is not valid, because it attempts to reduce things to true or false, and not everything is true or false.
Shoot, if I asked you whether or not you considered yourself tall, you could answer one way or the other. I could then say that I thought you were the opposite. You would then have two values.
I would imagine that it does - at least it would indicate that the LNC is not the whole story.
Probabilistic math does reject LNC - as the probabilities are not definite; i.e., an electron most probably could be here. This is not the same as saying that the electron is here.
And what is that superposition state? It is a probability. It seems to me that you are equivocating here.
If logic is bivalent, then how do you account for trivalent logic? Logic isn’t one system, btw. I’m not clear as to how you can accept other logic systems out there if you think that the LNC is universal.
You are begging the question here - you are assuming that only bivalent logic can make truth evaluations of things - which is exactly what is under contention.
This isn’t the case, though - other logical schemes can exist: something can be true, false, unknown, or a probability. What is paraconsistent logic on your view? Also, your view cannot answer the liar’s paradox.
Fair enough.
Looks interesting, I’ll have to read it. By the way, I’m not saying that we should abandon the LNC. I think it is pragmatically justified (since it cannot be rationally justified without begging the question).
I have to go for a while, but one quick comment:
You’ve been saying LNC, but I think you mean LEM (the law of the excluded middle).
It’s the law of the excluded middle that says there can be no third truth value. LNC doesn’t say that. There could be a third truth value while LNC remained true.
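To make that point concrete, here’s a quick sketch (my own toy example, not anything from the thread) using strong Kleene three-valued logic: add a third value U, and no statement ever comes out both true and false, yet p v ~p is no longer guaranteed to be true.

```python
# Strong Kleene three-valued logic: values ordered F < U < T,
# NOT swaps T and F and fixes U, AND is min, OR is max.
F, U, T = 0, 1, 2  # False, Unknown, True

def NOT(p): return 2 - p
def AND(p, q): return min(p, q)
def OR(p, q): return max(p, q)

print(all(AND(p, NOT(p)) != T for p in (F, U, T)))  # True:  p & ~p is never true
print(all(OR(p, NOT(p)) == T for p in (F, U, T)))   # False: p v ~p fails when p = U
```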
Quantum Physics doesn’t invalidate LEM either, though. But I’ll have to come back later to expand on that.
Sheeeet. I am taking a ton of heat in another thread for asking the same thing about Major League Baseball rules. No way YOU are getting off easy.
Light is a quantum object, and behaves just like quantum objects ought to. There’s no necessity to expect light to stick to macroscopic particle-or-wave notions (though it is intuitively the obvious thing to do, and thus has been tried, but found unable to give a consistent picture). In fact, one could say that both wave- and particle-like behaviour are merely macroscopic approximations of the more fundamental quantum behaviour, which have merely the bonus of familiarity, but no other distinction to recommend themselves as fundamental to the world.
You can easily describe quantum phenomena using a bivalent logic, you’d only have to reject the principle of distributivity – what you arrive at then is surprisingly sensibly called quantum logic.
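To illustrate (my own toy sketch, not anything from the quantum-logic literature per se): take the lattice of subspaces of the plane, the simplest model in which “and” is intersection and “or” is span. Distributivity already fails there.

```python
import math

# Subspaces of R^2: the zero subspace, the whole plane, or a line through the
# origin identified by its angle in [0, pi). Meet = intersection, join = span.
ZERO, ALL = "zero", "all"

def line(angle):
    return ("line", angle % math.pi)

def meet(a, b):  # intersection of subspaces
    if a == ZERO or b == ZERO: return ZERO
    if a == ALL: return b
    if b == ALL: return a
    return a if math.isclose(a[1], b[1]) else ZERO

def join(a, b):  # span of subspaces
    if a == ALL or b == ALL: return ALL
    if a == ZERO: return b
    if b == ZERO: return a
    return a if math.isclose(a[1], b[1]) else ALL

p, q, r = line(0.0), line(0.5), line(1.0)  # three distinct lines through the origin
lhs = meet(p, join(q, r))                  # p & (q | r) = p & ALL = p
rhs = join(meet(p, q), meet(p, r))         # (p & q) | (p & r) = {0} | {0} = {0}
print(lhs, rhs)                            # the two sides differ: distributivity fails
```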
There are many other ways to resolve the liar’s paradox – one being that it only occurs when you use the same language as your object- and meta-language, i.e. allow one sentence to settle the truth value of another, or even itself. If you eliminate such self-reference by taking into account an appropriate ‘hierarchy’ of languages, the paradox does not occur; it is, in that view, merely an artefact of failing to make that distinction.
I recognize that the question wasn’t addressed to me, but your own answer touches on something that has always struck me as odd. If you were to remove all minds from this universe (provided, for the sake of argument, non-contradiction held), would that mean that the law of non-contradiction would cease to exist, in your view? What about other logical or mathematical propositions – would the sum of all angles in a triangle no longer be 180° (or some constant we have assigned that name to, at any rate), if the universe were mindless?
It’s a bit like asking if the universe is governed by the law of commutativity. The appropriate response is: Commutativity of what? On some interpretations of what you’re talking about, sure (five apples plus three more apples is the same as three apples plus five more apples), and on others, certainly not (typing “A” then “B” onto the keyboard has a different result than typing “B” then “A”). But then, this is mostly just that natural number addition is commutative whereas sequencing alphabetic characters is non-commutative, purely mathematical facts; the only slight connection to matters empirical is in the observation that the one mathematical construct can be used to model apple-collecting behavior and the other to model keyboard-typing behavior. Well, so it is with the law of non-contradiction too.
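The contrast is trivially easy to check (my own illustration, nothing deeper than the apples-and-keyboard examples above):

```python
# Natural-number addition commutes; sequencing characters does not.
assert 5 + 3 == 3 + 5            # five apples plus three = three apples plus five
assert "A" + "B" != "B" + "A"    # typing "A" then "B" differs from typing "B" then "A"
print("addition commutes; concatenation does not")
```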
Clearly, there are contexts/interpretations in which the universal equation p & ~p = False holds [e.g., in describing the value of a stable computer register, where False is a register of all 0s, and & and ~ are the familiar bitwise operations]. But just as much, there are contexts/interpretations in which the equation p & ~p = False does not hold [e.g., in describing topologically closed regions of space, where False is the empty region, & is intersection of regions, and ~ is closed-complement [i.e., ~p contains all of those points which can be reached as limits of paths outside p]; for example, looking just at the surface of the Earth, supposing p were the region of points north of or on the equator, then ~p would be those south of or on the equator, and then p & ~p would be the equator itself, rather than the empty region]. And just as before, the salient observations are actually mathematical rather than empirical; bitwise operations form a Boolean algebra, which must have the LNC property, while lattices of topologically closed regions form only co-Heyting algebras, and thus can lack the LNC property. To the extent that there are any empirical questions around, it’s just in verifying that these mathematical structures do in fact model whatever behavior of voltages or apples or closed regions of space or what have you one is interested in.
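Here’s a one-dimensional toy version of the closed-regions example (my own sketch, with closed intervals standing in for regions): “not p” is the closure of the complement, and p & ~p keeps the shared boundary point rather than coming out empty.

```python
# Regions are closed intervals inside an ambient closed interval; "not p" is the
# closure of the complement; "&" is intersection.

def closed_complement(p, ambient=(-10.0, 10.0)):
    """Closure of the complement of the closed interval p within the ambient interval."""
    (a, b), (lo, hi) = p, ambient
    pieces = []
    if lo < a: pieces.append((lo, a))   # closure keeps the endpoint a
    if b < hi: pieces.append((b, hi))   # ...and the endpoint b
    return pieces

def intersect(p, pieces):
    """Intersection of the closed interval p with a union of closed intervals."""
    a, b = p
    out = []
    for lo, hi in pieces:
        left, right = max(a, lo), min(b, hi)
        if left <= right:               # single boundary points are allowed
            out.append((left, right))
    return out

p = (0.0, 10.0)                         # the analogue of "north of or on the equator"
not_p = closed_complement(p)            # [(-10.0, 0.0)] -- "south of or on the equator"
print(intersect(p, not_p))              # [(0.0, 0.0)] -- the "equator", not the empty region
```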
“But”, you object, “my question is not ‘Is there some interpretation of terms under which p & ~p = 0 can be said to fail, even a physically natural or useful one?’ My question is about a very specific interpretation: not about voltages or closed regions, but about truth values. Does the LNC hold or not of truth values in our universe?”.
Well, I say, if you think merely saying the phrase “truth values” isolates a particular concrete structure external to us to be investigated empirically, you’re probably fooling yourself as to what truth-values are. There are various language games we play in which we assign labels to statements of some sort, these labels then being considered “truth values”. But the rules governing them aren’t imposed upon us from outside; they’re our rules, just like the rules of English grammar or financial transactions or what have you. In some contexts, we play a game where we think of every proposition as assigned one of two labels, “true” or “false”, following certain rules for how to assign labels compositionally to propositions built up from conjunction, negation, etc.; in other contexts, we perhaps assign labels like “Pretty much true” and “Eh, sort of, not really”; and, yes, in some contexts, we might well want to think of closed regions of space as labels/truth-values of a sort.
One answers questions like “What are truth values? What is their structure like?” not by laboratory experiment, but by saying “Well, what do I want the system of truth values to be? What am I interested in modelling?”.
It all depends on what you mean by things. On some interpretation of what you mean by “there is a third truth value” and “the law of the excluded middle”, there could be a third truth value while still satisfying the law of the excluded middle (e.g., any Boolean algebra gives a system of values satisfying LEM “internally”, and there is no difficulty in producing a Boolean algebra with more than 2 elements as seen “externally”; e.g., suppose there are four truth values, False, True, A, and B, with ~A = B, ~B = A, A & B = False, A v B = True, etc. We’d still have p v ~p = True no matter what p is, so we’d still have LEM in some sense, even though we’d also have a third and fourth truth value in some sense).
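Concretely (my own sketch of the four-valued example): take the Boolean algebra of subsets of {1, 2}. Every one of its four “truth values” satisfies p v ~p = True and p & ~p = False, even though there are more than two of them.

```python
# A four-element Boolean algebra: truth values are the subsets of {1, 2}, with
# False = {}, True = {1, 2}, A = {1}, B = {2}; NOT = complement, AND = intersection, OR = union.
UNIVERSE = frozenset({1, 2})
FALSE, A, B, TRUE = frozenset(), frozenset({1}), frozenset({2}), UNIVERSE

def NOT(p): return UNIVERSE - p
def AND(p, q): return p & q
def OR(p, q): return p | q

for p in (FALSE, A, B, TRUE):
    assert OR(p, NOT(p)) == TRUE      # LEM holds "internally": p v ~p = True
    assert AND(p, NOT(p)) == FALSE    # ...and so does LNC: p & ~p = False
print("four truth values, yet p v ~p = True for every p")
```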
Alternatively, on some other interpretation of what you mean by things, LNC can be taken as excluding a third truth value in just the same way as LEM could be taken to do so [after all, LNC is just the dual of LEM, is it not?]. p v ~p = True could be read as “Everything is either true or false, and therefore there is no third truth value”, sure. But just as well, we could read p & ~p = False as “Nothing is both non-false and non-true, and therefore there is no third truth value” [perhaps this is a more natural reading when one thinks more specifically of p as ~q for some arbitrary proposition q; i.e., when one looks at the specific consequence ~q & ~~q = False of LNC].
So I would be wary of saying things like “No, it’s not LNC that has to do with bivalence, just LEM”; it’s all caught up too much in interpretational details for any such position to be useful, at least at that level of terseness.
I guess I was relying on the rule-of-thumb glosses
LNC = “No statement is both true and false”
and
LEM = “Every statement is either true or false.”
-Kris
Not ignoring the rest of your post, but what if someone asked “Is the universe governed by the law of commutativity of natural numbers under addition?”
-FrL-
I suppose I’d say “It’s odd that you drag the universe into it, but, yes, natural number addition is commutative (as a consequence of its definition, rather than of any special facts about the makeup of the universe.)”
Definitions are tools, and tools usually only work under certain conditions–usually conditions resembling those under which the tools were forged.
The definition of addition seems to be a tool that works under a very wide variety of conditions, most of which differ wildly from those under which the tool (the definition) was forged.
That seems surprising to me. It’s even more surprising if it’s true that the tool works under all conditions, even more if it works under all possible conditions.
I can explain this if those characteristics of the local environment that make it useful are characteristics of the local environment simply because they’re characteristics of the wider environment encompassing the entire universe. But now, though I’m not surprised that a locally invented tool should be so useful outside its conditions of creation, I am surprised if the conditions necessary for its usefulness are universal. Why should things have been that way?
(I’m kinda liking my other explanation offered above, my attempt to make sense of the idea that the LNC is a feature of our psychology or language rather than of our environment by arguing that LNC must always appear valid to us because we are measurement devices, and measurements can return only one value per measurement. Some account like this could probably be drummed up for a lot of the other a priorish laws as well.)
Interesting. It was none other than Niels Bohr who said, “When it comes to atoms, language can be used only as in poetry. The poet, too, is not nearly so concerned with describing facts as with creating images.”
I think its only weakness is that all it does is quantify. Kind of like Pablo Picasso’s famous retort about computers, “Computers are useless. They give you only answers.” Of course, that’s not to belittle quantification. Quantification is important.
I think its strong point is the same as its weakness. It’s one reason physics is our most scientific science. It uses math (and logic – math’s mother) to deduce and actually to calculate scientific predictions. Popperian philosophy teaches that science may be differentiated from pseudo-science by the former’s ability to make risky predictions.
Einstein’s work, for example, was extremely math intensive. Therefore, it made extremely specific predictions about how the universe behaves. (Not how it works, necessarily. For all we know, fairies are carrying photons.) But Popper made the point that Einstein’s theories made predictions that could be tested, and therefore were scientific.
However, there are times when works of math (and logic — often, the difference is controversial) do not model the universe, but model something else: like systems and rules. Examples abound, like Gödel’s Undecidable Propositions. These aren’t about gravity or light speed, but they are nonetheless indispensable to good science. Science needs to know about the limitations of Peano arithmetic.
I think math (and logic) can describe anything, even itself.
If logic is to be considered separate from math (and it should be), then it’s an important tool. Inspiration is also an important tool. Einstein didn’t establish his body of work without first thinking, even doing “thought experiments”. Finally, I do think rhetorical skills are important. There are scientists like biologist Eugenie Scott, who writes in plain and understandable language without compromising the rigor of the underlying facts. And then there are scientists like David Stove, a curmudgeon with a giant chip on his shoulder who seemingly never met a contradiction he didn’t like.
Probably not. But once they are uncovered, math should be able to quantify them, logic should be able to model them, and rhetoric should be able to explain them. After all, if you discover something, but cannot communicate what you have done, then it is worth nothing to science as a whole.
Re: Frylock’s last post: I’m not sure what you mean. What does it mean to say that “the definition of addition seems to be a tool that works under a very wide variety of conditions”? Do you mean in terms of its ability to model a wide range of phenomena?
So far as that sort of thing goes, I don’t see great reason to be surprised that some abstract concepts are modelled by many phenomena, even if the concept was first discovered simply by looking at some one particular phenomenon. This is the nature of abstract concepts, right? A simple pattern finds itself instantiated in many situations. I’d be no more surprised by the fact that natural number addition is often useful than by the fact that lots of things are ordered pairs, or circles, or groups, or categories, or whatever. I’d just say “Well, of course; an ordered pair arises whenever you combine two different pieces of information, a circle arises whenever you have the constraint that some kind of distance must equal some particular value, and so on; it’s hardly surprising that these sorts of things happen over and over in many different contexts”.
Seems to me, most of the a priorish laws to which you refer are just laws of Boolean algebra, and they are often useful to us because Boolean algebras, like ordered pairs, circles, groups, integers, whatever, are an abstract concept of unsurprisingly wide application. [Why would Boolean algebras be of wide application? Because their equational theory is precisely the one generated by arbitrary operations on two labels. But why would the concept of two labels be of wide application? Well, if one is interested in drawing any distinctions at all, there’s a certain minimality… One can keep going down this path till one is convinced that one has hit upon an abstract concept of such simplicity that its ubiquity is no particular marvel]. That is, the a priorish laws are just consequences of bivalence, bivalence is just a fancy way of saying “I’m going to assign things one of two labels”, the use of a system of two labels is fairly natural on minimality grounds, etc. All the explanation of LNC one needs, in the context of a compositional labelling-system of this sort (which is perhaps the most frequent context in which it would come up), is provided by the truth tables for & and ~, and how they conspire so as that LNC is guaranteed, and the same for every other such “law”.
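Spelling that “conspiracy” out as a brute-force check (my own sketch): with two labels and the usual two-valued tables for “and”, “or”, and “not”, LNC, LEM, De Morgan, and the rest simply fall out.

```python
# Brute-force check over the two labels: the usual two-valued tables for
# and/or/not already guarantee LNC, LEM, and De Morgan's law.
from itertools import product

VALUES = (False, True)

assert all((p and not p) == False for p in VALUES)   # LNC: p & ~p is always False
assert all((p or not p) == True for p in VALUES)     # LEM: p v ~p is always True
assert all((not (p and q)) == ((not p) or (not q))   # De Morgan, another such "law"
           for p, q in product(VALUES, VALUES))
print("the a priori-ish laws fall out of the two-valued tables")
```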
I don’t know if that meshes or not with your account.
Are you sure, in all this abstraction, you’re not missing the point somewhat? I think I see the aim of this discussion somewhat differently – for instance: it seems to be the case that there are propositions that happen to be true, in various senses of the word – if something exists, the claim that it exists is commonly thought of as true, for example, ‘true’ here apparently meaning something like ‘conforming to reality’ or something else appropriate. Nothing, perhaps, any deeper than identifying a collection of objects with a specific natural number – each an instance of modelling real-world behaviour with appropriate concepts. And given such models, owing to their abstractness, it is perhaps indeed not surprising that one can find a wide range of applications.
However, the existence of such models, to my mind, is something quite surprising; that it is possible to find concepts such as boolean algebra or number fields or what have you, and furthermore, that we have the ability to understand and utilize them, appears downright mysterious. Neither does it seem necessary for the universe to admit (for instance) simple, classical logic, or indeed any logic at all, nor is it obvious to me why we would have an ability to discern these concepts any greater than, say, a cockroach. That both appears to be the case, to me, seems to be a legitimate cause of wonder.