Have there been any autistic savants of philosophy?

In general, in quantum mechanics (under most standard interpretations), ‘X is Y when unobserved’ doesn’t hold true – assuming that the spin of an electron has a definite value even when it isn’t being measured, together with some other assumptions usually thought to be harmless (like locality, or noncontextuality, which roughly means that the measured value shouldn’t depend on which other quantities are measured alongside it), yields a contradiction. The usual way out is to reject realism – i.e. the assumption that a quantum measurement reveals a pre-existing reality, such as the reality of the orientation of the electron’s spin. It’s also possible to reject locality and noncontextuality instead, which is the route taken by Bohmian mechanics (de Broglie–Bohm theory). But either locality or noncontextuality together with realism yields a contradiction – the former is exemplified in Bell’s famous theorem, the latter in the less well-known Kochen–Specker theorem.
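If anyone wants to see numbers, here’s a minimal sketch (Python with numpy; the standard CHSH setup with the usual optimal angles, nothing specific to this thread): it computes the singlet-state correlations and checks that the CHSH combination reaches 2√2, beyond the bound of 2 that any local-realist assignment of pre-existing values would have to obey.

[code]
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_op(theta):
    """Spin observable along a direction at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state (|01> - |10>)/sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Quantum correlation <psi| sigma(a) (x) sigma(b) |psi> = -cos(a - b)."""
    op = np.kron(spin_op(a), spin_op(b))
    return np.real(psi.conj() @ op @ psi)

# CHSH combination at the standard optimal angles
a1, a2, b1, b2 = 0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)

print(f"|S| = {abs(S):.3f}")   # ~2.828 = 2*sqrt(2)
print("local-realist bound: 2")
[/code]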

So there can’t be evidence that X is Y when unobserved because generally, X isn’t Y when unobserved.

(I see on preview that iamnotbatman has already mentioned Bell’s theorem, but I’m still gonna let this post stand, because IMO, Kochen–Specker yields yet tighter bounds on ‘hidden realities’: either they don’t exist, or reality depends on how you look at it – both of which seem difficult to square with the notion of something objective happening behind our backs.)

As for how this is to be squared with the possibility of making predictions about the world, well, it isn’t: quantum mechanics doesn’t make any definite predictions, just probabilistic ones, so for any given outcome, we are indeed ‘lucky’ that it obtains. It’s just that we can quantify how lucky. At any given time, anything could indeed happen – but this would only make science impossible if anything could happen with the same probability. But the set of possible outcomes is constrained, and these constraints provide the possibility of probabilistic predictions. These aren’t metaphysical, imposed-from-up-high constraints, but simply follow logically.

Perhaps an example makes this clearer. Picture a toy universe comprised, in a sense, of atomic propositions. Such a proposition can only be true or false, so any given ‘measurement’ will yield either 1 or 0; as what these propositions are about is left wholly arbitrary, either obtains with 50% probability. Now, model ‘macroscopic’ measurements as compounds of atomic propositions. Any given measurement then yields a string of binary digits. Macroscopic properties, such as ‘the moon is made of rock’, then correspond to properties of strings of bits, such as ‘there are equally many 0s as there are 1s’. Probabilities then arise from counting: there are very few strings that are comprised only of one kind of digit – two, in fact, so the probability of a measurement having the outcome ‘all digits equal’ is very small (2/2[sup]n[/sup] if the ‘macroscopic measurement’ is composed of n atomic propositions, there being 2[sup]n[/sup] n-bit strings), while the probability of there being as many 1s as 0s is considerably higher. So, in this toy model, things like ‘the moon is made of cheese’ correspond to strings of all 1s or all 0s, while ‘the moon is made of rock’ corresponds to strings with equally many 1s and 0s. Clearly, finding the latter is much more probable than finding the former.
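For concreteness, here’s the counting done explicitly (Python; reading ‘as many 1s as 0s’ as ‘exactly equal’ is my simplification of the toy model):

[code]
from math import comb

n = 100          # atomic propositions per 'macroscopic' measurement
total = 2 ** n   # number of possible n-bit strings

# 'The moon is made of cheese': all digits equal (all 0s or all 1s)
p_cheese = 2 / total

# 'The moon is made of rock': exactly as many 1s as 0s
p_rock = comb(n, n // 2) / total

print(f"P(all digits equal) = {p_cheese:.3e}")   # ~1.6e-30
print(f"P(equal 1s and 0s)  = {p_rock:.3e}")     # ~0.08
[/code]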

So even though there’s nothing ‘rocky’ out there in between observations, we can make predictions about the likelihood that we will see rock when looking at the moon, versus the likelihood that we will see cheese, just as we can make predictions about the likelihood of randomly generating uniform or varied strings of bits (or other properties of bit-strings).

I’ll respond to the whole post later but to clarify on this point–in my post I wasn’t talking about observation in the QM sense. I meant something more like “unsensed.” I didn’t realize iamnotbatman was talking technical.

Well, but isn’t ‘sensing’ just a special case of QM observation, when you get right down to it? To me, and I think probably to most physicists, ‘sensing’ something is the same kind of thing as, I don’t know, that thing being hit by a micrometeorite, or having a few photons bounce off it, or any other kind of interaction – which are all in the end just QM observations.

In any case, I think I was mostly reacting to your assertion that the feasibility of doing science depends on some kind of stable causal continuity behind our backs – by showing that even systems that have no continuity at all, that are completely and irreducibly random, may give rise to ‘macroscopic’ properties that have a certain continuity (like ‘being composed of equal amounts of 1s and 0s’) and that even behave lawfully, if only statistically; thus there is nothing inexplicable in our successful calculations, and doing science still makes perfect sense in such systems.

I think this is a powerful conclusion, actually, as it has at least the potential of doing away with the allegedly unanswerable question of where the laws of physics come from, of who or what made the rules – what ‘breathes fire’ into the equations, as Hawking once put it. The argument is that even if we find some ‘final theory’ that describes all of physical reality, the question of why it had to be this particular theory still remains unanswered, and thus, science can’t answer all questions, so neener.

But if the laws emerge in the above way, that question is moot; nothing decides on the rules the universe has to follow, rather, they just come out by themselves.

Sensing requires a particular threshold. Measurement doesn’t.

So things can exist unsensed. (I originally thought iamnotbatman was claiming there’s no evidence that things exist unsensed.) This is so even if it’s false that things can exist un’observed’ in the QM sense of ‘observed’.

Causality itself may turn out to be emergent from something more basic, so when I said “causally stable” I should have said “stable in terms of whatever it is that underlies apparent causality.” (I didn’t say it because I was trying to avoid complicating things.) Patterns like “there are exactly as many ones as zeroes” count as instances of such stable grounds for prediction. To think that predictions are likely to be true, one has to think there is some such stable ground “out there” even when it’s not being observed. If there weren’t, there’d be no reason to think our predictions have any value.

If there were a true final theory, the question “why this theory and not others” would not be addressable using scientific techniques. Asking the question wouldn’t yield testable predictions. So scientists, in my view, need not concern themselves with it. (As scientists. As people, they’re free to concern themselves with whatever they like of course!)

Of course! But I’m just saying that in order to think predictions are meaningful and can be likely to be true, we must assume there are such laws–emergent however they may be.

I’m not sure about that. Certainly, measurement requires the signal at least to exceed the threshold of detectability, or of noise – it must in some way be distinguishable from all the other shit that’s happening.

But really, the important notion is interaction here, not measurement. Is there a threshold of interaction? One could perhaps argue that quantum mechanically, there is, as there’s a smallest quantum of action (i.e. Planck’s constant) that can be transferred – analogously, for a photon to excite an atom, it must exceed the minimum energy necessary to elevate an electron into a higher shell. Though of course there are continuous spectra in systems that aren’t bound in some way.
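As a concrete (and purely illustrative – the hydrogen numbers are just my example of such a threshold, not anything anyone in the thread was relying on) back-of-the-envelope: the hydrogen n=1 → n=2 transition needs about 10.2 eV, so a photon with a longer wavelength than roughly 122 nm simply can’t excite it.

[code]
# Planck constant times speed of light, in eV*nm (approximate)
H_C_EV_NM = 1239.84

# Hydrogen n=1 -> n=2 transition energy (Rydberg formula), in eV
delta_E = 13.6057 * (1 - 1/4)                 # ~10.2 eV
threshold_wavelength = H_C_EV_NM / delta_E    # ~121.6 nm (Lyman-alpha)

print(f"minimum photon energy: {delta_E:.2f} eV")
print(f"maximum wavelength:    {threshold_wavelength:.1f} nm")
[/code]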

However, even if things didn’t exist ‘unsensed’, I think my argument would be much the same – even though change may occur completely randomly from any one ‘sensing’ to the next, lawful behaviour may emerge just as well.

Well, it’s really the proposed answers to questions that can yield falsifiable predictions, no? And proposing emergent laws might well do that – for instance, one prediction would be that the laws are not valid absolutely, but rather statistically, and thus, might be violated on sufficiently ‘small’ scales. In that vein, the clearest example I have in mind for an emergent law is the second law of thermodynamics, where it is indeed the case that microscopic violations exist.

I hope I’m not just a stickler for semantics here, but emergent laws aren’t assumed – the assumption is that there are no laws at all, and yet they arise out of inevitable collective behaviour.

There’s a difference of priority here: it could be the case that there is no such emergence, that if the laws weren’t built into the fundamentals, then there would be no laws at any stage or level. If things were like this, then it would be true that the question of ‘what made the laws’ would be unanswerable, and I think only then would it also be true that there would be no way of doing science absent some rules operating in the background.

Sorry for the unclarity–I just meant measurement doesn’t require the same threshold as sensing does.

Since sensing requires a higher threshold, it makes sense to say there can be evidence (albeit non-sensory) for the existence of unsensed things, even if there can’t be evidence for the existence of unmeasured things.

But if you’ve got a true final theory, you won’t be able to use answers to the question “why these laws and not others” in order to make falsifiable predictions.

I’ll stickle your semantics even worse. You said “there are no laws,” then you immediately said “they arise.” I don’t think you can have this both ways! (Nor do I think you mean to). They either arise and therefore exist, or they don’t exist.

I’ll wait to see your response to that before I’ll feel like I know enough about your point to say anything further.

If there aren’t rules in the background, then we have no reason to have confidence in any predictions we might make. Don’t you agree? (BTW these rules don’t need to be deterministic. I’m not arguing for determinism!) If you don’t agree, then I need to know what could possibly justify us in having any confidence in our predictions, if we genuinely think of it as a realistic possibility that fundamentally, things don’t follow any rules (again–whether those rules are statistical, deterministic, or whatever).

I do think it’s logically possible–even physically possible in a sense–that we’re victims of a massive coincidence and actual reality follows no rules whatsoever. This “philosophical” (in iamnotbatman’s derogatory sense) view is one we can all recognize as fully compatible with our experience thus far. I’m not saying we know it to be false, or even that we have any good reason to think it false. What I’m saying, rather, is that we cannot have any confidence in the in-principle reliability of prediction unless we assume it’s false. If we don’t assume that it’s false, then we have no reason to think prediction is in principle reliable (because literally anything could happen for all we know, no matter what we have observed (or think we have observed) in the past), and if we don’t think prediction is in principle reliable, then we can’t test hypotheses etc., and all the other good crunchy stuff that goes into the Science mix.

You are continuing to assert that my definition of “philosophical” in the context of a lack of empirical knowledge is derogatory. It is not. If one is faced with a question that cannot be answered empirically, it is common to describe the argument as ‘philosophical’, as opposed to scientific. Am I wrong? Are you asserting that such questions are outside the domain of philosophy?

Especially in discussing quantum mechanics interpretive issues, it is extremely common among physicists to describe those questions as ‘philosophical’ that cannot be answered through empirical test. In other words, science is a subset of philosophy, but philosophy is not a subset of science. I thought this was common knowledge, so I find it strange that you continue to act so bothered about this.

An example of common usage is (from Interpretations of QM wiki):

the precise ontological status of each interpretation remains a matter of philosophical argument [because no empirical test can currently distinguish each interpretation]

Sorry, I didn’t mean that as seriously as it may have sounded.

But, yes, strictly speaking, I’d say it’s misleading at best to call something philosophical just on the grounds that it’s unempirical.

If you said something like “philosophical at best” that’d be different. But simply calling it “philosophical” misses a lot that’s important, and makes philosophy look like a different kind of endeavor than it is.

I actually don’t think “how to interpret quantum mechanics” is a very good example of a philosophical issue (despite what Wikipedia says), not because it’s unempirical, but because (as far as I know) it makes not one whit of difference what one says about it. Philosophy is (supposed to be) about things that matter.

But at least that’s about serious science, and that’s something. Other examples you used (“The moon is made of green cheese when unobserved”) are even worse as examples of philosophical views.

But like I said, I’m not as serious about this as it may have seemed. I do think the view is a derogatory one, (Philosophers are the guys who put a lot of deep thought into questions like ‘is the moon green cheese when we’re not looking at it?’) but I know you don’t mean anything by it! (And it is philosophical to ask “what is the status of a question like ‘is the moon green cheese when we’re not looking at it?’” and it’s easy to mistake that for asking whether the moon is green cheese when we’re not looking at it.)

I think the question about the moon is fundamentally the same as a lot of religious questions that are taken seriously by real philosophers. Unfalsifiable philosophical questions have a rather poor history in my mind, though I don’t view them all negatively. I, for example, have rather strong views about QM interpretational issues, which I am not ashamed to say are purely philosophical.

I don’t believe that “autism” is a useful label or category; I don’t think it serves any purpose. It’s just at the far end of a wide spectrum of human behaviour… and I also feel that a lot of people labelled with autism or Asperger’s syndrome play to it a bit, sort of like the placebo effect.

That’s interesting. Why do you have strong views about QM interpretational issues?

Regarding religious questions, philosophers who spend time on them are typically (when they’re being philosophers, i.e. in professional contexts) working on questions like “Is the concept of God coherent?” and “How are our moral ideas logically related to concepts of God?”, and things like that. The concern is over logical relations between concepts. Certainly you’ll find an article arguing for or against God’s existence in most issues of Faith and Philosophy–but:

A. Arguments against God’s existence are usually arguments that the concept of God is incoherent

B. Arguments for God’s existence are philosophically interesting only in light of what we learn from them about logic. (That’s probably a controversial statement, but it’s my view. And yes it’s a philosophical one!)

Similarly, though no professional philosopher works on the conceptual coherence of green-cheese-ism, if someone did do some thinking about that, I’d definitely say they’re doing philosophy. But that’s very different from simply staking out a position that the moon is made of green cheese.

There are a lot of reasons why two unfalsifiable interpretations are not equal, philosophically. For instance, one can invoke Occam’s razor, or the unification of otherwise disparate phenomena, and so on. I find some QM interpretations to be simpler and more unifying than others, and to have a more coherent logical structure, and thus more likely to represent an underlying physical reality, if it were to exist.

Let’s please be clear that I have not in this thread argued that the moon is made of cheese or anything of the sort. I have merely offered that the proposition:

The moon is made of cheese, except when you look at it or try to measure its properties

is unfalsifiable. While I take this as obvious, some in this thread took issue with it.

Do you consider the statement in question to be objective, or subjective?

The proposition:
The moon is made of cheese, except when you look at it or try to measure its properties

Is unfalsifiable as an objective claim as well as a subjective claim.

I’m not following. Is it an unfalsifiable objective claim or an unfalsifiable subjective claim?

It is unfalsifiable both as an objective claim and unfalsifiable as a subjective claim.

Objective means subject to verification by others in theory.

Subjective is the opposite.

If the statement in question is compatible with the possibility that someone could have seen the moon when it was green cheese (it just happens that no one ever does), then I’d say it’s objective.

If not, then I’d say it’s subjective–though with a lot of hesitation because I’m not sure exactly what it might mean to say it is green cheese if there are no conditions under which the statement could in theory be verified.

Well, if the answer is ‘because these laws are statistically emergent rather than fundamental’, then the prediction that they may fail for sufficiently small samples is falsifiable at least in principle, isn’t it?

I think we’re talking past one another somewhere, but I can’t see where, so on pain of just repeating what I said more slowly and loudly (which I don’t mean to be insulting):

Take the aforementioned random bit-string universe. Any sensing you do corresponds to the bit-string being randomly generated afresh. There are manifestly no laws guiding this; if you look at the moon, and see it being rocky, it means that a random bit-string was created that contained roughly equal numbers of 1s and 0s. If you look again, and your observation remains the same, a fresh random bit string has been created that again contains as many 1s as 0s. But the second bit string doesn’t follow from the first in any way – they are absolutely uncorrelated. This is as manifestly and concretely lawless as I can manage to imagine.

Nevertheless, we can make predictions that have at least a good chance of being true (‘next time I look, the moon will be rocky’). But these predictions don’t depend on the existence of fundamental laws; they are possible because of inevitable collective, stochastic behaviour. It’s only at this point that the laws emerge – they are not logically prior to our ability to make predictions. We don’t need to require that there are laws that govern the behaviour of the world in order to make predictions, we can assume there are no laws, and nevertheless find ourselves with the ability to make predictions, if only statistical ones. At the fundamental level, the laws don’t exist, and they continue to not exist on that level despite their arising on ‘macroscopic’ scales.
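To make that concrete, a little simulation of the bit-string universe (my own sketch; the 10% ‘rockiness’ tolerance is an arbitrary choice): every ‘look’ is a completely fresh random string with no connection to the previous one, and yet the macroscopic prediction ‘the moon will look rocky next time’ comes out true essentially every time.

[code]
import random

def looks_rocky(bits, tolerance=0.1):
    """Macro-property: the fraction of 1s is within `tolerance` of 1/2."""
    return abs(sum(bits) / len(bits) - 0.5) <= tolerance

def observe(n_bits=1000):
    """Each 'look' generates a completely fresh, uncorrelated random string."""
    return [random.randrange(2) for _ in range(n_bits)]

random.seed(0)
observations = [observe() for _ in range(1000)]
rocky = sum(looks_rocky(obs) for obs in observations)
print(f"'the moon looked rocky' on {rocky} of {len(observations)} looks")
# With 1000 bits and a 10% tolerance this is essentially always true,
# even though successive observations share no causal connection at all.
[/code]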

The reason that they arise is ultimately because of some effective information loss: the fundamental, random ground level is incompressible – there exists no description substantially shorter than just writing down the bit string that completely characterises the state at any given ‘time’ (i.e. perhaps instant of sensing, or something). But redundancy creeps into the macroscopic description, since microscopic details may not matter, in a sense: there are many possible microscopic states that map to the same macroscopic one. Thus, the description at this level becomes compressible, and by that token, predictable – it effectively follows laws. So it’s the macroscopic sensitivity only to aggregate properties that introduces both predictability and lawfulness, alongside each other; but fundamentally, the system is still completely lawless.
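In numbers (again just a sketch of the counting): specifying the exact microstate of a 100-bit observation takes 100 bits, while specifying only the macroscopic property ‘how many 1s’ takes about 7 bits – and an enormous number of microstates collapse onto the single 50/50 macrostate.

[code]
import math

n = 100  # number of 'atomic propositions' per observation

micro_bits = n                  # bits to specify the exact string
macro_bits = math.log2(n + 1)   # bits to specify only the count of 1s (0..n)

# Number of microstates behind the single most common macrostate (n/2 ones)
microstates_per_macro = math.comb(n, n // 2)

print(f"micro description: {micro_bits} bits")
print(f"macro description: {macro_bits:.1f} bits")
print(f"microstates mapping to the 50/50 macrostate: {microstates_per_macro:.3e}")
[/code]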

There’s another way for the world to be completely random and lawless, and yet for prediction to be reliable. Imagine, for a moment, it is in principle possible for all your experiences to be simulated on a computer. So that there exists a computer program which, if run, will give rise to exactly your experience of you being yourself in this world. This computer program will be a succession of states, each of which can be represented, say, as a binary code, and each of which corresponds to a moment of your existence (the correspondence need not be literal).

Now, what if I exchange two steps with one another? Would you notice any difference? If steps n + 1 and n are exchanged, this doesn’t change anything; step n + 1 still contains the ‘memories’ of step n, so subjectively, to you, from the inside, things will seem the same as before. So the order of states does not matter to your experience. And what if I leave out a step somewhere? If step n - 1 were followed by step n + 1, would you notice step n missing? Again, it seems like you couldn’t possibly.

So now let’s get back to the random bit string universe. If there is a large enough succession of large enough random bit strings, eventually, one will correspond to one step of the program that implements the computation corresponding to your experience. And eventually, there’ll be another, maybe an earlier one, maybe a later one – as just seen, that doesn’t actually matter. So from this random universe, your experience will arise.
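A minimal sketch of that ‘eventually a matching string occurs’ point (with a toy 16-bit ‘program state’ standing in for a step of the computation – real states would be vastly longer, and the expected wait grows like 2[sup]n[/sup]):

[code]
import random

random.seed(1)
n = 16                                              # toy state length, kept tiny so the demo runs fast
target = [random.randrange(2) for _ in range(n)]    # the 'program step' we're waiting for

# Count how many fresh, independent random strings occur before the target appears.
draws = 0
while True:
    draws += 1
    if [random.randrange(2) for _ in range(n)] == target:
        break

print(f"target state appeared after {draws} random strings (expected ~2^{n} = {2**n})")
[/code]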

This is different from Last Thursdayism in that you can have confidence in your predictions: for the next step of the computation, in which these predictions (provided they were reasonable) hold true, will eventually occur, or at least, it will seem to you as if it did. But all the while, the underlying ‘real’ universe is completely random.

Had an entire response. Deleted it accidentally. It’s all gone.

You don’t happen to have these things updating to your email, do you?

The central gist of it was this:

How can there be “inevitable” outcomes stochastically speaking without rules governing those outcomes?