Falsifiability dethroned as the criterion of science?

When you “explain” that the universe we see is the way it is because, in the space of all possible universes (unobservable, but imaginable), there must be one with the properties to sustain life, etc., it might be better termed “modal realism”. It all depends on how “real” those other universes need to be, I suppose. One can never study them even in principle, so it doesn’t really matter whether they’re real or not. It’s apparently the culmination of String Theory research so far, referred to as the Landscape, and has some relation to Eternal Inflation theories involving so-called “bubble nucleation”. These bubble universes, vacua, whatever you want to call them, are, as I alluded to above, causally isolated from our own.

The program is to assert they “exist” (using that term loosely), calculate that they exist in infinite or incredibly vast numbers, and voila, you get our own reality, necessarily. The energies needed to test whether these ideas are demonstrably relevant to the universe we can observe are quite possibly inaccessible to all but those with god-like powers, especially when you consider there is this infinite variety of possible vacua, with no reason to expect that extra symmetries beyond the Standard Model of particle physics are spontaneously broken much below the Planck scale. It’s a phenomenologically unassailable position, essentially. You’re just plain lucky if you can do experiments to test any of it.

Apparently, this is the cutting edge. More than a few Nobel Laureates have embraced it completely. So, yeah, falsifiability appears to be going the way of the dodo. Now it’s all mathematical consistency, beauty, and symmetry. Experiments are nice, but optional.

Hmm. I’ve recently read a book on string theory, Greene’s second book, and one on quantum gravity, and that’s not the impression I got at all. The physics is at the stage where they are still working out the math, trying to see which competing ideas are really different ways of looking at the same thing, and trying to figure out what experiments can be done at our current level of technology. There was no sense that experiments were not necessary any more - just that since we can’t afford to do them, we had better work on the math. Could you give a cite from someone saying experiments are optional in principle?

If we eliminate verifiability and falsifiability as defining characteristics of science, then what’s the difference between modern scientists arguing about string theory and medieval theologians arguing about how many angels can dance on the head of a pin? :confused:

Dang it, I’m just a simple country boy who’s read a few books and thought he knew a few things. This is one of the few things I thought I knew. Don’t take this away from me.

The Cosmic Landscape

Susskind refers to those who resist this program for fear it is unfalsifiable as “Popperazi”. He seems to just know experimental results will come some day, but for the present, and perhaps the foreseeable future, testable predictions are simply irrelevant. Our only hope is to “explore the Landscape”.

Greene appears to me to be someone who hoped there would be a selection mechanism that would inevitably produce one, or at least some small number, of Calabi-Yau manifolds (the shapes in which the extra dimensions are compactified) that describe our universe. This hasn’t happened. In fact, many have abandoned the hope of a selection mechanism altogether. Instead, they’ve attempted to show that vacua with small cosmological constants are more likely to arise in the Landscape, and hence so is our universe, making ST “predictive” in that manner. That hasn’t worked out either, so far. Apparently, calculating such a probability is an NP-hard problem, at least according to some researchers. It’s reportedly possible to narrow down and refine other predictive models even when simulating the physically relevant phenomena from first principles is NP-hard, but this is done with some guesswork and a lot of experimental comparison to the kinds of physical phenomena one wishes to simulate.
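To put a toy example on that NP-hardness claim (my own illustration, with made-up numbers, not anyone’s actual calculation): in the usual flux-vacua picture, the cosmological constant is roughly a sum of many independent flux contributions, and asking whether some choice of fluxes lands within a tiny window around zero is essentially a subset-sum problem, a classic NP-complete problem. Brute force means checking 2^N combinations:

```python
# Toy sketch only: "find flux choices giving a tiny cosmological constant"
# treated as subset-sum. The contributions are invented numbers; the point
# is the 2^N blowup, not the physics.
import itertools
import random

random.seed(0)
N = 16                                   # number of flux directions (toy)
fluxes = [random.uniform(0.1, 1.0) for _ in range(N)]
bare = -sum(fluxes) / 2                  # made-up negative bare energy

# Exhaustive scan: every subset of contributions switched on.
best = min(abs(bare + sum(subset))
           for r in range(N + 1)
           for subset in itertools.combinations(fluxes, r))

print(f"scanned 2^{N} = {2**N} vacua, best |Lambda| = {best:.2e}")
# Each extra flux direction doubles the work; at the hundreds of flux
# directions often quoted for the Landscape, exhaustive search is hopeless.
```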

It’s true ST is sufficiently ill-defined that it could yet produce a more conventionally scientific research program. Thus far, after 30 years, it hasn’t, but few wish to call it unscientific. Hence, I can only conclude that we’ve effectively redefined “science” as an experimental enterprise only so long as you can do the experiments. Otherwise, if you believe your idea is so great it’s “the only game in town”, experiment can be deferred indefinitely.

We’ve had this before. Contrary to this strawman which has mysteriously sprung up about string theory, those involved are trying as hard as they can to derive observable consequences which could show it to be just plain wrong. The trouble is that these strings vibrate at such enormous energies that getting the maths just right to yield the subtle, low-energy vibrations we are privy to in our region of the universe is like trying to play a Rachmaninov piano concerto using a fork-lift truck: not impossible, just very, very difficult with the mathematical tools invented so far.

String theory could be proven outright wrong if really weird results start cropping up at the LHC next year, or if the Higgs or the superpartner particles don’t appear in the even bigger accelerators which follow. It could be proven very promising if they do: there are useful tests right there.

As for extra dimensions, the overwhelming strength of the electromagnetic force in our region of the universe (a baby can counteract the puny gravity of the entire Earth!) means that we have only tested the inverse square law of gravity down to about a tenth of a millimetre. Deviations from that law at smaller scales (i.e. gravity “leaking” from our three dimensions) would be a huge experimental boost for extra dimensions. Again, falsifiability is restored.
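For the record, the standard back-of-envelope picture of why short-distance gravity tests probe extra dimensions (a generic parametrization, not specific to any one model): with n compact extra dimensions of size R, gravity’s field lines spread into the extra directions below R and the force law steepens, and torsion-balance experimenters typically fit any deviation with a Yukawa correction:

```latex
% Gravity with n compact extra dimensions of size R (schematic):
F(r) \propto \frac{m_1 m_2}{r^{2+n}} \quad (r \ll R), \qquad
F(r) \propto \frac{m_1 m_2}{r^{2}} \quad (r \gg R)
% Torsion-balance searches usually parametrize any deviation as:
V(r) = -\frac{G\, m_1 m_2}{r}\left(1 + \alpha\, e^{-r/\lambda}\right)
```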

I think you kind of missed the point here. I wasn’t referring to string theory specifically, only using it as an example. My point is, if you remove verifiability and falsifiability as defining characteristics of science, then what’s the difference between modern scientists arguing about arcane scientific theories and medieval theologians arguing over arcane metaphysical doctrines?

Absolutely none of those things will make a difference. There’s already some evidence that low-energy SUSY may not exist. If that’s the case, the LHC won’t find it. It doesn’t matter, since, with the Landscape, SUSY-breaking could occur at much higher energies. Trouble is, even if sparticles are found by the LHC, SUSY is its own beast: ST may need it, but not the other way around, so finding them would only demonstrate ST isn’t completely wrong. The way to show ST might be completely wrong is to show there’s no SUSY. But how high up the energy scale must one go to demonstrate that? No one knows.

Extra dimensions don’t even need ST. There are a number of models out there invoking extra dimensions (and they’re not supergravity theories, either) that are particle theories and don’t involve ST at all. Anyway, the important point isn’t that there’s plenty of room to test Newton’s inverse square law; it’s that it doesn’t matter whether you find deviations at 50 microns, or at something ten orders of magnitude smaller than that, or smaller still. All you have to say is that they might be there, and if you haven’t found them, maybe you didn’t probe fine enough scales. This could very well hold all the way to the Planck scale, which takes unimaginable technology to probe directly. With an infinite or nearly-infinite menu to choose from, there’s no way to know, really. ST would hold “true” in any and all circumstances. It simply isn’t predictive in any conventional sense, and if it turns out one can test it this way, it’s just a matter of luck. I’d say this changes the nature of science rather radically. At any rate, it’s already happened. I no longer rail against it because it’s a fait accompli. I simply acknowledge now that the rules have changed, for all practical purposes. Falsifiability (at least, the kind mere mortals can achieve) is to be hoped for, not required.

Susskind put it here quite succinctly:

Ah, I see, sorry Lonesome. Yes, falsifiability or testable consequences or whatever else are still essential to science: it’s just that sometimes a great deal of work has to be done before the tests become clear. String theory is thus still scientific.

And sorry for misattributing the quoted question to you, Loopy. But those successively more powerful accelerators do provide useful tests. It’s no use complaining that they don’t provide definitive tests very soon. In the spaces between new colliders, the groundwork will be laid to try and try again to elicit consequences which forbid certain formulations of ST. This is an eminently scientific endeavour - who says that something which requires trillions of dollars and decades or even centuries of painstaking effort can no longer be “science”?

It would be hugely disappointing if, no matter how hard we tried, the universe was such that the ultimate tests of the ultimate hypotheses were ultimately impossible in practice. But declaring so now just seems the height of impatience. What’s another few centuries in 13.7 billion years, after all, especially seeing how far we’ve come in the last couple?

You seem to be missing the point of what I’m saying. I’m saying that it doesn’t matter whether we’re patient or not. The ground rules, as presently defined, are that the need for patience, or lack thereof, is an accident. And, as virtually any and all possibilities “exist” in some sense, there’s no reason to expect ours will be amenable to conventionally “scientific” endeavors. If it is, great. If it isn’t, c’est la vie. To posit that one can make galaxy-sized machines (or larger) to test a theory if need be, and that hence it is testable, is pretty audacious, but that’s where we are.

So, I acknowledge it, I no longer argue against it, I just state, for the record, that it would appear folks who we call “scientists” are doing “science” producing hypotheses based upon a theory they themselves state explicitly is so incomplete they “don’t know what it is”, and which quite possibly cannot be falsified by any means imaginable. They’re quite hopeful it can be before they die. Or before their great-great-grandchildren die. Or maybe later; no one knows. If we’re going to talk about “science”, and try to define what it is and isn’t, we must be honest and acknowledge that it may be evolving beyond the strict Popperian model, if it ever adhered to that in the first place. It probably didn’t, and we’ve just been fooling ourselves in an attempt to shout down creationists, and so forth. It’s not a very comfortable acknowledgement to have to make, that “science” is what the scientists say it is, but what else have we got, at this point? I no longer claim to know.

Then I’m not sure where we disagree, if at all. I thought you were suggesting that the mere possibility that “conventionally scientific” tests might be practically impossible is reason enough to declare science to have been radically changed now. Of course, such a declaration could have been made by anyone in history since they had no idea what might be amenable and what might not: far better to optimistically have a go anyway until otherwise demonstrated (as, indeed, Brian Greene says on the same page as Susskind).

But in what way is this a qualitative shift? Surely the timescales from hypothesis to test have simply been lengthened? Einstein’s miracle-year papers and General Relativity were works of theoretical physics in which he mucked about mathematically from starting premises without necessarily knowing whether he’d get anything testable from it - indeed, he had to wait until 1919, when, luckily, a total eclipse allowed Eddington’s observations. Witten et al. are, IMO, doing just as Einstein did at his desk. They might just have to wait longer.

The difference there, surely, is that the falsifications have already taken place decades or centuries ago, and some people simply won’t accept it?

I consider the idea that Einstein developed SR from pure thought to be misinformed, but that’s another debate. At any rate, experimental support already existed, though A.E. claimed he didn’t much care about it. GR is much more the pure product of a struggling intellect, but the thought experiments that led A.E. to the equivalence principle suggested others that were eminently doable even with 19th century technology. One needn’t wait all that long for eclipses or iterative improvements in telescopes. As it is, few took GR terribly seriously until Eddington’s (somewhat flawed, but still reasonably solid) experimental tests were performed. Even Einstein stated his calculation of the precession of Mercury’s perihelion gave him palpitations, though his confidence in his theory was so solid he claimed he didn’t much care about Eddington’s result.

I think, in recent memory, perhaps the most audacious recognition of achievement in theoretical physics was the Nobel Prize awarded to Weinberg, Salam, and Glashow for their model of electroweak symmetry breaking in 1979. It wasn’t until a few years later, in 1983, that the W and Z bosons were discovered at CERN. Presently, some folks even view the Higgs boson with some suspicion, even though the Standard Model absolutely needs something like it, some sort of Higgs mechanism, for the already-tested electroweak theory to work. Over here you’ve got folks who are cautious about the existence of the Higgs; over there you’ve got folks who posit a multiverse from completely untested, and quite possibly untestable, principles of total unification and symmetry. I cannot think of another development in modern science that has yielded a mature field which even its most ardent supporters are unsure is testable. I don’t think Einstein thought that about GR. I don’t think Weinberg thought that about the electroweak theory, or the Higgs. Is this really a qualitative shift, perhaps a paradigm shift of the sort Kuhn was fond of? I don’t know, but it sure looks that way to me.

Loopy, wouldn’t the inability to find evidence for extra dimensions at even very small scales falsify string theory? Finding them wouldn’t prove it, since other theories would also fit, but showing they do not exist would falsify ST. In the meantime it is believed by many because it explains more.

Well, we beg to differ there, then. I think both Einstein and Weinberg et al. just went right ahead and mathematized, trying this or that avenue or dead end out until they found interesting relations or permutations, with barely a thought about actual experiments while they were doing that crucial mathematizing. And I think Witten et al. are doing precisely the same. There is, and always was, the possibility that the actual practical experiments could be foiled, somehow.

If what I’ve read is correct, thus far, yes. The most internally and externally consistent models using strings require 9 or 10 spatial degrees of freedom, more than can be accommodated in a 3+1d universe. That might change, perhaps, but, for now, it looks like you literally need many new directions in which things can wiggle. Some of those other directions might be small or vast, but typically they’re thought of as being so small we don’t notice them. It’s also generally thought that lengths smaller than the Planck length have no physical meaning, so if you can probe lengths that small and see no evidence of non-point-like particles or changes in the behavior of gravity (like the ability to create micro black holes with event horizons larger in diameter than the Planck length), then you can probably rule out theories that require extra dimensions.
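For reference, the scales being thrown around here (standard textbook definitions, nothing exotic):

```latex
% Planck length and Planck energy, built from hbar, G, and c alone:
\ell_P = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-35}\ \text{m},
\qquad
E_P = \sqrt{\frac{\hbar c^{5}}{G}} \approx 1.2 \times 10^{19}\ \text{GeV}
```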

We shouldn’t kid ourselves about what kinds of machines that would take, though. It’s fair to say we’re not even completely sure whether probing Planck-scale physics directly is doable even in principle. Say there are no big surprises in constructing the proper instruments. Even if it’s reasonable to try, it will take vast resources: colliders with lengths measured on stellar or even galactic scales. I just think there’s such an unprecedented collection of “ifs” built into current theoretical physics that it’s presently being done on a whole new level. It’s unlike anything we’ve ever seen before. The LHC will give people plenty of interesting things to grapple with for a while, but, again, there’s no reason to assume, presently, that what the LHC finds will make a whit of difference to the pursuit of a Theory of Everything. Maybe it will blow the field wide open; maybe it’ll just find a Higgs and nothing else but stronger confirmation of the Standard Model. Maybe the ILC will turn up something promising, or the next big space telescope, or other, more powerful instruments we haven’t yet conceived of. Or maybe none of them has even a ghost of a chance. No one can say for sure. And if even all the experiments in our lifetime and beyond fail to test this putative Theory of Everything, I see no reason to assume people will stop working on it. Ever. That’s the reality of Science, as I see it, and it’s weird.
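Just to put rough numbers on “stellar or even galactic scales” (my own back-of-envelope, assuming nothing cleverer than LHC-grade 8.3 T bending magnets and the usual synchrotron relation r ≈ E/(qBc) for an ultrarelativistic proton):

```python
# Back-of-envelope: bending radius of a circular proton collider reaching
# the Planck energy with LHC-grade dipole magnets. Naive scaling only;
# synchrotron losses are ignored, and they only make matters worse.
e = 1.602e-19          # elementary charge, C
c = 2.998e8            # speed of light, m/s
B = 8.3                # dipole field, tesla (LHC-like)
E_planck_GeV = 1.2e19  # Planck energy

E_joules = E_planck_GeV * 1e9 * e
radius = E_joules / (e * B * c)        # r = p/(qB), with E ~ pc
light_year = 9.46e15                   # metres

print(f"bending radius ~ {radius:.1e} m ~ {radius / light_year:.0f} ly")
# Roughly 5e18 m, i.e. hundreds of light-years: a machine the size of
# a stellar neighbourhood, before you even worry about powering it.
```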

Well, if that’s the case (and I’m pretty sure it’s not entirely so in the case of Einstein and Weinberg, though quite possibly so in the case of Witten), perhaps theorists have always behaved as if experiment were superfluous, and even thought so. If that’s correct, then Popperian falsifiability is a quaint myth in practice.

But that fails to establish ST as non-falsifiable in principle - being non-falsifiable as a matter of practicality is different.

It brings me back to the point of science, and here I’ll correct myself. Science is the attempt to understand how things work in order to make better predictions about future observations. Any aspect of physics that fails to provide even the hope of making better predictions about future observations would indeed cease to be Science, even if it is a scientist doing it.

Well, Planck-scale physics is so far off it’s hard to know. We don’t even know if we need to probe it. Only time will tell. I personally had hoped that a predictive theory is something scientists would feel duty-bound to pursue if they wished to retain the title, but that’s no longer a given, it appears. So we simply wait and acknowledge the messy reality.

No, it just means that there’s a division of labor. Even if Popper was right, and the scientific enterprise must involve falsification, it doesn’t follow that every practitioner must be thinking about how to falsify his or her work at every stage. That is, it may be that the scientific community as a whole is characterized by the production of falsifiable statements. But these statements will be the product of the labor of many workers, and some of these workers can ignore issues of falsifiability while contributing their piece of the labor. Indeed, for some it may be a harmful distraction. But that does nothing to contradict Popper’s thesis.

I hope, in this case, that you’re right.