So...black holes don't exist now?

Yes, there are many contexts, even within science, where the distinction between “see X” and “infer the existence of X via a lengthy and complex sequence of inferences supported by many defeasible assumptions” doesn’t matter. However, in the context of a discussion over whether or not Xs actually exist, it matters very much.

The story is on Yahoo now, and it emphasizes the implication that the result contradicts the standard version of the Big Bang.

Thanks HMHW.

What I especially dislike about this is that they claim that they have “proven, mathematically, that black holes can never come into being in the first place”, when what they appear to have done is run a numerical simulation. They haven’t mathematically proven anything, that I can see.

That exact phrasing is in the UNC press release, so no blaming the journalists for this.

Sorry to prolong the hijack, but: It started at Los Alamos before being moved to Cornell – see the brief history in the Wikipedia article. Not sure where the “xxx” came from, except that it always had a strange sense of humor associated with it. (See the “robots beware” page, for example.)

I think we’re seeing the same objection, but I phrased it clumsily before. On thinking it over more, I can express what I think is the issue more clearly:
1: Hawking radiation is negligible for a stellar-mass black hole. If it’s significant in their models, they must be assuming a much smaller black hole.
2: A real horizon does indeed start out as a single point and expand out from there. They might be assuming that, before the real horizon has expanded out to its full size, it would radiate significantly.
3: However, a real horizon does not have any locally-significant effects like Hawking radiation, and in fact you can cross a real horizon while in perfectly flat space indistinguishable from any other perfectly flat space. What you need for Hawking radiation is an apparent horizon.

In other words, if they mistook the real horizon for an apparent one, they might have seen significant Hawking radiation from it, which might have interesting effects in their model, such as halting collapse. But if this is so, it represents a significant flaw in their model.
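To put a rough number on point 1 above, here’s a back-of-the-envelope sketch using the standard Schwarzschild Hawking-temperature formula (this is textbook physics, not anything from their paper; the mass values are just illustrative):

```python
# Hawking temperature of a Schwarzschild black hole: T = hbar c^3 / (8 pi G M k_B).
# For a stellar-mass hole this comes out far below the 2.7 K of the CMB, which is
# why Hawking radiation is negligible there; it only matters for much smaller holes.
import math

hbar  = 1.055e-34   # J s
c     = 2.998e8     # m / s
G     = 6.674e-11   # m^3 kg^-1 s^-2
k_B   = 1.381e-23   # J / K
M_sun = 1.989e30    # kg

def hawking_temperature(mass_kg):
    """Hawking temperature in kelvin for a black hole of the given mass."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

print(hawking_temperature(M_sun))   # ~6e-8 K: utterly negligible for a stellar-mass hole
print(hawking_temperature(1e12))    # ~1e11 K: significant only for tiny (hypothetical) holes
```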

All that said, however: Modeling collapsing stars is exceedingly difficult, and it’s only been very recently that physicists have been able to get the models to explode at all. If they’re getting interesting and nontrivial results out of their model, it might be a sign that they’re using novel computing techniques which might be able to improve other models… once they get the correct physics ironed out.

Not backing down on this.

Many things in physics, from the smallest smalls in quantum mechanics to the largest larges in cosmology, cannot be literally seen to take place, even by instruments. Even so, physicists use the results that *are* detectable by instruments as confirmation. I cannot imagine any working physicist insisting on the distinction in practice. You might as well ban a physicist from saying, “Hey, did you see that result?” instead of “Hey, did you infer from the electron traces on a computer screen that resulted in a stream of impacts of photons on my eyeballs and interpreted through nerve pulses in my brain and mapped to a pre-existing cipher of meanings that other purported individual sentient beings have done a reverse transfer of information garnered from the results of an electronically-generated set of numerical transformations that implied that unseen and unseeable objects haven’t the property of physical manifestation?”

Sorry. This is silliness that’s hijacking an interesting thread. Context is everything.

Ah, sorry, I thought you meant something I wrote. But you’re right that we need incorrect science, in a sense, as much as we need correct science, in order to find the way forward. Negative results are still results!

What sticks out to me about this is the apparently casual way Dr. Mersini-Houghton claims, “Physicists have been trying to merge these two theories—Einstein’s theory of gravity and quantum mechanics—for decades, but this scenario brings these two theories together, into harmony…And that’s a big deal.”

Yes, that’s a big deal. Really big. If she really did accomplish this, it’s world-shattering. I’m not remotely qualified to evaluate the math behind the claims (I’m just a musician!), but something about this reminds me of the cold fusion debacle in the late 1980s: premature claims, lack of peer review in advance of the big claims being made public, etc.

It’s hard to tell just from the press release, but indeed that’s not a proof. It’s annoying when a demonstration is conflated with a proof. But this could just be a fault with whoever wrote the press release.

Thanks for this!

But showing the stellar-collapse mechanism to be wrong, if the result is valid, is still a massive overhaul, correct?

Dr. Mersini-Houghton was quoted as claiming that it is. See above.

I think this is especially important when the claims are so grandiose.

Exactly!

Thanks again, I look forward to reading more of your thoughts on this topic if you have the time.

One major point: much of what people are complaining about comes from the press release.

A press release, even from a university, is not a trustworthy document. I would not expect the science to be accurate at any more than a layman’s level. I wouldn’t even necessarily believe that the quotes from the scientist are accurate. They may have passed through several hands before the writer saw them; they may be simplified to make them understandable or goosed to make them more interesting. The scientist was probably not given any chance to check the release before it went out.

Of course, all this shouldn’t be true, and I hope that at many places better care is taken. But none of it would surprise me in the least.

A good rule of thumb is to never believe what any press release ever says, but to check the original if at all possible.

I would expect that a university press release would be passed by the original author before being released. Maybe that’s just wishful thinking on my part, but that’s what I’d expect.

The obligatory cartoon.

Thanks for pointing out that there’s a distinction between the real (aka “absolute”) event horizon and the apparent one, a bit I wasn’t aware of. Here’s a pithy tidbit from Wikipedia that I’ll have to study in order to interpret, but which seems to give a good clue concerning the distinction:

The distinction, broadly speaking, can be described as follows:

An event horizon of a black hole is a boundary of a region in spacetime from which nothing can escape to infinity. Being a boundary in spacetime means that its existence is completely independent of whichever spacetime coordinates you choose to use. However, this definition of a black hole, and the surface which defines it, often seems too rigid, mainly for a couple of reasons related to the notion of ‘escape to infinity’.

The first reason is that relying on ‘escape to infinity’ means formally defining what that means, and the usual formal definition only applies to a very specific class of spacetimes. Slightly worryingly, the spacetime that best represents our Universe on the largest scale does not necessarily belong to this class. It seems reasonable, though, to create similar definitions of ‘escape to infinity’ for other spacetimes; however, it’s also clear that there are some spacetimes in which the notion of escaping to infinity simply doesn’t exist. One example of where there can be no notion of escaping to infinity is a closed Friedmann Universe, which, up until a few years ago, was often believed to be the cosmological solution that represented our Universe. And there’s every reason to think that compact objects that looked like black holes and quacked like black holes in all other important respects could exist in a Friedmann Universe.
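For reference, the formal definition alluded to here, which only works when the spacetime is asymptotically flat, is roughly the following (with $\mathscr{I}^+$ denoting future null infinity and $J^-$ the causal past):

```latex
% Black-hole region and event horizon of an asymptotically flat spacetime (M, g):
% the set of events that cannot send signals out to future null infinity, and its boundary.
\mathcal{B} = M \setminus J^-(\mathscr{I}^+),
\qquad
\mathcal{H} = \partial\, J^-(\mathscr{I}^+)
```

A closed Friedmann universe has no $\mathscr{I}^+$ at all, so this definition simply has nothing to attach to there.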

Secondly, knowing whether something is going to escape to infinity usually entails knowing the entire history of the spacetime. For example, if we have two BHs A and B which at the current time are virtually identical, it could be, in theory, that A and B have event horizons that lie in radically different locations due to some extremely divergent behaviour at a time very far in the future. It may even be that B doesn’t have an event horizon due to the late-time divergence and thus, under the most technical definition, isn’t a black hole, despite being virtually identical to the black hole A at the present time. Generally speaking, it might not be practical or desirable to describe the current state of a black hole by a surface which depends on the complete history of the whole spacetime. (NB: by “current” I implicitly mean in terms of an observer at rest and far away from the BH.)

We might want to ask, then, what other surfaces might be useful in describing black holes, and even objects that may not meet the most stringent technical definition of a black hole (but which for most purposes we would consider to be BHs).

One property of the event horizon of an eternal, unchanging black hole, like for example the classic Schwarzschild black hole, is that light rays that travel radially outwards from the BH are frozen at the event horizon in the standard coordinates used to describe the POV of a faraway observer. It is fairly easy to see why this should be, but what is slightly less well known is that for a BH with some form of time-dependence, the surface where light rays appear frozen in some suitably chosen coordinates doesn’t generally coincide with the event horizon (it may be inside or outside the event horizon). This surface is called the apparent horizon.
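To make the “frozen light rays” statement concrete, here is the standard calculation for the Schwarzschild case (nothing specific to the paper under discussion): setting $ds^2 = 0$ for a radially moving light ray gives its coordinate speed as measured by the faraway observer.

```latex
% Radial null rays (d\theta = d\phi = 0) in Schwarzschild coordinates:
0 = -\left(1 - \frac{r_s}{r}\right) c^2\, dt^2
    + \left(1 - \frac{r_s}{r}\right)^{-1} dr^2
\;\;\Longrightarrow\;\;
\frac{dr}{dt} = \pm\, c \left(1 - \frac{r_s}{r}\right),
\qquad r_s = \frac{2GM}{c^2}
```

The coordinate speed goes to zero as $r \to r_s$, which is the sense in which outgoing light is “frozen” at the horizon in these coordinates; for this static case (in the usual slicing) the apparent horizon and the event horizon coincide at $r_s$.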

Unlike the event horizon, the apparent horizon is intrinsically coordinate dependent; i.e. exactly where it lies depends on the coordinate system employed, and it may exist in some coordinate systems but not in others. For example, an observer crossing an event horizon of a Schwarzschild black hole will see light locally travelling at c, so we might suspect that the best coordinates for describing in-falling observers don’t have apparent horizons. A useful property of the apparent horizon is that it depends on the current state of the black hole rather than the complete history of the spacetime, and therefore it is usually easier to find. It’s also related to several important properties of BHs: for example, Penrose’s singularity theorem shows that the existence of an apparent horizon means that the spacetime contains a singularity, as long as certain other conditions are met. An apparent horizon is also a condition required for Hawking radiation (though I may’ve misspoken on the issue earlier in the thread, as it may be a bit more complicated when collapse is involved).

So…does this mean John Preskill owes Stephen Hawking a dollar?

Are Gravastars still viable?

Read “The Bit and the Pendulum” for a fascinating book on the relationship between conservation of information and the 2nd law of thermo, and how it relates to black holes and other topics. Published in 2000, it might be a tad dated, but I bet the framing of the issues would still hold up.

Thanks, fascinating! That jibes with my interpretation of the Wikipedia bit, with considerable amplification. I appreciate your ability to explain these matters in terms that don’t require a PhD to follow.

I’ll try not to extrapolate. It seems to me that a lot of misguided posts here happen when non-experts extrapolate (sans-math) from explanations to laypeople. :slight_smile:

Have you written any books?

Oh boy. Thanks for that. More stuff to ponder!

Some people would argue it’s more fundamental than that, actually. But even without such extremes, conservation of information is a fundamental tenet of quantum mechanics: the evolution of the quantum state takes place in such a way that it’s got all the same information at any point in time, that is, knowing the state (and dynamics) at any point in time allows you to compute the state at every other point in time. Actually, the same is true in Newtonian mechanics—it’s just a re-statement of the fact that time evolution takes place deterministically.

But of course, you’ve heard that quantum mechanics is an indeterministic theory—how does that square? Well, in QM, the state can change in two different ways—via the usual quantum evolution (governed by Schrödinger’s equation), and during measurement. It’s only the latter during which indeterminism enters the picture (and even there, it depends on the interpretation—the many-worlds interpretation, for instance, is completely deterministic, as are (most) hidden-variable interpretations like Bohmian mechanics). For a theory like quantum mechanics, it’s not quite as useful to appeal to determinism as justification for conservation of information; but it turns out that it’s enough to say that for all alternatives that could occur, their probabilities must sum to one—in other words, something must always happen.
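As a minimal numerical sketch of that point (a toy four-dimensional quantum system, nothing to do with black holes; the Hamiltonian and time step are made up for illustration):

```python
# Unitary time evolution U = exp(-iHt) conserves information: the norm (total
# probability) is preserved, and applying U's inverse recovers the original state,
# so the state at one time determines the state at every other time.
import numpy as np

rng = np.random.default_rng(0)
dim = 4

# A random Hermitian matrix playing the role of the Hamiltonian.
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2

# Time-evolution operator U = exp(-i H t) built from the eigendecomposition of H.
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * 0.7)) @ V.conj().T

# A normalized initial state.
psi0 = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi0 /= np.linalg.norm(psi0)

psi1 = U @ psi0                                  # evolve forward in time
print(np.linalg.norm(psi1))                      # ~1.0: probabilities still sum to one
print(np.allclose(U.conj().T @ psi1, psi0))      # True: the evolution can be undone exactly
```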

Gravastars were never really viable. I would say that they were a solution in search of a problem, but they were never much of a solution, either. The “problem” that they were supposed to solve was that black holes have far more entropy than the stars that they form from, but then, one would expect that to be the case. And the mechanism by which gravastars are supposed to form depends upon a great deal of magic.