Is there anything that is theoretically impossible but possible in practice?

It’s actually quite common in physics to have a data analysis problem which is, strictly speaking, ill-posed: you have some observational data, and some quantity you want to derive from it, but the data you want to derive would have a greater information content than the data you actually have. All such problems are theoretically impossible, since there are many possible solutions. However, in practice, there are often heuristics that allow you to choose one of those many possible solutions as the “best” one (usually the simplest, in some sense), and thereby get a well-defined solution anyway.
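
A toy version of that trick, in case it helps (the numbers and the problem are entirely made up): an underdetermined linear system has infinitely many exact solutions, and one common heuristic is to pick the one with the smallest norm, which is what numpy’s least-squares routine hands back for such systems.

[code]
import numpy as np

# Toy ill-posed problem: 2 measurements, 3 unknowns (made-up numbers).
# Infinitely many x satisfy A @ x = b, so the data alone can't pin x down.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])
b = np.array([6.0, 2.0])

# Heuristic: among all exact solutions, take the "simplest" one, here the
# one with the smallest Euclidean norm (what lstsq returns when the system
# is underdetermined).
x, *_ = np.linalg.lstsq(A, b, rcond=None)

print(x)       # one particular solution out of infinitely many
print(A @ x)   # reproduces the measurements: [6. 2.]
[/code]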

As an example that non-physicists do every day: each of your eyes gives you a single, two-dimensional image of your surroundings. A true three-dimensional image would contain much more information than a pair of two-dimensional images, and yet your brain manages to construct a three-dimensional image from them. It does this by making all sorts of assumptions about the nature of the scene you’re seeing: it assumes that certain objects are at roughly similar distances, that most things in the field of view are moving slowly relative to the eye’s response time, that certain things have certain familiar shapes, and so on. Now, it’s possible for some of these assumptions to be false, and in that case the simplest image (the one your brain constructs) isn’t the true one. This is how we get optical illusions. But in practice, optical illusions are fairly rare, so the brain’s reconstruction system really does work, even though it theoretically can’t.
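
The bare geometry behind the stereo part, stripped of everything the brain actually does (all numbers below are invented): once you assume the two images show the same rigid scene, the horizontal disparity of a matched point gives you its distance by simple triangulation.

[code]
# Depth from stereo disparity under the usual pinhole-camera assumptions:
#   depth = focal_length * baseline / disparity
# All numbers below are invented for illustration.
focal_length_px = 800.0   # focal length, in pixels
baseline_m = 0.065        # separation between the two "eyes", in metres
disparity_px = 12.0       # horizontal shift of a matched feature, in pixels

depth_m = focal_length_px * baseline_m / disparity_px
print(f"estimated distance: {depth_m:.2f} m")   # about 4.33 m
[/code]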

Any theory which says you cannot take the square root of a negative number is not talking about the same thing that “imaginary” numbers (i.e., rotations) are talking about.

Through the Looking Glass; early draft:

shouldn’t work coz of all those recurring bits

but try it on 90

The OP is referring to theories that don’t hold up in reality. Actually, there are quite a few of them. There’s even a name for them.

They are called “bad theories.”

Altruism?

If a theory doesn’t match reality, there’s something wrong with the theory.

In the case of the problem of isolating and moving individual notes, I don’t see why this would be theoretically impossible. I imagine it works by doing a spectrum analysis of the signal and figuring out which peaks belong to which notes. I can see practical problems - there could be cases where a peak might belong to more than one note, for example. But with my minimal knowledge of digital signal processing this seems theoretically doable to me.
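
A back-of-the-envelope sketch of that approach (this is certainly not how the actual product works, just the general idea): synthesize two notes, take an FFT, and pick out the peaks.

[code]
import numpy as np

# Two simultaneous "notes" (A4 = 440 Hz and E5 = 659.25 Hz), made-up example.
fs = 44100                     # sample rate in Hz
t = np.arange(fs) / fs         # one second of audio
signal = np.sin(2 * np.pi * 440.0 * t) + 0.5 * np.sin(2 * np.pi * 659.25 * t)

# Magnitude spectrum of the mix.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# Crude peak picking: local maxima well above the noise floor.
threshold = 0.1 * spectrum.max()
peaks = [i for i in range(1, len(spectrum) - 1)
         if spectrum[i] > threshold
         and spectrum[i] > spectrum[i - 1]
         and spectrum[i] > spectrum[i + 1]]

print([round(freqs[i], 1) for i in peaks])   # roughly [440.0, 659.0]
[/code]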

Game theory works pretty well to explain altruism. Look up the “iterated prisoners’ dilemma” sometime.
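
For anyone who wants to see it happen, here’s a tiny simulation (standard textbook payoffs, two classic strategies, round count arbitrary): sustained reciprocal cooperation outscores mutual defection by a wide margin.

[code]
# Tiny iterated prisoners' dilemma with the usual textbook payoffs:
# (3,3) mutual cooperation, (1,1) mutual defection, (5,0) for defecting
# against a cooperator.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=200):
    score_a = score_b = 0
    moves_a, moves_b = [], []
    for _ in range(rounds):
        a = strategy_a(moves_b)   # each player sees the opponent's history
        b = strategy_b(moves_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))       # (600, 600): sustained cooperation
print(play(always_defect, always_defect))   # (200, 200): mutual defection
print(play(tit_for_tat, always_defect))     # (199, 204): reciprocity limits the damage
[/code]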

But, yeah, in general if the theory says something that actually happens is impossible, and someone hasn’t redefined a term behind your back*, the theory is inadequate and has to be modified or replaced.

*(It’s impossible for an unaided human to fly, until someone claims jumping qualifies as ‘flying’.)

You can go further than TSP. Some undecidable problems are in fact quite solvable in most common cases. Higher-order unification is an example of this, for instance. In theory, Huet’s algorithm should behave extremely badly: most general unifiers aren’t guaranteed to exist, and the algorithm isn’t guaranteed to halt. However, empirical evidence suggests that Huet’s algorithm behaves extremely well on the kinds of problems one sees in practice, always halting and generating nice unifiers.

(The reason for this is now known though: something like 99% of all higher-order unification problems that you’d see in practice fall into a restricted class of problems called higher-order patterns, which have a decidable and efficient unification algorithm).
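
For the curious, here’s a toy illustration of just that pattern case (nothing like a real implementation; terms are bare tuples and only a single flex-rigid equation is handled): when a metavariable is applied to distinct bound variables, the unifier is simply a lambda abstraction over the right-hand side.

[code]
# Toy illustration of the higher-order "pattern" case: an equation
#   ?F x1 ... xn  =  t,
# where the xi are distinct bound variables and every free variable of t
# is one of the xi, has the unique solution  F := \x1 ... xn. t.
# Term representation (made up for this sketch):
#   ('var', name)            an object variable
#   ('app', head, [args])    application of a constant to arguments
#   ('lam', name, body)      lambda abstraction

def free_vars(term):
    kind = term[0]
    if kind == 'var':
        return {term[1]}
    if kind == 'app':
        return set().union(*(free_vars(arg) for arg in term[2]))
    if kind == 'lam':
        return free_vars(term[2]) - {term[1]}
    raise ValueError(term)

def solve_pattern(meta_args, rhs):
    """Solve ?F(meta_args) = rhs when it is a pattern equation; else give up."""
    names = [arg[1] for arg in meta_args]
    if any(arg[0] != 'var' for arg in meta_args) or len(set(names)) != len(names):
        return None              # not a pattern: arguments must be distinct variables
    if not free_vars(rhs) <= set(names):
        return None              # rhs mentions a variable that F could not capture
    solution = rhs
    for name in reversed(names):  # F := \x1. ... \xn. rhs
        solution = ('lam', name, solution)
    return solution

# ?F x y = g(y, x)   ==>   F := \x. \y. g(y, x)
print(solve_pattern([('var', 'x'), ('var', 'y')],
                    ('app', 'g', [('var', 'y'), ('var', 'x')])))
[/code]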

Depends on what you mean by “possible in practice”. There are plenty of examples (and I think the audio mix example you cite falls into that category) of things that are impossible to do in theory, but where in practice it’s possible to do something that is not quite the impossible thing, yet gets so close to it in its results that, for all practical purposes, you can use the substitute results, knowing full well that you did just a very close substitute for the impossible thing rather than the impossible thing itself.

A classic example from mathematics is squaring the circle. Doing so under the traditional conditions of geometric constructions (using only compass and straightedge) is impossible, but there are several constructions using only compass and straightedge that yield a square whose area is so close to that of the circle that, for all practical purposes, the method may be used. But the theoretical impossibility of doing the proper thing still remains.
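
To put a number on “so close”: one well-known approximate construction, Kochański’s from 1685 (if I’m remembering the right one), rectifies the circle with a segment of length sqrt(40/3 - 2*sqrt(3)) times the radius in place of pi, and a quick check shows the resulting square’s area is off by about two parts in a hundred thousand.

[code]
import math

# Kochanski's approximate rectification (1685): the construction produces a
# segment of length sqrt(40/3 - 2*sqrt(3)) times the radius, which then
# stands in for pi when building the "equal" square.
pi_approx = math.sqrt(40 / 3 - 2 * math.sqrt(3))

r = 1.0                                   # unit circle
true_area = math.pi * r**2
square_side = r * math.sqrt(pi_approx)    # side of the constructed square
approx_area = square_side**2

print(pi_approx)                                  # 3.141533...
print(abs(approx_area - true_area) / true_area)   # relative error ~ 1.9e-5
[/code]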

In practice, Achilles can easily outrun a tortoise. In theory, I don’t know that anyone has definitively proven this is the case, given the setup.

The race with the tortoise was solved theoretically long ago. The paradox stems from the fact that the sum of the series 1/2 + 1/4 + 1/8 + 1/16 + …, even as you continue it infinitely for (1/2)[sup]n[/sup], does not become infinite; in fact, the sum of that series converges to 2, but never exceeds 2. This is what Zeno did not know, but later mathematicians have long been aware of it. There is absolutely nothing theoretically tricky or inexplicable about this paradox.

I’ve worked with the piece of software in question, and it really is pretty mindblowing. It redefines a lot of what we “know” in audio processing.

The quality which this is able to achieve is also really incredible. Anyone on this board (and in meatspace) who says they “hate autotune” probably has no idea that this thing is being used in almost every professional production today, with the listener being none the wiser, precisely because it works so well.

What a remarkable age indeed…

Another example might be some code-breaking. There are examples of code systems that are mathematically impenetrable – that is, in theory they can’t be broken without trying every possible key, which would take millions of years – but can be broken in practice because the actual physical system leaks information (for instance, if you can carefully time the output from a computer that’s doing the encoding, you might be able to tell how many calculations the computer is doing for each bit of output, which could be enough information to narrow down the possible keys to the point where you can decode the message). All depends on the exact physical system used of course.
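
A toy demonstration of the kind of leak being described (the “secret” check below is deliberately naive, and real attacks are statistical and far subtler): a comparison that bails out at the first wrong character takes measurably longer the more of the prefix you have right.

[code]
import time

SECRET = "hunter2"   # made-up secret for the demo

def naive_check(guess):
    """Compares one character at a time and stops at the first mismatch,
    so the running time leaks how much of the prefix is correct."""
    for a, b in zip(guess, SECRET):
        if a != b:
            return False
        time.sleep(0.001)   # stand-in for per-character work an attacker can time
    return len(guess) == len(SECRET)

def average_time(guess, trials=5):
    start = time.perf_counter()
    for _ in range(trials):
        naive_check(guess)
    return (time.perf_counter() - start) / trials

# Guesses with longer correct prefixes take visibly longer to be rejected,
# which lets an attacker recover the secret one character at a time.
for guess in ["zzzzzzz", "hzzzzzz", "huzzzzz", "hunzzzz"]:
    print(guess, round(average_time(guess) * 1000, 2), "ms")
[/code]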

I think you mean 1, not 2, the way you’ve laid out the series. Usually it starts with the n=0 case, 1, which does sum to 2 over infinity.

You’re right, apologies. The infinite series itself converges to 1, and adding the 1 for the starting case makes two.

Another issue I noticed after posting is that the Achilles and the tortoise race is usually (at least in the versions I’ve heard) told with Achilles being 10 times as fast as the tortoise rather than twice as fast. In this case, the infinite series would be 1/10 + 1/100 + 1/1000 + … (1/10)[sup]n[/sup]. I’m not quite sure what this converges to (I would guess it’s 1/9, but I’m not sure), but it certainly converges to some finite number.

No there is not, including the software in question. If something can be accomplished in practice, then the theory saying it is impossible is either flat-out wrong and must be discarded, or is inadequate and must be updated.

I’m a programmer and I think this is spot on. Software verification is, in theory, impossible beyond trivial cases because of the halting problem, but that doesn’t stop us from coming up with practices like Test Driven Development, or having a Testing department that is organizationally separate from development, to reduce the number of “true” errors to within some tolerance that is considered acceptable.
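
A trivial example of the flavour of test this amounts to (the function under test is made up): we can’t prove it correct for every possible input, but a handful of concrete cases catches most of the mistakes we actually make.

[code]
import unittest

def median(values):
    """Median of a non-empty list (made-up example of code under test)."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

class TestMedian(unittest.TestCase):
    # No proof of correctness for all inputs, just enough concrete cases
    # to push the "true" error rate below an acceptable threshold.
    def test_odd_length(self):
        self.assertEqual(median([3, 1, 2]), 2)

    def test_even_length(self):
        self.assertEqual(median([4, 1, 3, 2]), 2.5)

    def test_single_element(self):
        self.assertEqual(median([7]), 7)

if __name__ == "__main__":
    unittest.main()
[/code]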

Numerical analysis is all about getting easy approximations of things that are much more difficult to compute exactly. If you are building a municipal bridge, the acceptable error threshold may be much more than the error introduced by using numerical analysis approximations.
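
A trivial example of that trade-off (the integral and the grid are arbitrary): the exact value of the integral of sin(x) from 0 to pi is 2, and a coarse trapezoidal sum already lands within an error any bridge engineer would happily ignore.

[code]
import math

def trapezoid(f, a, b, n):
    """Approximate the integral of f over [a, b] with n trapezoids."""
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * (f(a) + f(b)) + interior)

exact = 2.0                                     # integral of sin(x) from 0 to pi
approx = trapezoid(math.sin, 0.0, math.pi, 100)

print(approx)                 # 1.999835...
print(abs(approx - exact))    # error ~ 1.6e-4, far below most engineering tolerances
[/code]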

The three-body problem may be another (any physicists care to comment?). This didn’t stop us from going to the moon, since our calculations were close enough and it didn’t matter if we were 5 mm off our landing site because we failed to take into account the minuscule gravitational influence of Mars.
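
A crude sketch of the numerical fallback (masses, positions, and step size are all invented, and a real trajectory code would use a much better integrator): with no general closed-form solution available, you just step the equations of motion forward and accept a tiny, controlled error.

[code]
# Crude numerical integration of three gravitating bodies in 2D (all
# numbers invented). There is no general closed-form solution, so we step
# the equations of motion forward with a small time step.
G = 1.0
dt = 0.001
masses = [1.0, 1.0, 0.5]
pos = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
vel = [[0.0, -0.3], [0.0, 0.5], [0.4, 0.0]]

def accelerations(pos):
    """Newtonian gravitational acceleration on each body from the others."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

for _ in range(10000):            # semi-implicit Euler: update v, then x
    acc = accelerations(pos)
    for i in range(3):
        vel[i][0] += acc[i][0] * dt
        vel[i][1] += acc[i][1] * dt
        pos[i][0] += vel[i][0] * dt
        pos[i][1] += vel[i][1] * dt

print(pos)   # approximate positions after 10 time units
[/code]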

It is theoretically impossible to completely model the universe, but the universe itself does a good job at modeling it.

Yes, 1/9; it’s 0.111111111111111…, which, as you’re no doubt aware, satisfies X = 10X - 1, and thus X = 1/9.
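
(More generally, both versions of the race come down to the standard geometric series identity: for 0 < r < 1, r + r[sup]2[/sup] + r[sup]3[/sup] + … = r/(1 - r), which gives 1 for r = 1/2 and 1/9 for r = 1/10.)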

I also like to note that one doesn’t need to know anything about convergence of series, as such, in order to dismiss Zeno’s paradoxes. What was the paradox? That an infinite number of events lay in the finite interval of time between the start of the race and Achilles catching up to the tortoise; i.e., that an interval of finite length can contain infinitely many points. But so what? There is nothing wrong with this; the very example pondered illustrates it to be manifestly possible: the interval [0, 1/9] of finite length 1/9 contains the infinitely many points 0.1, 0.11, 0.111, 0.1111, etc. And myriad other examples of this sort of thing exist, naturally; there was never any reason to suppose that an interval of finite length should only have finitely many points.