Even if 2nd law reversals happen, it’s still not possible to use them to do work, because even if you had some mechanism that could supposedly exploit such reversals, it’s equally possible that random fluctuations would undo any temporary gains. It’s sort of like blackjack: you can get a small temporary run against the dealer, but in the long run you always lose.
The law would go something like this: “There is an equal probability that you will draw any particular marble from an urn.” I think the generalized law is the law of total probability, combined with a discrete uniform distribution. This in turn depends on the marbles being picked at random.
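To make that concrete, here’s a quick Python sketch (the colors and draw count are arbitrary choices of mine, just for illustration): draw marbles with replacement from such an urn and check that each color’s empirical frequency approaches 1/5.

```python
# Toy check of the "equiprobable urn" law: draw marbles with
# replacement and compare each color's empirical frequency to 1/5.
# Colors and draw count are arbitrary choices for illustration.
import random
from collections import Counter

colors = ["red", "blue", "green", "yellow", "white"]
n_draws = 100_000
draws = Counter(random.choice(colors) for _ in range(n_draws))

for color in colors:
    print(f"{color}: {draws[color] / n_draws:.3f}")  # each ~0.200
```

With this many draws, every frequency lands within a percent or so of 0.200, which is all the “law” claims.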
~Max
Unless the system starts in a high entropy state, then randomly jumps to a low entropy state, and by purely random chance stays in the low entropy state forever.
The laws of probability are only useful for prediction up to a point - if the only constraint is probability, it is theoretically possible for every coin toss to come up tails forever.
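For what it’s worth, the probability of n consecutive tails with a fair coin is (1/2)^n: nonzero for every finite n, which is the point, but it shrinks astronomically fast. A trivial Python illustration:

```python
# Probability of n consecutive tails with a fair coin: (1/2)**n.
# Nonzero for every finite n, but vanishingly small very quickly.
def p_all_tails(n: int) -> float:
    return 0.5 ** n

print(p_all_tails(10))   # 0.0009765625
print(p_all_tails(100))  # ~7.9e-31
```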
~Max
Here’s what I don’t understand. Is every microstate equiprobable at every moment? Is the probability of a microstate at time t constrained by the microstate at time t-1?
If so, let’s say the billiard table starts out with an initial microstate that would not classically lead to periodic motion. The distribution of billiard balls is somewhat homogeneous. Step forward in time one moment. Surely there is a chance that the arrangement of billiard balls is now a special microstate which would classically lead to periodic motion. Even if this isn’t the state now, at some point you would expect the arrangement of billiard balls to reach such a special microstate. The reverse holds true - one moment the system can have a special microstate and the next moment it might not. This is where I find the disconnect between statistical mechanics and causality. The effect is that all the “dynamic” laws of physics mentioned by septimus are reduced to heuristics. There is a chance, however minuscule, that the eight ball will randomly jump to the opposite side of the table faster than the classical speed of light in a vacuum.
If not, then only the initial microstate of the table is random, and then only for the purposes of the hypothetical. From there on out everything follows classical laws and the second law of thermodynamics as classically defined is never violated. We would still use statistical mechanics to describe systems where the internal state cannot be determined, but that is just a heuristic. The underlying reality would be classical.
~Max
Yes, the classical billiards studied by Boltzmann exhibit Poincaré recurrence, which may seem surprising, but it goes to show that one needs to be careful about how to formulate and interpret the second law of thermodynamics for a finite number of particles, as discussed above, and about how it technically arises, as in Boltzmann’s H-theorem.
Right. So if I present you with two urns, each having five balls of different color inside, and ask you to pick from both; and you find that, picking from one, you get colors in proportion to their rate of occurrence, each 1/5 of the time, and from the other, you don’t, but, say, obtain red 50% of the time, that wouldn’t strike you as odd?
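Here’s a quick Python sketch of that scenario (the 50% red bias is just the hypothetical figure from the post): over many draws, the difference between the fair urn and the “odd” one becomes unmistakable.

```python
# Sketch of the two-urn comparison: a fair urn gives each of five
# colors uniformly; the "odd" urn yields red half the time (the 50%
# figure is the hypothetical from the post). Over many draws the
# difference in red's frequency is unmistakable.
import random

colors = ["red", "blue", "green", "yellow", "white"]
biased_weights = [0.5, 0.125, 0.125, 0.125, 0.125]

n = 50_000
fair_red = sum(random.choice(colors) == "red" for _ in range(n)) / n
biased_red = sum(random.choices(colors, biased_weights)[0] == "red"
                 for _ in range(n)) / n
print(fair_red)    # ~0.2
print(biased_red)  # ~0.5
```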
Fully, of course. These are deterministic systems: the microstate at t-1 completely fixes the microstate at t (and, in fact, at all future and past times, if the system just evolves in isolation).
The probabilities in statistical mechanics stem from ignorance. We don’t have full knowledge of the microstate, and thus, we don’t have full control over it. Setting up a large enough system always means drawing a certain microstate from a distribution consistent with the macroscopic variables we can control. And for virtually all (all but an impossibly tiny subset) of these initial states, if we wait long enough, we’re gonna see entropy dipping down. As has in fact happened in the laboratory.
If you throw a die, it’ll end up showing each number randomly with probability 1/6. If you trust that’s what’s going to happen, you should have no trouble accepting the equiprobability of microstates—because it’s exactly that. If the die fails to behave that way, at least in the limit of large numbers of throws, you’ll call it unfair—you’ll assume it has been tampered with in some way. That you’re justified in concluding this is exactly because anything other than equiprobability would require some special intervention.
No. If it’s in a periodic microstate at t, it was in a periodic microstate at t-1, because the state at t-1 completely dictates the state at t. (Of course, as DPRK rightly points out, for any finite system, there is always at least Poincaré periodicity, but the sorts of periodic movement relevant here are cycles on much shorter timescales that don’t visit every possible microstate.)
No. For almost any microstate of the system, classical, deterministic evolution of its constituents will lead to lower-entropy states after having visited higher-entropy states. In the limit of continuous variables, this happens with probability 1 (i.e. the microstates for which this isn’t true form a measure-zero subset of all microstates you could set up initially).
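If anyone wants to watch such entropy dips happen in a toy model, here’s a small Python simulation of the Ehrenfest urn model (my choice of model, and my parameters, purely for illustration): N particles hop at random between two halves of a box, and the coarse-grained entropy mostly rises toward the 50/50 maximum but fluctuates downward along the way.

```python
# Toy demonstration of entropy dips: the Ehrenfest urn model.
# N particles hop at random between two halves of a box; the
# coarse-grained entropy S = ln C(N, k) (k = particles on the left)
# mostly rises toward the 50/50 maximum but fluctuates downward
# along the way. Model choice and parameters are mine.
import math
import random

N = 20
left = N  # start in a low-entropy state: all particles on the left
entropies = []
for _ in range(2000):
    # Pick a particle at random; whichever side it's on, it hops over.
    if random.randrange(N) < left:
        left -= 1
    else:
        left += 1
    entropies.append(math.log(math.comb(N, left)))

dips = sum(1 for a, b in zip(entropies, entropies[1:]) if b < a)
print(f"entropy decreased on {dips} of {len(entropies) - 1} steps")
```

Over 2000 steps, a sizeable fraction of the individual steps decrease the entropy, even though the long-run trend is firmly toward the maximum.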
This doesn’t follow.
The reason we can’t use entropy reversals to do useful work is that the chance of a thimbleful of water spontaneously developing a 1 °C temperature gradient is probably such that you’d need to wait many times the universe’s current age to get even odds (I haven’t done the math, though).
But were such a fluctuation to happen, there’s no reason it would need to behave any differently from any other heterogeneous material at that point.
No reason why the system should “know” it needs to leap back to some former state, and so no obvious reason you couldn’t do work.
And this sort of thing, of course, routinely happens in the right circumstances. As per the above-cited wiki article on the fluctuation theorem:
:eek: The claim about mitochondria seems amazing, though I don’t really understand it. The only citation in that Wikipedia section is to this 2006 paper. I hope someone above my paygrade will determine if that paper even makes the claim the above-quoted Wiki summary shows.
I searched the text, and the word ‘mitochondria’ doesn’t appear. That’s all the effort I’m really willing to expend on that…
On another note: Max S., what do you make of papers like this one?
Do you think the experimental methodology was suspect? Are they misinterpreting their results?
I haven’t looked at the paper, but my 20€ are on “improperly isolated systems”.
So how do I collect?
First you begin by proving that the systems are properly defined and truly separated from any contact with the rest of the universe, then we talk IBANs.
That paper purports to demonstrate experimentally the applicability of the Fluctuation Theorem, which — like Boltzmann’s theorem — is a mathematical theorem relying on certain assumptions. It concerns the entropy production rate of certain classical dynamical systems, and it is actually interesting to put to experimental test because, we should not forget, all this mathematical discussion of idealized billiard balls and the like is supposed to have measurable and testable implications for real physical systems, like the Second Law of thermodynamics, the subject of this thread.
Back to the Fluctuation Theorem: for a system (satisfying certain assumptions…) that is not at equilibrium, so that its entropy could either increase or decrease(?!), it gives the ratio between the probabilities of positive and negative entropy production over any interval of time. In particular, it can happen (though this is only at all likely in a small volume and over a short interval) that the observed entropy production is negative. However, note that the average entropy production, no matter how small the volume or how short the observation time, is always positive. In the authors’ own words, “the [Fluctuation Theorem]… is completely consistent with the Second Law of Thermodynamics, and it can be considered a generalization of this Law to small systems observed for short periods of time.”
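If I have the notation right, the quantitative statement (in the Evans–Searles form, with Σ_t the time-averaged entropy production over a time t) is:

```latex
% Fluctuation theorem (Evans–Searles form): the probability of
% observing entropy production A over time t, relative to
% observing -A, grows exponentially in both A and t.
\frac{P(\Sigma_t = A)}{P(\Sigma_t = -A)} = e^{A t}
```

So negative entropy production is exponentially suppressed rather than forbidden, and averaging over the whole distribution always yields a non-negative mean.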
So, while one should of course critically examine the experimental setup (which is described in the paper, involving 100 latex particles in a 1.0 ml glass cell full of water), the reported results are completely consistent with the mathematical model, precisely the same type of mathematical model considered and used by Boltzmann and others to derive the Second Law of thermodynamics (which, again, is in fact observed in real life; if it be “routinely violated” then I have overlooked something in this discussion).
Before coughing up the dough, note that the system consists of latex powder in water in a glass cell being zapped with a laser and observed through a microscope. In their computer simulation, particles along the walls act as a momentum and heat source/sink. So it depends what you mean by “isolated”… (cf. the different possible “ensembles” in statistical mechanics)
Well, no. It’s a published, peer-reviewed scientific study in one of the highest-regarded journals in physics; if you want to claim it’s wrong, the burden of proof lies squarely on your end. I don’t think you’d really want it to be acceptable to claim, when confronted with such a study, that it’s wrong until proven right.
Besides, it’s ‘just’ an experimental confirmation of a well-studied theoretical result, which came out just as theory predicted. To have the theory wrong, and the experiment as well, in just the right way to ‘confirm’ the theoretical prediction, or to have the experiment mistakenly show the predicted theoretical effect, would both be considerably less parsimonious explanations than simply accepting that the experiment observed what it was set up to observe (and again, to assign validity to such rebuttals against scientific work would likewise require assigning validity to all manner of creationist, climate-skeptic, and other quackery).
DPRK, I think we have an issue of terminology here, so let’s try to sort that out. The story I was taught is as follows. In studying the theoretical capacity of heat engines, people like Carnot and Clausius came up with certain laws bounding their efficiency, like the second law of thermodynamics. This can be stated in the form ‘for an isolated system, entropy can never decrease’. This was mainly based on empirical evidence.
Later on, Boltzmann, Gibbs, Maxwell and so on formulated kinetic theory, which was able to derive the earlier laws as statistical predictions, valid on average, but not, as was thought previously, inviolable. Entropy can, in fact, spontaneously decrease. This, to me, is all that is meant by a violation of the second law—it was thought to be an exact, universally applicable law, but turned out to be merely statistical. (Which, perhaps counterintuitively, is exactly what makes it so—virtually—inviolable: I can imagine universes where things move faster than light, where there’s no quantum theory, no weak or strong force, or what have you; but I can’t imagine one in which there’s no second law, because fundamentally, it just boils down to the fact that more likely things happen more often.)
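To put a number on “more likely things happen more often”, here’s a small Python calculation (N = 100 is my arbitrary choice) counting microstates per macrostate for N two-state particles:

```python
# "More likely things happen more often", made concrete: for N = 100
# two-state particles, count microstates per macrostate (number of
# heads). The 50/50 macrostate alone holds roughly 8% of all
# microstates; the all-heads macrostate holds one part in ~10**30.
from math import comb

N = 100
total = 2 ** N
typical = comb(N, N // 2)  # microstates in the 50/50 macrostate
extreme = comb(N, 0)       # exactly one all-heads microstate
print(typical / total)     # ~0.0796
print(extreme / total)     # ~7.9e-31
```

Even at a mere 100 particles, the near-even macrostates utterly dwarf the extreme ones, which is all the statistical second law ultimately needs.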
I think you agree with me that entropy, even in a closed system, may spontaneously decrease, solely by virtue of the effectively random microdynamics. However, you don’t want to call that a violation of the second law. Hence, I repeat my earlier question: do you appeal to some different formulation of the second law in that case, one that refers perhaps to the average entropy production?
I just went back and checked—the textbook from which I was taught statistical mechanics in fact does refer to downward fluctuations of entropy as violations of the second law, so if I’m somehow not up on current parlance there, I at least have an excuse.
Is there some variation among textbooks, then? For instance, in Landau & Lifshitz (hardly current parlance!) we read:
Note that we are considering macroscopic bodies/systems, not individual particles. Now, even if very improbable downward fluctuations of entropy are to be regarded as a violation of the second law, an important point is that “in practice… in Nature” nobody is ever going to observe such a violation. It wouldn’t be much of a Law if they did.
Goodness, my “physics” textbook was some generic mid-2000s Prentice Hall textbook for physical science with pictures of balloons on the front. I don’t think it mentioned thermodynamics at all, except how to convert centigrade to Fahrenheit.
~Max
Odd? Very. Impossible? No. I’ve always been hesitant to assign absolute or near-absolute certainty to probability theory. It’s like an educated guess.
You can’t build absolute laws on probability, and at least three formulations of the second law of thermodynamics are absolute laws: the three in the original post, plus the entropy corollary using the classical definition of entropy. I have yet to be convinced that any of these laws are contradicted by theory or observation.
The only law that might be violated is the entropy version, using the statistical definition of entropy. But it would seem the original proof of that formulation relied on both Clausius’s version of the second law and a classical definition of entropy. So I’m not sure the statistical-entropy version of the second law was ever a “law” to begin with.
~Max