Entropy - why can't it decrease?

I’ve been reading a pop-sci book and it’s gotten me thinking about entropy and the second law of thermodynamics. One thing I’ve never fully understood is why it is said that entropy can never decrease for a closed system. My understanding is that entropy counts the number of microstates that produce a certain macrostate, i.e., a system where lots of microstates produce the same macrostate is high in entropy, whereas a system with only a few microstates that can produce that macrostate is low in entropy.

The second law of thermodynamics states that systems will tend towards higher-entropy states via random processes. The processes individually do not prefer any states and essentially just act as random perturbations to the system. So if we take a low-entropy state and apply a random perturbation to it, it is more likely to end up in a higher-entropy state merely because there are just more configurations that result in a higher-entropy state.

The example that comes to mind is keeping neatly wrapped-up headphones in your pocket. When you first put the headphones in your pocket nicely coiled up, they are in a low-entropy state, because of all the configurations of the cable, only a small number are nicely coiled; many more states are tangled and a mess. So the cord starts in the low-entropy coiled state, your walking and jostling applies random changes to it, and the cord essentially selects a new random state. Since there are more knotted states than non-knotted ones, you would expect the headphones to have a higher likelihood of coming out knotted.
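To make the counting concrete, here’s a toy simulation of the headphone argument. The numbers are invented for illustration (there’s no real cable physics here): assume a million distinguishable cable configurations, of which only ten count as "nicely coiled," and assume each jostle picks a uniformly random new configuration.

```python
import random

# Toy model (invented numbers, not real cable physics): suppose the
# cable has 1,000,000 distinguishable configurations, of which only
# 10 count as "nicely coiled".
TOTAL_STATES = 1_000_000
COILED_STATES = 10  # states 0..9 are "coiled"; everything else is tangled

random.seed(0)
STEPS = 10_000
tangled_steps = 0
for _ in range(STEPS):
    # each jostle replaces the configuration with a uniformly random one
    state = random.randrange(TOTAL_STATES)
    if state >= COILED_STATES:
        tangled_steps += 1

# virtually every step lands in a tangled configuration, simply
# because tangled states vastly outnumber coiled ones
print(f"fraction of time tangled: {tangled_steps / STEPS:.4f}")
```

Nothing in the loop forbids landing on a coiled state; it just almost never happens, which is exactly the probabilistic flavor of the argument.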

So to get to the part I don’t understand. It appears to me that everything here is probabilistic. There is nothing preventing a knotted headphone cable from randomly forming into a nicely coiled state. It’s just very unlikely. And it’s even more unlikely that it would continue to stay in that state. But it’s not impossible. It could just stay in that state forever (however unlikely).

So what gives? It seems like entropy can decrease, it’s just unlikely to do so. I don’t expect to collect my Nobel Prize for shattering all of physics any time soon. So there must be something I’m missing here.

I know I should let Chronos handle this, but here goes. I’m not sure if a coiled cord is a good example of entropy or not. But it took energy to make it coiled, and that energy came from you. And if somehow it started out messy and ended up coiled by walking around, you are still adding energy by walking.

And your energy came from eating beef and grain producing heat and energy and that beef and grain needed tractors to plow and harvest, machinery to process and refrigerators to cool it, trucks to deliver it, on and on. And all that accumulated energy is now GONE. Dumped into a cord that you just had to have neatly coiled. All that resulted in a huge increase in entropy vs the small decrease from coiling it.

But I bet the low-entropy state of the cord is lying in a straight line rather than coiled. After all, it is probably trying to straighten itself; you are forcing it to be coiled. OK, now someone who really knows the answer can come along.

Dennis

No, you aren’t missing anything. It is possible, just very, very, very unlikely, that enough random perturbations in a system will lead to a lower entropy state.

Another thing to keep in mind is the number of interactions involved. It is possible to flip a coin ten times and get ten heads in a row. It is also possible, but much less likely, to get a thousand heads in a row. It is also possible to flip a coin once a second, every second, since the Big Bang, and have it come up heads every time. But not very likely.
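Working in log10 makes those coin-flip odds easy to compare. A quick sketch (the 13.8-billion-year age of the universe is a rough round figure):

```python
from math import log10

def log10_prob_all_heads(n):
    """log10 of the probability that n fair-coin flips all come up heads."""
    return -n * log10(2)

print(f"10 heads in a row:   10^{log10_prob_all_heads(10):.1f}")
print(f"1000 heads in a row: 10^{log10_prob_all_heads(1000):.1f}")

# roughly 4.4e17 seconds have elapsed since the Big Bang (~13.8 billion years)
seconds = 13.8e9 * 365.25 * 24 * 3600
print(f"heads every second since the Big Bang: 10^{log10_prob_all_heads(seconds):.3g}")
```

Ten heads is about one chance in a thousand; a thousand heads is already down around 10^-301, and a flip-per-second-since-the-Big-Bang streak has an exponent in the hundreds of quadrillions. Possible, never forbidden, just absurdly unlikely.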

Regards,
Shodan

You can have a closed system (i.e. no heat or work moving across system boundary) with a fixed amount of energy, and it could exist in any of a number of different entropy levels. Example: a hot brick next to a cold brick, no heat exchange with the rest of the universe. With hot brick hot and cold brick cold, this is a low-entropy condition; you could extract mechanical work from this configuration using a heat engine. But if you wait a while the hot brick will warm the cold brick until they are at the same temperature. This is a high-entropy condition, and you cannot extract useful mechanical work from it with a heat engine.
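The two-brick picture can be sketched with the classic Ehrenfest urn model, a standard textbook stand-in rather than real brick physics: share some energy quanta between two bricks, and at each step let one randomly chosen quantum hop to the other brick, as a crude proxy for random atomic collisions.

```python
import random

random.seed(1)

# Ehrenfest-urn toy model: 1000 energy quanta shared between two bricks.
# Each step, one quantum chosen uniformly at random hops to the other
# brick -- a crude proxy for random atomic collisions.
hot, cold = 1000, 0  # all the energy starts in the "hot" brick

for _ in range(100_000):
    if random.randrange(hot + cold) < hot:
        hot, cold = hot - 1, cold + 1  # the chosen quantum was in the hot brick
    else:
        hot, cold = hot + 1, cold - 1  # the chosen quantum was in the cold brick

# after many steps the split hovers near 500/500: thermal equilibrium,
# with small random fluctuations around it
print(hot, cold)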

So now use this isothermal pair of bricks to pose the OP’s question: what prevents a particular random occurrence of atomic collisions from resulting in one brick spontaneously becoming hotter than the other, resulting in a reduction of entropy in this closed system and once again allowing a heat engine to produce work?

Nothing prevents it; it could happen. It might take a few billion years, but it could happen. It is also possible that both bricks will randomly heat up to the same temperature, so that no work could be produced. It is also possible that it never happens until the heat death of the universe.

I have even heard theories that, after the heat death of the universe, everything sits around until things randomly fluctuate back into a low-entropy state and the Big Bang happens again.

Regards,
Shodan

The energy is not going anywhere. Energy is conserved.

Entropy tends to increase in closed systems because there are more configurations of states that have higher homogeneity (reduced gradients).
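That counting argument is easy to check directly. Take N particles, each free to sit in the left or right half of a box, and count the microstates for each macrostate "k particles on the left" with a binomial coefficient:

```python
from math import comb

# N particles, each free to sit in the left or right half of a box;
# the "macrostate" is how many particles are on the left.
N = 100

even_split = comb(N, N // 2)  # most homogeneous macrostate: 50 left, 50 right
all_left = comb(N, N)         # maximal gradient: all 100 on the left -> 1 way

print(f"50/50 split: {even_split:.3e} microstates")
print(f"all on one side: {all_left} microstate")
print(f"ratio: {even_split / all_left:.1e}")
```

Even at a measly 100 particles, the homogeneous macrostate has about 10^29 times as many microstates as the all-on-one-side macrostate, which is why a random shuffle almost always lands you in (or near) the homogeneous one.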

I’ve given my answer the last 2 times the question arose. This time, I’ll just mention Isaac Asimov’s answer, which can be found on the 'Net by Googling “The Last Question by Isaac Asimov.”

I feel like maybe I’m misinterpreting something in the 2nd law of thermodynamics (or maybe there is some semantic thing I’m missing). The first line of the wiki on the 2nd law of thermodynamics is “The second law of thermodynamics states that the total entropy of an isolated system can never decrease over time…”. That seems like it’s incorrect. It would appear that entropy can decrease. It’s insanely unlikely, but there is an enormous difference between insanely unlikely and not possible. I feel like when you read about decreasing entropy it is treated as something like exceeding the speed of light, in that it’s completely impossible, i.e., total entropy monotonically increases.

Maybe the interpretation is that if you’re in the system you cannot force it into a lower entropy state. But that lower entropy state could come about randomly.

For a coiled headphone cable, three verys might be a marginally forgivable understatement of how unlikely a complete recoiling would be. For a glass of ice forming from room-temperature water, you need a whole lot more verys. Even all of Carl Sagan’s famous uses of the word “billions” over his lifetime aren’t enough verys for how unlikely it would be. And the glass of water is a very very very very very very very very very very very small fraction of the observed universe at any given picosecond. Figure out how many verys you need for the whole universe. Don’t try the math at home.
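You actually can try a piece of the math at home, in logs. This is pure back-of-envelope: take ~10^25 molecules as a round number for a glass of water, and pretend each one must independently do the "right" thing with probability 1/2 for the glass to spontaneously freeze (a gross oversimplification, but it shows the scale):

```python
from math import log10

# Back-of-envelope only: ~1e25 molecules in a glass of water, and
# pretend each must independently do the "right" thing with
# probability 1/2 for the glass to spontaneously freeze.
n_molecules = 1e25
log10_p = -n_molecules * log10(2)

# The *exponent* of the probability is itself astronomical (~ -3e24):
print(f"probability ~ 10^({log10_p:.3g})")
```

The probability isn’t 10^-25; it’s roughly 10^(-3×10^24). The exponent alone dwarfs the number of atoms in the observable universe, which is the real content of all those verys.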

Underline mine. The entropy can decrease in a short period, but give the system a bit longer and entropy will increase again. Your confusion seems to me to stem from missing the underlined bit (“over time”): the 2nd law does not state that entropy can never decrease at all, but that given enough time it will not decrease (it doesn’t even say that it will always increase: a given system will have an entropy maximum).

Insufficient data for meaningful answer?

(My emphasis added.)

Douglas Adams may be relevant here:

Why is it restricted to short time scales? What forces entropy to eventually increase on long time scales but not short ones? From my understanding, entropy increasing is probabilistic. So if we start in some low-entropy state, there is a very high likelihood that it will transition into a higher-entropy state. But there is a very small chance that it just stays in that low-entropy state. Or it could even transition into a lower-entropy state. I don’t see any reason it couldn’t do this indefinitely. As stated before, the probability of that happening is indescribably unlikely, but it’s not impossible.

I kind of feel like the second law is really ‘Entropy will probably increase’. Maybe I’m just nitpicking, but the law has so much weight in physics that it feels a little odd for it to have such a weak point to make (i.e., improbable vs. impossible).

Stackexchange got the same question: Is there any proof for the 2nd law of thermodynamics?

The short answer is: it’s complicated and depends on some technical assumptions. You should click on the link there about Boltzmann’s H Theorem. But the discussion is mostly in English.

First, I want to comment on the heat death of the universe. It is asymptotic: the amount of free energy (energy available for work) decreases but never reaches zero. One consequence is that anything that can happen now could happen at any future time; it just takes much longer as the free energy declines.

My second point is that infinity is a loooong time, and anything that can happen in a finite time will almost certainly happen eventually. Including rolling a googol of heads in a row. In particular, a big bang producing a system of minimum entropy.

I think you do not quite grasp the magnitude of the probabilities involved. Like, is there anything keeping you from tossing heads on a fair coin 10[sup]10[sup]10[/sup][/sup] times in a row? No? Then try it, and see if mysterious physical forces keep you from making progress “indefinitely”.

The difference between “short” and “long” here is largely the difference between “finite” and “infinite”. Like, you could argue that the existence of life and a climate on this planet is essentially a big low-entropy pocket, one that’s lasted billions of years, but ultimately a pocket on an infinite time scale.

Now, for every second you add onto your calculation, the chance of it being lower entropy than the starting point is much smaller, so “short” vs “long” is also in some sense true there given the magnitude of the probabilities, but here the short/long distinction is largely with regard to infinities, or at best, cosmic time scales.

First, I want to note that you’ve slightly modified your description of the second law here, as opposed to what you put in your first post. Most notably, here you said “isolated system” whereas initially you said “closed system.” It should indeed be “isolated” rather than “closed.” A closed system, while not allowing for the transfer of matter in or out, does allow for the transfer of energy. You can decrease the entropy of a closed system through energy transfer with another system.

Second, maybe don’t think of it so much as “can’t” as, “on all but the most trivial of time scales, it has been observed that entropy of an isolated system increases, to the point that we can take it as a fundamental law of physics, but of course all physical laws are subject to change, just as soon as new evidence, counteracting thousands of years of observation and experimentation, arises to contradict it, at which point we may have to revise the model.”

We have two different hypotheticals floating around here. In an isolated system, two bricks could get further apart in temperature; it’s just fantastically unlikely. But an isolated brick or glass of water can’t change temperature by itself, because that would create or destroy energy, so that’s actually impossible.

We could not even define the temperature of a brick or glass of water if things did not work the way they do with objects reasonably quickly coming to thermal equilibrium.

PS: there may be a typo where my stack of 10’s posted above has one extra layer; a glass of water has approximately, let’s say, 10[sup]25[/sup] molecules. But the conclusion stands: once it is at equilibrium, don’t hold your breath watching for time to run backwards and entropy to spontaneously reverse itself.