Is the second law of thermodynamics routinely violated?

First, the equilibrium evolution you presented satisfies your request and contradicts step 8. I must also point out that the two statements I quoted are not equivalent - I can create an example that contradicts step 8 in which entropy increases do not, overall, occur more often than entropy decreases. And again I must object to the verbiage “occur more often”, because we are only talking about a single time step, not prolonged observation of a system. Continual observation of a system will necessarily show that entropy is periodic, no matter the particular microscopic evolution.



```
+ -> + -> + -> + ->
+ -> * ->
* -> x -> * -> x ->
```


I’ve added an extra microstate consistent with the low-entropy macrostate. I have also split the evolution into three cycles (three lines in the code box); at the end of each cycle it starts anew at the first microstate on the line. You can see that there are 5 microstates consistent with a high-entropy macrostate, 3 microstates consistent with an intermediate-entropy macrostate, and 2 microstates consistent with a low-entropy macrostate. Thus the system complies with step 7.

If we were sure the system is currently in an intermediate-entropy macrostate, then we have a 2/3 chance of observing a decrease in entropy at the next instant. There is only a 1/3 chance of observing an increase in entropy at the next instant. Higher entropy means more homogeneity, and so it would seem this particular evolution directly contradicts step 8 above.
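If it helps, here is a small sketch (the naming is mine) that just tabulates the code box above: it counts the microstates per macrostate and the one-step transitions out of the intermediate-entropy microstates.

```python
from collections import Counter

# The three cycles from the code box; the last microstate of each line wraps
# around to the first. '+' = high-, '*' = intermediate-, 'x' = low-entropy.
cycles = [
    ['+', '+', '+', '+'],
    ['+', '*'],
    ['*', 'x', '*', 'x'],
]
rank = {'x': 0, '*': 1, '+': 2}

# Microstates per macrostate: 5 high, 3 intermediate, 2 low.
print(Counter(s for cycle in cycles for s in cycle))

# Starting from an intermediate-entropy microstate, what does the next step do?
outcomes = Counter()
for cycle in cycles:
    for i, state in enumerate(cycle):
        if state == '*':
            nxt = cycle[(i + 1) % len(cycle)]
            if rank[nxt] > rank[state]:
                outcomes['increase'] += 1
            elif rank[nxt] < rank[state]:
                outcomes['decrease'] += 1
            else:
                outcomes['unchanged'] += 1
print(outcomes)  # 2 decreases vs. 1 increase, i.e. 2/3 vs. 1/3
```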

That is not to say I’m making the connection between step 8 and step 9.

~Max

And since the further steps are just the same as the first step, with the same evolution law, it holds for those, as well.

Agreed. If there is no way to either go to a higher- or lower-entropy state—if nothing ever changes, in other words—then there are indeed not more ways to go to a higher-entropy state.

But that doesn’t impact the overall conclusion, since the second law still holds if nothing changes. I could have formulated that more stringently, but I am still hoping that these examples are accepted in the spirit they’re given.

After all, while you can of course come up with all sorts of contrivances such that a box full of white and black balls doesn’t mix when it’s shaken—perhaps the balls are glued into place, or are magnetic, or maybe weigh a ton each—the intent, and the conclusion it leads to, isn’t really threatened by such nitpicking: it’s perfectly clear that if I hand you a box with white balls on the one side and black balls on the other, and you shake it, the balls will mix.

‘Occur more often’ here means in repetitions of the same experiment: i. e. we set the system up in the same macrostate, and observe what happens.

It’s irrelevant, but the same holds true for prolonged observation, as well: after all, we can take each further step as the first one of a new experiment.

Sure, but only if we could observe the system for arbitrary lengths of time. That, however, we can’t, being finite beings and all. And for any even slightly macroscopic system, the length of time we’d need to observe it in order to have even a tiny chance at observing an entropy reversal far exceeds the age of the universe.

Sure. For these tiny systems, you can set up certain pathological cases. As I pointed out earlier myself:

But again, this doesn’t impact the basic point. For a larger system, there will be more than twice as many intermediate-entropy states as there are low-entropy states; for the coins, for example, there are (100 choose 1) = 100 next-to-minimal entropy states, and (100 choose 2) = 4950 next-higher entropy states, and so on.
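Those counts are easy to reproduce, if you care to; a quick sketch (k being my shorthand for, say, the number of heads):

```python
from math import comb

N = 100
for k in range(4):
    print(k, comb(N, k))
# 0 1        (the single minimal-entropy state)
# 1 100      (next-to-minimal)
# 2 4950     (next higher)
# 3 161700   (and so on, growing steeply towards k = 50)
```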

And yes, even for macroscopic systems, the difference between the number of maximum-entropy states (the number of microstates corresponding to them, I should always say) and the number of almost-maximum entropy states will not be so great. So indeed, there will be fluctuations away from equilibrium, which will typically be so small as to be undetectable.

But I’m not aiming at mathematical proof here. That’s readily available in a multitude of textbooks. I want to make it intuitively clear to you that the reason all the gas in a room doesn’t bunch up in the left corner isn’t because there’s a law that states that ‘gas atoms must always tend to the greatest possible dispersal’—which would introduce a weird form of teleology and downward-causation—but rather, because there’s so many more ways to be evenly distributed rather than bunched up, and hence, any change to a bunched-up system is much more likely to lead to a less bunched-up system than the other way around.
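To be clear, the following isn’t any of the examples we’ve been using; it’s just the standard Ehrenfest urn toy model, which I offer as one way of watching that counting argument play out: N particles in a box, and at each step one particle, picked at random, hops to the other half.

```python
import random

random.seed(0)          # just for reproducibility
N = 1000
left = N                # start fully 'bunched up' in the left half
history = []
for step in range(20000):
    # the hopping particle sits on the left with probability left/N
    if random.random() < left / N:
        left -= 1
    else:
        left += 1
    history.append(left)

# The occupancy drifts towards ~N/2 and then just jitters around it; a
# spontaneous return to the bunched-up state is overwhelmingly unlikely.
print(history[0], history[-1], min(history[-1000:]), max(history[-1000:]))
```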

A teapot, once shattered, will not reassemble itself, no matter how much you shake the parts—even though that is theoretically possible, and one could contrive laws of physics that privilege ‘teapotness’ as a state of certain assemblies in order to have it spontaneously reassemble.

But note that even in your example, for the majority of (macro)states—‘on average’ in a suitable sense—the second law will hold; and for the majority of evolutions—again, ‘on average’—it will hold for all states.

Also note that the violation will be short-lived: after a first-step reduction, we will with certainty observe an increase, again.

So, for the typical universe (with typical laws of physics), and for typical states, we will, typically, observe an entropy increase for large enough systems. Claiming that it’s not so needs at least an argument as to why our universe shouldn’t be typical.

A more sophisticated treatment of this sort of thing would go into the notion of ergodicity, and when and how it’s justified. I can’t provide that here, not to the level of detail necessary to satisfy your curiosity.

So, if you’re not willing to accept a little waffling on the notion of ‘typical’, and to agree that, provided we’re in a typical world, we’ll typically see entropy increase, then I’m afraid nothing short of a full course in statistical mechanics will suffice. There’s only so much you can simplify without becoming flat wrong, and I’ve at least skirted that edge so far; so I’m afraid you’ll have to take one horn of the dilemma—either accept a certain degree of appeal to intuition, like that a box full of black and white marbles will mix if stirred, thus obtaining an intuitive and easily apprehended picture of how the second law works and how it can be violated, or insist on a full-on formal treatment, thus needing the corresponding full-on formalism of statistical mechanics to go along with it.

Try as I might, I don’t think there’s a road through the middle here—I can’t both be perfectly accurate and keep this manageably simple. If that’s what you require, I’m sorry; but I think if you’re willing to work with me just a little, and accept certain obvious, but, I agree, not rigorously proven statements—like that black and white balls tend to mix upon being shaken—I think you can achieve a much better understanding of the second law.

So does that mean you’re retracting your earlier admission of that point?

Thermodynamic entropy is proportional to the logarithm of the total number of states: log(2[sup]N[/sup]) = N log(2), in any base (this is all up to a constant anyway). The thing to notice is that you have a system composed of independent subsystems, and the entropy of the total system is the sum of the entropies of the subsystems.
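A toy illustration of that additivity, with made-up subsystem sizes (two independent collections of coins, say):

```python
from math import log2

W1, W2 = 2**10, 2**15        # microstate counts of two independent subsystems
S1, S2 = log2(W1), log2(W2)  # entropies, in units where the constant is 1
S_total = log2(W1 * W2)      # the joint system has W1*W2 microstates
print(S1, S2, S_total)       # 10.0 15.0 25.0 -> the log turns products into sums
```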

I did say you should check the calculation yourself. Use Chernoff’s inequality or something similar. The resulting bounds are more interesting if you take N = 100 or 1000 instead of 2 or 3.
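For what it’s worth, here is a rough sketch of such a check; I’m using the Hoeffding form of the bound as a stand-in for whichever inequality you prefer, and the function names are my own.

```python
from math import comb, exp

def exact_tail(N, d):
    """Exact P(|#heads - N/2| >= d) for N fair coins."""
    return sum(comb(N, k) for k in range(N + 1) if abs(2 * k - N) >= 2 * d) / 2**N

def hoeffding_bound(N, d):
    """Chernoff/Hoeffding-type bound: P(|#heads - N/2| >= d) <= 2*exp(-2*d*d/N)."""
    return 2 * exp(-2 * d * d / N)

for N in (100, 1000):
    d = N // 10  # a deviation of 10% of the coins
    print(N, exact_tail(N, d), hoeffding_bound(N, d))
# The bound captures the exponential decay in N: already at N = 1000, a 10%
# deviation from the 50/50 macrostate is astronomically unlikely.
```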

This is all just going through the definition of entropy in terms of complexity (maybe it would be more instructive to analyse a non-ideal gas, or a solid, or the 2-D Ising model, or something). These microscopic calculations all have to do with explaining how behaviour like heat spontaneously flowing from a hotter body to a colder one, but not the other way around, arises. Some hypothetical long-term recurrence, or lack thereof, doesn’t really have anything to do with it.

I think you are wrong about this. Let’s take the tiny three-state system with evolution_4:

If the system starts out in low-entropy macrostate[SUB]L[/SUB], the probability of observing an increase in entropy at the next step is 100%. The probability of observing a net increase in entropy after two steps is 0%. Given a random number of steps n, the probability of observing a net increase in entropy over n steps is 100% if n is odd and 0% if n is even.

For comparison, if the system starts out in high-entropy macrostate[SUB]H[/SUB], the probability of observing a decrease in entropy at the next step is 50%. The probability of observing a net decrease in entropy after two steps is 0%. The probability of observing a net decrease in entropy over n steps is 50% if n is odd and 0% if n is even.

The takeaway is that with this system, the probability of observing a net decrease in entropy over an even number of steps is exactly the same as the probability of observing a net increase in entropy - 0%. If the number of steps observed is random, there is a 50% chance that net entropy will not change at all.
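For concreteness (since I haven’t reproduced evolution_4 here), take the following as one evolution consistent with the behaviour just described — microstates a (macrostate[SUB]L[/SUB]), b and c (macrostate[SUB]H[/SUB]), evolving deterministically as a → b, b → a, c → c — and check the claims:

```python
evolution = {'a': 'b', 'b': 'a', 'c': 'c'}   # a reconstruction, not a quote
entropy = {'a': 0, 'b': 1, 'c': 1}           # L = 0, H = 1, in arbitrary units

def net_change(start, n):
    state = start
    for _ in range(n):
        state = evolution[state]
    return entropy[state] - entropy[start]

# From L (microstate a): certain increase for odd n, never a net change for even n.
print([net_change('a', n) for n in range(1, 7)])       # [1, 0, 1, 0, 1, 0]

# From H (b or c, equally likely): for odd n, a decrease half the time;
# for even n, never a net decrease (or increase).
for n in (1, 2, 3):
    print(n, [net_change(s, n) for s in ('b', 'c')])   # 1 [-1, 0], 2 [0, 0], 3 [-1, 0]
```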

I’m willing to move past steps 8 and 9 for the sake of argument. After all, you haven’t revealed (or rather, I haven’t considered) your full argument as to how the second law of thermodynamics is regularly violated. I have yet to see how this business about macrostates has any relevance to the second law of thermodynamics.

I think I am being consistent in saying that it is possible for the observer to observe only increases in entropy, and therefore formulate a law to the effect that ‘entropy only increases’ or ‘greyness only increases’. I can take this position without admitting that the observer necessarily or even likely observes only increases in entropy.
I do appreciate your participation in this thread, and I am willing to drop my objections to the marbles example. The conclusion from that example was never controversial, and is in fact identical to the conclusion from the coin-flipping example. I only took issue with the steps taken to reach said conclusion.

~Max

Conceded (emphasis mine).

~Max

My point was merely that the laws of physics don’t change for the next ‘hop’ in the evolution, so to speak. So that for sufficiently large systems, with many billions of microstates, you will typically observe a continuous increase of entropy.

For there to be a law, I don’t think ‘possible’ would work. After all, laws aren’t just formulated on the basis of single-shot observations; rather, experiments are repeated to exclude the possibility of chance events.

So (as, really, always in this thread), we’ll have to look at what’s more likely. And here, the possibility that one observes 100 coin throws yielding heads in a row doesn’t impact on the fact that anybody who merely observes aggregated coin throw results will not see such an outcome.

Great! I think we have come a really long way, here. Note how you’ve been opposed to the possibility that a law, formulated at the macroscopic level and assumed to be perfectly valid there, may receive a deeper explanation at the microscopic level, where it becomes clear that the law isn’t, in fact, inviolable. This law concerned a macroscopic quantity—greyness—that doesn’t directly map to the microscopic properties of the system (the marbles are either black or white), but is explained by them.

Now, what remains to do is to convince you that this sort of thing is what actually happened with respect to the second law. That is:
[ol]
[li]Entropy was first assumed to be a macroscopic quantity of a system.[/li][li]Entropy can be explained in terms of the microscopic, by connecting it with the notion of how many microstates there are per macrostate.[/li][li]A formulation of the second law is ‘heat does not spontaneously pass from a colder to a hotter body’.[/li][li]The distribution of heat can be explained by microscopic transport phenomena.[/li][li]A transport of heat from the colder to the hotter body is a transition from a state with more microstates per macrostate to one with fewer.[/li][li]Hence, the second law may be violated if the system evolves against likelihood, from a higher-entropy to a lower-entropy state.[/li][/ol]

Do you agree that that’s roughly what needs to be established?

If the next goal is to admit that the second law of thermodynamics can be violated, conceding each of those points would satisfy the goal.

We can start at the top, point #1. I think I agree, but it depends on what you mean by “a macroscopic quantity of a system”. If you mean that my definition of entropy makes it a property of the whole system, we are in agreement.

~Max

I mean a quantity that’s a function of the macrostate of the system; that is, a function of state variables like temperature, volume, and pressure, as opposed to the microscopic variables like (generalized) positions and momenta of individual particles.

That means, in this definition, it’s only meaningful in the context where these quantities are meaningful. Temperature, for example, is just an average property, only applicable to the macroscopic world. Entropy, thus, is likewise; and, since we can define temperature in terms of microscopic quantities, we can do so for entropy, as well.

Very well. But how do you explain #2?

What does the count of an isolated system’s microstates have to do with entropy? If the system is isolated there cannot be any [DEL]change in[/DEL] heat flow external to the system δQ. I will admit that heat and temperature can be functions over the microstate of a system, for example the count of particles and the sum of particle velocities respectively. But that doesn’t involve the count of microstates, and due to the first law of thermodynamics it won’t change over time. Besides, if δQ is zero, it doesn’t matter what value T has because the change in entropy will still be zero.

~Max

I’m not completely sure, but I think maybe you’re getting hung up on the definition of a ‘system’. But ultimately, a ‘system’ just is what you consider to be a system. I can think of two volumes of gas as two systems, or as a single one. Just consider the classic thermodynamic case of a box with a partition that can be inserted and removed frictionlessly, i. e. without performing any work. As long as the partition is present, we will have two isolated systems; once it is removed, we can think of it either as one system, or as two, no longer isolated from each other.

Now suppose that the gas volumes have different temperatures. Let’s say, volume A is hotter, volume B is colder. Then, once you’ve removed the partition, heat will (tend to) flow from A to B, until both systems are in equilibrium with one another. This is the second law from one perspective.

Alternatively, we can think of the gas after the partition has been removed as one system in a state that’s far from equilibrium—with one side of it hotter, the other colder. The move towards equilibrium is one towards a more homogeneous, more ‘disordered’ state—a move of the sort we’ve by now examined copiously: towards higher entropy in the sense of transitioning to a more likely state.

The fact that the entropy of the total system can’t decrease yields the fact that no heat can be transferred from the colder system to the hotter one. If an amount of heat dQ flows into B, its entropy will be increased by, as you know, dS[sub]B[/sub] = dQ/T[sub]B[/sub]. Likewise, via an amount of heat dQ flowing out of A, the entropy is decreased, by dS[sub]A[/sub] = -dQ/T[sub]A[/sub].

The total change in entropy of the combined system is then:

dS = dS[sub]B[/sub] + dS[sub]A[/sub] = dQ/T[sub]B[/sub] - dQ/T[sub]A[/sub]

Now, for T[sub]A[/sub] = T[sub]B[/sub], that’s zero; but in that case, also no heat can get transferred. If T[sub]A[/sub] > T[sub]B[/sub], we have a positive dS, and likewise, heat flowing from the hotter to the colder system.

If, however, T[sub]A[/sub] < T[sub]B[/sub], dS would be negative, and heat would flow from the colder to the hotter system. Consequently, if heat flows from the colder to the hotter system, the total entropy of the combined system decreases, and vice versa—if heat flows from the hotter to the colder system, total entropy increases.
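A quick numerical check of that sign argument, with made-up temperatures and an arbitrary unit of heat:

```python
def dS_total(dQ, T_A, T_B):
    """Entropy change of A + B when heat dQ flows out of A and into B."""
    return dQ / T_B - dQ / T_A

print(dS_total(1.0, 300.0, 200.0))   # A hotter:  positive (~ +0.0017)
print(dS_total(1.0, 200.0, 300.0))   # A colder:  negative (~ -0.0017)
```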

Thus, we can state the second law in two equivalent ways:
[ol]
[li]Heat only flows from a hotter (A) to a colder system (B)[/li][li]The total entropy of a system (A + B) never decreases[/li][/ol]

The second formulation of the second law is the one we will connect with the notion of microstates. The thing to realize here is simply that heat is like greyness: there are more microstates corresponding to states of equally distributed heat than there are corresponding to states of inhomogeneous heat distribution.

For the moment, I’ll leave you merely with the intuitive basis of that claim: there are, simply put, more ways of having ‘fast’ and ‘slow’ gas atoms (black and white marbles) evenly distributed in the box (leading to an overall intermediate average speed—greyness), than there are ways of having all the ‘fast’ atoms in part A and all the ‘slow’ ones in part B. Thus, starting out with such an ‘uneven’ distribution, we would expect, given the prior discourse, to observe, typically, an evening out of the distribution of heat—which entails, thus, heat flowing from the hotter to the colder part, purely on the same basis as in the marble-example, where ‘darkness’ flows from the blacker part to the lighter one, to yield an overall grey result.
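If you’d like to see that claim as bare numbers, here’s a crude toy count of my own devising: 2n atoms, n of them ‘fast’ and n ‘slow’, with n atoms in each half of the box; we count the arrangements for a given number k of fast atoms in half A.

```python
from math import comb

n = 10
for k in (0, n // 2, n):
    ways = comb(n, k) * comb(n, n - k)   # choose the fast and slow occupants of half A
    print(k, ways)
# k = 0  (all fast atoms in B):     1 way
# k = 5  (evenly mixed):        63504 ways
# k = 10 (all fast atoms in A):     1 way
```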

I’m going to stop here for the moment, though, because experience teaches me you’re not going to accept this simply at face value (on the other hand, if you feel inclined to do so, please don’t feel obliged to further your inquiry just for my sake!). So let’s see what we bump into now.

Hold on, that’s impossible. You can’t insert or remove a partition without performing any work.

This I agree with, notwithstanding the above objection.

Perhaps that is your perspective on the second law of thermodynamics, but my perspective is that the temperature of A and B will forever fluctuate and will never reach thermodynamic equilibrium. The temperature of the composite system A ∪ B, on the other hand, never changes. But that is the first law of thermodynamics, not the second.

[quote=“Half_Man_Half_Wit, post:250, topic:833151”]

Alternatively, we can think of the gas after the partition has been removed as one system in a state that’s far from equilibrium—with one side of it hotter, the other colder. The move towards equilibrium is one towards a more homogeneous, more ‘disordered’ state—a move of the sort we’ve by now examined copiously: towards higher entropy in the sense of transitioning to a more likely state.

The fact that the entropy of the total system can’t decrease yields the fact that no heat can be transferred from the colder system to the hotter one. If an amount of heat dQ flows into B, its entropy will be increased by, as you know, dS[sub]B[/sub] = dQ/T[sub]B[/sub]. Likewise, via an amount of heat dQ flowing out of A, the entropy is decreased, by dS[sub]A[/sub] = -dQ/T[sub]A[/sub].

The total change in entropy of the combined system is then:

dS = dS[sub]B[/sub] + dS[sub]A[/sub] = dQ/T[sub]B[/sub] - dQ/T[sub]A[/sub]

Now, for T[sub]A[/sub] = T[sub]B[/sub], that’s zero; but in that case, also no heat can get transferred. If T[sub]A[/sub] > T[sub]B[/sub], we have a positive dS, and likewise, heat flowing from the hotter to the colder system.

If, however, T[sub]A[/sub] < T[sub]B[/sub], dS would be negative, and heat would flow from the colder to the hotter system. Consequently, if heat flows from the colder to the hotter system, the total entropy of the combined system decreases, and vice versa—if heat flows from the hotter to the colder system, total entropy increases.

Thus, we can state the second law in two equivalent ways:
[ol][li]Heat only flows from a hotter (A) to a colder system (B)[/li][li]The total entropy of a system (A + B) never decreases[/li][/ol][/QUOTE]

All agreed, except perhaps that work can induce a net heat flow from a colder body to a warmer body, and that you should have written “an isolated system” under #2.

Conceded.

Also conceded.

And… you lost me. :frowning:

~Max

If some combined system A ∪ B is not at equilibrium, then it does not have a well-defined temperature except perhaps locally. If you let a cold gas and a warm gas combine, then they will indeed irreversibly mix, and even if you keep them separated by a partition that only allows heat flow they will still come to thermal equilibrium, resulting in a net increase in entropy. Any infinitesimal “fluctuations” or hypothetical Poincaré recurrence are irrelevant to this very classical picture of equilibrium thermodynamics.

I said ‘frictionlessly’. Granted, it’s an idealization, but it’s a common one in thermodynamics thought experiments (most variations on Maxwell’s demon feature it, for example). It’s simply a way of saying that the internal energy isn’t changed by the process.

As DPRK has already pointed out, if the system isn’t in equilibrium, it doesn’t have a well-defined temperature. So saying of a ‘forever fluctuating’ system that its temperature never changes is difficult to interpret.

It’s also in tension with the second law. Equilibrium is that state in which no energy- or mass-transport happens internally in the system; that’s a state in which you can’t take heat from one part of the system, and move it to another. For that to be the case, the temperature in each part must be the same, and entropy can’t be further increased.

OK. So you agree with the following:

[ul]
[li]The evolution of a system, typically, tends toward macrostates realized by a higher number of microstates (the conclusion of the marbles-example).[/li][li]There are more microstates realizing an even than an uneven distribution of heat.[/li][/ul]

But you disagree with:

[ul]
[li]The evolution of a system will, typically, tend towards macrostates corresponding to a more even distribution of heat.[/li][/ul]

To me, that seems to follow naturally from the former two, so I think I’ll have to ask you to elaborate where you see tension.

I guess you are right, unless we define temperature as a function of (theoretically) countable microscopic variables. I should have said the average temperature of the system, that is, (T[SUB]A[/SUB]+T[SUB]B[/SUB])/2, does not change.

What makes you think the particles in an isolated system will irreversibly mix or that two halves of a system must come to permanent thermal equilibrium, and why doesn’t Poincaré recurrence disprove that assertion?

~Max

You still need work to move the partition, even if there is no friction, because at some point a force caused the displacement of the partition. To state otherwise is to imply a violation of Newton’s first law of motion.

That is not what I meant by thermodynamic equilibrium. By your definition the only systems in thermodynamic equilibrium have a temperature of absolute zero. I mean thermodynamic equilibrium with the surrounding systems - no net flow of energy or mass external to the system.

That was not the conclusion I made from the marbles example. This was:

~Max

The specific issue I take is not here, but in the next step, where you say this third statement is equivalent to the second law. The second law of thermodynamics does not say or imply that a system will typically tend towards macrostates corresponding to a more even distribution of heat.

~Max

I just noticed this statement, which is false. Consider this system of inequalities:

0 ≤ Ta < Tb
0 < Q
0 < Q/Tb - Q/Ta

Let’s plug in Ta = 1, Tb = 2, and Q = 1.

0 ≤ 1 < 2
0 < 1
0 < 1/2 - 1/1
0 < -1/2

Uh oh!

~Max

Again, it’s a common idealization. We need not be distracted by that; what it means is that no work is done on the volume of gas.

No. A system, on my definition (i. e. that of thermodynamics), is in equilibrium if there are no internal transport phenomena. This is precisely the case if it has a homogeneous temperature, which typically won’t be zero.

This includes your definition. If two bodies, A and B, are in equilibrium with each other, then the combination, (A + B), is in equilibrium as I have put it.

It would be tremendously helpful if you could state where you believe there is a variance. To me, these:

[ul]
[li]There are more ways of realizing a state that’s pretty uniformly grey, than there are to realize a state that’s (say) all white in box A, and all black in box B.[/li][li]Consequently, there are more ways to go from a state that’s slightly inhomogeneous to one that’s more homogeneous, than there are ways to go to a state that’s even more inhomogeneous.[/li][li]To any observer who, as before, is only capable of seeing gross colors, the formerly black-and-white separated box will gradually tend to a shade of even gray.[/li][li]That observer might formulate a law, stating that black and white, if brought into contact, eventually even out to a uniform gray.[/li][/ul]

Straightforwardly imply this:

[ul]
[li]The evolution of a system, typically, tends toward macrostates realized by a higher number of microstates[/li][/ul]

Otherwise, why should an observer observe, typically, an increase of greyness, if the system doesn’t, typically, tend towards macrostates realized by a higher number of microstates (which are the ‘more grey’ ones)?

But anyway, this doesn’t seem to be the real issue.

No, but it does say that heat only flows from a hotter to a colder body. Connecting these was the point of the two volumes of gas: if heat flows from the hotter volume to the colder one, then the entropy of the combined system increases; the entropy of the combined system increases because it evolves towards a state realized by a greater number of microstates (a ‘more even’ distribution of heat). So if (A + B) has a positive change in entropy, then that’s because heat has flowed to the colder one.

In that example, since T[sub]A[/sub] is the colder one, heat has flowed from the colder to the hotter system (since, according to your equation, A loses heat, and B receives heat). The equation for A gaining heat Q, and B losing heat Q, is

dS = dS[sub]A[/sub] + dS[sub]B[/sub] = dQ/T[sub]A[/sub] - dQ/T[sub]B[/sub],

which as you can see will come out positive.

It does not include my definition; rather, my definition of equilibrium includes yours. A system with inhomogeneous internal temperature can still be in equilibrium with its surroundings.

We stepped over this issue just yesterday, and it still seems unnecessary to resolve this disagreement between us.

Entropy only changes under your definition. Under my definition the entropy of an isolated system does not change as a consequence of internal heat flow.

I had switched Ta and Tb appropriately, but reordering the terms makes no difference; it is still a contradiction.

0 ≤ Tb < Ta
0 < Q
0 < Q/Ta - Q/Tb

Let’s plug in Ta = 2, Tb = 1, and Q = 1.

0 ≤ 1 < 2
0 < 1
0 < 1/2 - 1/1
0 < -1/2

Therefore the classical definition of entropy is materially different from the statistical definition of entropy, and the two are not interchangeable.

~Max

That can only ever apply to parts of the universe, though, since the universe as such doesn’t have any surroundings. That weakness doesn’t exist on the thermodynamic definition of equilibrium.

If it’s what makes you unable to see how the increase of entropy entails the flow of heat from hotter to colder objects, it’s exactly central to the issue we’re discussing.

Well then, your definition is just wrong—if A emits a quantity of heat to B, and A is hotter than B, the combined isolated system A + B will suffer a net increase of entropy, as demonstrated above.

The first one says that A is hotter than B, but the third one says that B loses heat, which is gained by A. That this yields a negative entropy change isn’t surprising, as it’s a process that violates the second law of thermodynamics.

The thermodynamically allowed process is given by A, being the hotter one, losing a quantity dQ of heat; therefore, its change in entropy must be dS[sub]A[/sub] = -dQ/T[sub]A[/sub]. Conversely, B, the colder system, gains a quantity dQ of heat, leading to an entropy change dS[sub]B[/sub] = dQ/T[sub]B[/sub]. The combined entropy change is then
dS = dS[sub]A[/sub] + dS[sub]B[/sub] = -dQ/T[sub]A[/sub] + dQ/T[sub]B[/sub]
which, for T[sub]A[/sub] > T[sub]B[/sub], is necessarily greater than zero.
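For instance, with the numbers from your own example (T[sub]A[/sub] = 2, T[sub]B[/sub] = 1, dQ = 1), this comes out to dS = -1/2 + 1/1 = +1/2 > 0, as it should.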