Is the second law of thermodynamics routinely violated?

I continue to feel that my claim …
Unlike the Laws of Newton, Maxwell and Einstein, the Second Law isn’t a dynamical law; it’s just a statistical fact, closely akin to the Law of Large Numbers.
… is correct, and should have quieted this bickering! :slight_smile:

Everything.

(Google the difference between Helmholtz’s and Gibbs’ free energy for the details.)

But I disagree. Why is the second law but a statistical fact? I asserted and continue to assert that there have been no theoretical or observed violations of the second law of thermodynamics, as defined in the three primary sources in the original post. None of those formulations involve entropy, although Clausius in a corollary said entropy can never decrease. But he was using dS=dQ/T, not free energy or microstates.

So far the only violations of the second law have been violations of Clausius’s corollary, using different definitions of entropy. And it seems to me that said definitions of entropy also imply the possibility that the laws of Newton, Einstein, and Maxwell can also be violated, although the probability is vanishingly small.

~Max

No. If you do the coin example experiment a couple hundred times, you might find that the law holds. It is not even a given that the law will probably hold.

I agreed that the law could be formulated but I did not agree that the steps between 5 and 18 of that example were valid, logical steps. I never agreed with your logic past step 5, because you missed a very important premise. See my post on dice logic, [POST=21631856]#150[/POST].

~Max

Very well, so the observer can still distinguish between the color of the left box and the color of the right box after you have removed the wall separating them. I still assume he can only see the solid color of each box - he sees the entire left box as one color, and the entire right box as one color too. In that case, I predict he will likely see a never-ending fluctuation of monochromatic colors. It is possible but highly unlikely that he will see a solid color in either box, unchanging over time. More probably, the color of each box will usually be a middle shade of grey, constantly flickering from lighter to darker tones. Eventually both boxes will return to their original black and white, if only for a moment.

~Max

Let’s say we have a number line from negative ten to positive ten, where negative ten is black, zero is grey, and ten is white. On this number line is a point whose initial position is random, and whose position at every time step is random. The absolute value of the point constitutes the entropy of the system you describe.

We can see now that a point on the leftmost position, -10, has a 19/21 probability of becoming “more grey” or “less entropic” at the next step, and a 2/21 probability of remaining at the same distance from grey or of having no change in entropy. So does a point on the rightmost position. Here are the other values (you may need to switch forum themes at the bottom left of the page):


   Position to Entropy    
Pos | Less  | Same |  More
-10 | 19/21 | 2/21 |  0/21
- 9 | 17/21 | 2/21 |  2/21
- 8 | 15/21 | 2/21 |  4/21
- 7 | 13/21 | 2/21 |  6/21
- 6 | 11/21 | 2/21 |  8/21
- 5 |  9/21 | 2/21 | 10/21
- 4 |  7/21 | 2/21 | 12/21
- 3 |  5/21 | 2/21 | 14/21
- 2 |  3/21 | 2/21 | 16/21
- 1 |  1/21 | 2/21 | 18/21
  0 |  0/21 | 1/21 | 20/21
  1 |  1/21 | 2/21 | 18/21
  2 |  3/21 | 2/21 | 16/21
  3 |  5/21 | 2/21 | 14/21
  4 |  7/21 | 2/21 | 12/21
  5 |  9/21 | 2/21 | 10/21
  6 | 11/21 | 2/21 |  8/21
  7 | 13/21 | 2/21 |  6/21
  8 | 15/21 | 2/21 |  4/21
  9 | 17/21 | 2/21 |  2/21
 10 | 19/21 | 2/21 |  0/21

Total all of those probabilities together and we can see that after one instant, there is a 200/441 probability that the system will move to a less entropic state, a 41/441 probability that the entropy will not change, and a 200/441 probability that entropy will increase. In percentages, that’s about 45% for entropy to decrease, 9% to stay the same, and 45% to increase.
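Those totals can be double-checked with a short script. This is a sketch of exactly the model described above: a point on the integer line from -10 to 10 whose next position is uniformly random, with |position| playing the role of entropy.

```python
from fractions import Fraction

positions = range(-10, 11)  # the 21 positions on the number line
n = len(positions)

# Classify every equally likely (current, next) pair by whether the
# "entropy" |next| is less than, equal to, or greater than |current|.
less = same = more = Fraction(0)
for cur in positions:
    for nxt in positions:
        if abs(nxt) < abs(cur):
            less += Fraction(1, n * n)
        elif abs(nxt) == abs(cur):
            same += Fraction(1, n * n)
        else:
            more += Fraction(1, n * n)

print(less, same, more)  # 200/441 41/441 200/441
```

Exact fractions avoid any floating-point rounding in the tallies; the three results match the 200/441, 41/441, 200/441 figures above.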

And this is after assuming the equiprobability of microstates at each measure of entropy, which I do not wish to assume.

~Max

But the models of statistical mechanics assume stochastic or chaotic behavior, thus the laws of statistical mechanics are actually heuristics, not laws. The second law of thermodynamics, as formulated using the statistical definition of entropy, has been violated by Half Man Half Wit’s own citation. This is not to detract from the utility of statistical mechanics, which is much easier to calculate.

There are three (or four) forms of the second law of thermodynamics which are not heuristic, and these are the ones I was taught in school and cited in the first post. So in order to disprove those laws, you would need to show a contradiction. That’s what this thread is about.

~Max

Not a given in the sense that it’s not a 100% certainty, that I agree with. But it’s effectively certain in the sense that the probability of observing a violation can be made arbitrarily small, with a suitable choice of numbers.

Or, in other words, consider the following. You let a ball drop a hundred (a thousand, a million…) times. It always falls down. From this, you formulate a law: stuff falls down.

Now suppose that whatever deity has created the universe has made it so that the actual law is: stuff falls down, except once every sextillion times, when it just hovers in place.

You’ve got no data to support that the law is actually the latter, and never will observe any. Thus, you still formulate the law as ‘stuff falls down’. You’re completely justified in doing so; however, you happen to be wrong.

That’s the situation we’re in here: whenever we try, we will find greyness increasing with overwhelming probability. That is, in any concrete, reasonably large series of trials, we won’t observe a violation. Thus, we formulate a law to the effect that greyness always increases. This law stands on equal footing with every other physical law: it’s a generalization from finitely many observations.

There’s no missing premise. I have explained how we can say that the probability is 1/6, without having to assume it: because one out of every six possible evolutions of the die from arbitrary initial conditions ends with it showing any given number.

Take a coin. Suppose you can only throw it in two different ways—way A and way B. Way A always lands heads up; way B always lands tails up. You don’t have control over whether you’ve thrown it according to way A or way B (random initial conditions, you remember). Then, the probability that it comes up heads is 50% on each throw. No further assumptions necessary.

Suppose now you can throw the coin in 20 different ways, 10 of which come up heads, 10 of which come up tails. This yields the same conclusion. As does supposing that there are 100, or 1000, and so on different ways. What matters is that from the set of possible initial conditions, half of them yield heads, and half of them yield tails.
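A minimal sketch of this counting, with the "ways of throwing" modeled as equally likely labeled initial conditions, half of which deterministically land heads:

```python
def heads_probability(n_ways):
    """Fraction of equiprobable throwing-ways that land heads.

    Ways 0 .. n_ways//2 - 1 always land heads; the rest always land
    tails.  Each way is fully deterministic; only the choice of way
    is equiprobable.
    """
    outcomes = ['H' if way < n_ways // 2 else 'T' for way in range(n_ways)]
    return outcomes.count('H') / n_ways

# The answer does not depend on how many deterministic ways there are:
for n in (2, 20, 100, 1000):
    print(n, heads_probability(n))  # probability is 0.5 in every case
```

No randomness enters the code at all: the 50% falls out of pure counting over initial conditions, which is the whole point of the example.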

Yes, this is basically right. And for a large enough number of marbles, what’ll happen is that the observer will observe a transition to uniform grey in every case they do the experiment, since the likelihood of the colors separating will decrease with the number of marbles. Thus, at some point, if they do the experiment 10, or a hundred, or a 1000, or a billion times, they won’t observe a violation of the law ‘the system tends to greyness’ with any appreciable probability. Agreed?

This is in contradiction to the system as I set it up. The position at every time step is not random; rather, it is determined (exactly and absolutely) by the microstate corresponding to the position of the previous time step.

You assume that entropy gain and loss are equally likely. You get out that entropy gain and loss are equally likely. This isn’t surprising.

Bolding mine. That sentence is an assumption in and of itself, not implied by the rest of the paragraph. You have assumed that the probability of throwing way A or throwing way B is 50%. You are allowed to make that assumption, but you must acknowledge that you made it to complete your argument.

~Max

I think we understood each other, right up until you said “in any concrete, reasonably large series of trials, we won’t observe a violation.” That’s not true. You would have to say we probably won’t observe a violation, and you can drop the rest of that sentence. But this is a minor issue of semantics. The person formulating the law does not think so, and that is why his law is flawed.

~Max

If by ‘the system tends to greyness’ you mean “black and white, if brought into contact, eventually even out to a uniform gray”, I disagree: the probability of a violation is exactly 100%. The colors of the two boxes never “eventually even out to a uniform gray”, and our observer will notice a violation if he is allowed to observe the full period of the system. In the likely case that at least one marble from either box moves into the other box, one or both boxes will always be changing color. There can never be equilibrium at any particular shade of grey. If you assume equal aggregate velocities of marbles in each box, the system should cumulatively spend half of its period widening the color gap and half of its period shrinking the color gap between its two halves.

If by ‘the system tends to greyness’ you mean the disparity in color between the left and right boxes will always decrease over time, again this is disproven with 100% probability so long as the observer has enough time to watch the system.

If by ‘the system tends to greyness’ you mean ‘after removing the wall, each box will usually be some shade of grey as opposed to absolute black or white’, I have no qualms. But that is not a law.

You did not prove to my satisfaction that, in any random configuration of marbles moving around in a box, entropy is more likely to increase than decrease over time. Without considering the effects of our assumptions about entropy gain and loss, mine is on equal footing with yours.

~Max

But we had already settled that issue, I thought:

Are you no longer fine with this? And, presuming that you are, do you agree that then, the probability of the coin comes out to 50% for each possibility? And likewise for the die?

Well, I could drag the extra verbiage through every statement I make, but I think it’s good enough to say ‘we won’t observe any violation’ instead of ‘for any reasonable time scale, the probability of observing a violation is as small as we care to make it, by increasing the system size’. Because the outcome is the same: if you repeat the experiment some reasonable amount of times, you are astronomically unlikely to ever make an observation contradicting the law of increasing greyness, and thus, you will believe it holds.

You’re right to point out that this is, ultimately, wrong; but the observer has no way to know that, without having the microscopic theory of greyness.

Sure. But a full period, for any reasonably macroscopic system, is going to be fantastically huge. So you’re not going to observe anything of the sort.

We’re talking about generalizations made from actually feasible observations. Given this, while there is an astronomically small probability of actually observing a violation of the law of increasing greyness, the far more probable course is going to be that, during the tens or hundreds or thousands of times that the experiment is repeated, no violation is observed, and thus, the law has the status of any other physical law ever formulated.

But not on any observable level. The observer has eyes that aren’t significantly better than a human being’s, so, while, again, you’re right in principle, these deviations are not going to be observed. Remember, laws are formulated based on actual observations. And, with overwhelming probability, any actual observation is going to be the system evolving to an equal grey and staying that way, at least, for sufficiently large numbers of marbles.

That is proven simply by the fact that there are more states of high greyness than there are states of uneven color distribution. For if that’s the case, and the microdynamics is reversible, then, if you take a state of intermediate greyness, there will be more ways (‘more’ meaning here ‘astronomically many more’) to evolve to a state of higher greyness than to evolve to a state of lower greyness. Thus, whenever you find the system in a state of intermediate greyness (i. e. uneven color distribution), then, with astronomic likelihood, the state at the next timestep will be one of higher greyness.

Take my introductory example. Here’s again the distinguishable macrostates, together with the number of their microscopic realizations:

[ul]
[li](A1B1C1): 6[/li]
[li](A2B1C0): 3[/li]
[li](A2B0C1): 3[/li]
[li](A1B2C0): 3[/li]
[li](A0B2C1): 3[/li]
[li](A1B0C2): 3[/li]
[li](A0B1C2): 3[/li]
[li](A3B0C0): 1[/li]
[li](A0B3C0): 1[/li]
[li](A0B0C3): 1[/li]
[/ul]

No matter what the microscopic dynamics are, each of the three microstates realizing, say, (A2B1C0) has 6 states corresponding to macrostates of higher entropy it can evolve to, 17 states of equal entropy, but only 3 states of lower entropy. Does that help clear things up?
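For what it’s worth, the list and the 6/17/3 split can be reproduced by brute-force enumeration. This sketch models the system as three distinguishable marbles, each placed in one of three boxes A, B, C (so (A2B1C0) means two marbles in A, one in B, none in C); any model with 27 microstates and these macrostate tallies gives the same counts.

```python
from collections import Counter
from itertools import product

BOXES = 'ABC'

# A microstate assigns each of the three marbles to a box: 3^3 = 27.
microstates = list(product(BOXES, repeat=3))

def macro(state):
    """Macrostate = occupancy tally, e.g. ('A', 'A', 'B') -> (2, 1, 0)."""
    tally = Counter(state)
    return tuple(tally.get(box, 0) for box in BOXES)

sizes = Counter(macro(s) for s in microstates)
print(sorted(sizes.values(), reverse=True))
# [6, 3, 3, 3, 3, 3, 3, 1, 1, 1]

# The 6/17/3 split for a microstate realizing (A2B1C0):
ref = (2, 1, 0)
higher = sum(sizes[m] for m in sizes if sizes[m] > sizes[ref])
equal = sum(sizes[m] for m in sizes if sizes[m] == sizes[ref]) - 1  # minus itself
lower = sum(sizes[m] for m in sizes if sizes[m] < sizes[ref])
print(higher, equal, lower)  # 6 17 3
```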

If you are studying these marbles, once you have an ergodic Markov chain, then it will satisfy the asymptotic equipartition property; this is a mathematical result in information theory. You can compare the entropy rate of the forward and time-reversed process and it will satisfy the fluctuation theorem. At this point, you are talking about mathematical theorems, like the Law of Large Numbers, so there really isn’t anything open to interpretation unless one wants to philosophize about it like Rosencrantz and Guildenstern.

I stand by both of my statements. In the bolded statement you claimed the probability of a coin throw is 50% each time. That is not in any way implied by the initial state of the coins. Besides, neither throw A nor throw B takes into account the previous state of the coin, because both throws always give heads or tails respectively. You haven’t told me how A is chosen vs B, but you assume the probability is 50%.

I think we’re agreed on the subject of phenomenological laws. You haven’t convinced me that the observer is likely to see only increasing greyness, but if that’s all he saw he could very well formulate a law of increasing greyness. I take no issue with the observer’s logic.

If we limit violations to observed violations rather than theoretical violations, I will concede the insurmountable difficulty in falsifying a law if violations are actually so improbable as to make observation unrealistic. But I do not yet concede the improbability of observing decreasing greyness.

If you are to limit the observer’s sight in such a way that he can only distinguish between almost black, almost white, and everything-else-is-grey, and if I was to assume significant enough differences in color for the observer to notice are rare (which I do not concede yet), I would concede that the observer can in fact observe grey over time despite microscopic fluctuations. He will assume the left and right boxes are equally grey not because they are, but because his senses are so dull as to fail to recognize the difference and constant fluctuation of lighter and darker shades in one box then the other. But I have not yet conceded the underlying premise.

No, having more potential states with one property does not imply higher probability for ‘evolution’ towards a state with said property. Simply having more states says nothing about probability. You have not given me a basis for a probability distribution and, quite to the contrary, you denied that the distribution is random. In fact you say the underlying dynamics are deterministic. How do you come to the conclusion that a system in a state of intermediate grayness will probably evolve into a state of higher greyness? It seems like you are pulling a postulate from thin air.

~Max

What is an “ergodic Markov chain”? The internet definitions I found assume some sort of stochastic behavior, whereas neither Half Man Half Wit nor I have backed down from the premise that microscopic dynamics are deterministic.

~Max

Huh? Sure it is! If the initial state can be either A or B with 50% probability (equiprobability of initial states, remember!), A always yields heads, and B always yields tails, then each throw yields heads with 50% probability.

Again, that’s just going from what you said: initial states can be chosen equiprobably.

Theories are built on observation, so of course, the latter must be what we start with.

I’m really having trouble believing you’re sincere here. I am not restricting the observer’s vision, or anything. Rather, that he only sees shades of grey, so to speak, is for the same reason that you only see shades of grey in pictures like the ones here. If the density of black and white pixels were slightly different (say, a black pixel added here and there), you would not observe any difference. If there are regions with a higher density of black pixels, the gray there is darker; if there are regions with whiter pixels, they are brighter.

That’s what our experimenter sees: exactly what you would see.

Of course. The number of possible states to evolve into determines the number of possible evolutions.

Let’s take the smallest possible change of something like the coin system. Take one pixel, and flip its color. Suppose you start with an all-black state. Then, flipping one pixel’s color will make the system more grey with certainty: every other possible state is one with higher ‘greyness’.

Then, take the resulting state, and again, flip one pixel: if you flip any pixel other than the one you’ve flipped before, you will again move towards a state of higher greyness. And so on: there will be more states of higher greyness to flip to, until you’ve reached a 50/50 distribution of black vs. white pixels. If you flip a pixel there, you will get to a state that’s ever so slightly (but undetectably) less grey. The next flip will return you to the equilibrium with a probability of 50% (actually, very slightly more than that, since there is now either one more black or white pixel).

Now, the important thing to realize is that this doesn’t depend on the microdynamics; in particular, it doesn’t assume that this dynamics is random. Rather, this is a conclusion that applies to generic deterministic and reversible dynamics.

Think again back to the original, all-black system: any possible microscopic evolution will lead to a more grey state. After that initial change, still almost every possible evolution will lead to a more grey state. And so on, up until you reach equilibrium. There, half of all evolution laws will lead to a less grey state; but, even for that half that lead away from equilibrium, most will return quickly to it, with some holding out longer, and only one making it all the way to the all-white state, before going back.
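As an illustration only (the argument above is about deterministic dynamics, so treat this as a stand-in): even the crudest random dynamics, flipping one uniformly chosen pixel per step, shows the same pull toward greyness.

```python
import random

random.seed(0)  # fixed seed, so the run is reproducible
N = 1000
pixels = [0] * N  # 0 = black, 1 = white; start all black

white_fraction = []
for step in range(20000):
    pixels[random.randrange(N)] ^= 1  # flip one uniformly chosen pixel
    white_fraction.append(sum(pixels) / N)

# Far from 50/50, almost every flip increases greyness; near 50/50 the
# fraction just hovers.  Compare the early climb to the late plateau:
print(white_fraction[99], white_fraction[-1])
```

With 1000 pixels the stationary fluctuations around 1/2 are of order 1/sqrt(1000), roughly 3%, which is exactly the sort of "ever so slightly (but undetectably) less grey" excursion described above.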

In conclusion, as long as you don’t take care to set up a very special evolution of the system (a point I had stressed from the beginning, you will recall), any generic microdynamics will, at any point in the evolution of the system, tend to increase the overall greyness with overwhelming probability.

If you want to consider the first throw part of the initial state, that’s fine by me. But I still don’t understand how it follows that the second throw has 50% probability. I assume the deterministic process that decides whether to throw A or B, the process we don’t know about, depends on the currently showing side of the coin. Otherwise this whole example of throwing a single coin has no relevance to the point you were trying to make - proving a law that says “the [theoretical] system will always become more grey over time”.

Very well.

But we don’t know that downward fluctuations in entropy are small and fast. They could just as well be large and slow. And besides, I can observe the flickering of an LCD display with any number of optical devices. On older monitors I can pick up the flickering with the naked eye. I have at my disposal a number of tools which can distinguish very slight differences between colors in photographs.

This absolutely depends on microdynamics. Is picking “any pixel other than the one you’ve flipped before” “until you’ve reached a 50/50 distribution of black vs. white pixels” a rule? How do you determine which “other” pixel to flip, is it actually random? After you reached 50/50 distribution, how do you determine which pixel to flip then? Are you implying the flips after that are picked at random? If not, how can you assign a 50% probability of evolving towards equilibrium?

It does not follow that the system will evolve towards a more grey state simply because there are more ways to do so than to do otherwise. There could be 999 ways to evolve to a more grey state and one way to do otherwise, but it does not follow that the system will do so or is even likely to do so. You are missing a premise.

~Max

I was under the impression you were discussing processes that randomly mix up marbles and wanted to understand why they inevitably get scrambled rather than unscrambled.

The only randomness I am aware of in the marbles hypothetical is the initial arrangement of marbles in each box. That’s why I am so confused as to how Half Man Half Wit gets his probabilities.

~Max

It’s not part of it, as such, but it determines it completely.

There’s really no sense to considering a second throw in this example. As the system only has two available states, it has completed a full cycle after the first throw. But consider a coin with four possible initial states (N.B.: those don’t correspond to ‘heads up’, that’s the macrostate; rather, you can consider these states as ‘ways of making the initial throw’), A and B, which yield heads and tails respectively on the first throw, and C and D, which do likewise. However, they’re distinguished by the fact that both A and B will yield heads on the second throw, while C and D yield tails on the second throw. This upholds the stated determinism, but the second throw has, no matter the outcome of the first, a 50% probability of coming up heads.
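Spelled out, a sketch of exactly that four-state coin:

```python
from fractions import Fraction

# Four equiprobable 'ways of making the initial throw'; each fixes both
# throws deterministically, as (first throw, second throw).
WAYS = {
    'A': ('H', 'H'),
    'B': ('T', 'H'),
    'C': ('H', 'T'),
    'D': ('T', 'T'),
}

outcomes = list(WAYS.values())
p_second_heads = Fraction(sum(w[1] == 'H' for w in outcomes), len(outcomes))
print(p_second_heads)  # 1/2

# Even conditioned on the outcome of the first throw, the second throw
# is still 50/50:
first_heads = [w for w in outcomes if w[0] == 'H']
p_cond = Fraction(sum(w[1] == 'H' for w in first_heads), len(first_heads))
print(p_cond)  # 1/2
```

Again, everything is deterministic except the equiprobable choice among the four ways; the 50% for the second throw is pure counting.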

Sure, but I would ask you to take the hypothetical in the spirit it was proposed, and not try to pointlessly fight it—you’re again trying to use where you think this will end up to fight the argument you think I’m making. But you’re just getting sidetracked, arguing irrelevancies, so I would ask you to just take what I initially stated—an experimenter observing the system in such a way as to only be able to distinguish gross greyness, in the same way you’re only able to distinguish gross greyness in the pixel images with only the naked eye and a monitor of sufficiently high resolution and quality.

No. You only need to count.

Take my original system. It has 27 microstates. We can represent the microstate by means of a 27-dimensional vector, that has a ‘1’ in the row corresponding to the microstate, and a ‘0’ everywhere else, like this:



          (0)
          (0)
          (.)
          (.)
          (.)
     S[sub]i[/sub] = (1)
          (.)
          (.)
          (.)
          (0)


Where the dots stand for the omitted entries, and the index i numbers the states from 1 to 27, such that i always just gives the row where the ‘1’ appears. We can order the states as before, with the corresponding microstates to each macrostate listed:

[ul]
[li](A1B1C1): S[sub]1[/sub] - S[sub]6[/sub][/li]
[li](A2B1C0): S[sub]7[/sub] - S[sub]9[/sub][/li]
[li](A2B0C1): S[sub]10[/sub] - S[sub]12[/sub][/li]
[li](A1B2C0): S[sub]13[/sub] - S[sub]15[/sub][/li]
[li](A0B2C1): S[sub]16[/sub] - S[sub]18[/sub][/li]
[li](A1B0C2): S[sub]19[/sub] - S[sub]21[/sub][/li]
[li](A0B1C2): S[sub]22[/sub] - S[sub]24[/sub][/li]
[li](A3B0C0): S[sub]25[/sub][/li]
[li](A0B3C0): S[sub]26[/sub][/li]
[li](A0B0C3): S[sub]27[/sub][/li]
[/ul]

Any given evolution law of this system will be a 27 x 27 - matrix M, which takes one 27-dim vector to a different 27-dim vector by ordinary matrix multiplication:

S[sub]j[/sub] = M*S[sub]i[/sub]

(That is, the system starts out in S[sub]i[/sub], is evolved for one timestep, and ends up in S[sub]j[/sub].)

The matrix evolves S[sub]i[/sub] into S[sub]j[/sub], if and only if the i-th entry in its j-th row is ‘1’.

This matrix must obey some constraints. First, any given state can only evolve into one other state (this is determinism). Consequently, the matrix can only have one entry different from 0 in each column.

Likewise, only one state can evolve into any given state (this is reversibility: each state must have a unique precursor). Consequently, there can be only one non-zero entry per row, as well.

Thus, all the valid laws of evolution for the system take the form of a matrix M such that each row and column have exactly one entry equal to 1, and the rest 0 (there are thus 27 nonzero entries in the matrix).

Now, we can count how often each of these evolutions will increase vs. decrease entropy. Take first the upper left 6 x 6 - submatrix: any entry here will only ‘mix’ the maximum-entropy states among themselves, thus yielding constant entropy.

Then, take the 18 x 18 - submatrix ‘in the middle’. This mixes the intermediate-entropy states, thus likewise not leading to entropy increase. Same for the 3 x 3 - submatrix in the lower right corner, which mixes the min-entropy states.

Now take the lower left 3 x 24 - submatrix (that is, everything left over if you take the lower right 3 x 3 away). These are more interesting: they are entries that take any of the 24 states of nonminimal entropy and transform them into states of minimal entropy. These are, thus, entropy decreasing. Because of the properties of the matrix, there can be at most three such entries (one per row).

Then, take the left 18 x 6 - strip: these are the entries that take any of the 6 maximum entropy states to one of the 18 intermediate ones. These are, likewise, entropy decreasing; again, there can be maximally 6 entries here (but, if there are any entries here, we have to be careful not to double count: if one of the first six columns has its entry in this strip, that same column can’t also have an entry in the lower left 3 x 24 - submatrix, since this would entail a maximum entropy state transitioning both to an intermediate-entropy state and to a minimum-entropy one, in violation of determinism).

These are all the entries that may lower entropy: transitioning from a maximum entropy state to either an intermediate or minimum entropy state, or transitioning from an intermediate entropy state to a minimum entropy state.

Consequently—and here’s the kicker—any evolution law we can write down for the system can have at most nine entries that lead to a lowering of entropy, and must, consequently, have at least 18 entries that keep entropy constant or increase it. Hence, no matter what the microdynamics are—no matter which particular M we choose—it follows, just from the count of microstates, that it takes any given state to a lower-entropy one at most one out of three (9/27) times. Consequently, entropy increases or stays constant more often than it decreases.

And that’s all I’ve been saying.
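The whole counting argument is small enough to check by machine. This sketch represents an evolution law not as the matrix itself but as the permutation it encodes (perm[i] = j meaning microstate i evolves to microstate j, the same information as the one-‘1’-per-row-and-column matrix M):

```python
import random

random.seed(1)

# Entropy class of each of the 27 microstates, ordered as in the text:
# 6 maximum-entropy states, then 18 intermediate, then 3 minimum.
entropy = [3] * 6 + [2] * 18 + [1] * 3

def decreasing(perm):
    """Number of transitions i -> perm[i] that lower the entropy."""
    return sum(1 for i, j in enumerate(perm) if entropy[j] < entropy[i])

# Every permutation is a valid deterministic, reversible law; sample many:
worst = 0
for _ in range(20000):
    perm = list(range(27))
    random.shuffle(perm)
    worst = max(worst, decreasing(perm))
print(worst)  # never exceeds the 9-out-of-27 bound

# A law built to lower entropy as often as possible still tops out at 9:
greedy = (
    list(range(6, 12))     # the 6 max-entropy states -> intermediate
    + [24, 25, 26]         # 3 intermediate states -> minimum entropy
    + list(range(12, 24))  # 12 intermediate states stay intermediate
    + [0, 1, 2, 3, 4, 5]   # the rest go back up to maximum entropy
)
print(decreasing(greedy))  # 9
```

The sampling can only illustrate the bound; the proof is the row/column counting above. The `greedy` law shows the bound of 9 is actually attained.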

The second throw only has a 50% probability of coming up heads if we assume the probability of each state is equal for subsequent throws.

Conceded.

Matrix operations are just beyond my level of mathematics education, so it will take me a couple days to fully understand what you have written here. But it seems to me that this sentence need not be true: “Thus, all the valid laws of evolution for the system take the form of a matrix M such that each row and column have exactly one entry equal to 1, and the rest 0 (there are thus 27 nonzero entries in the matrix).”

Could not a row and a column both be all zeroes, for situations where some microstates are never visited? My intuition is that most systems do not visit every microstate.

~Max