Is the second law of thermodynamics routinely violated?

It’s not part of it, as such, but it determines it completely.

There’s really no sense in considering a second throw in this example: as the system only has two available states, it has completed a full cycle after the first throw. But consider a coin with four possible initial states (N.B.: these don’t correspond to ‘heads up’, which is the macrostate; rather, you can consider them as ‘ways of making the initial throw’): A and B, which yield heads and tails respectively on the first throw, and C and D, which do likewise. However, they’re distinguished by the fact that both A and B will yield heads on the second throw, while C and D yield tails on it. This upholds the stated determinism, but the second throw has, no matter the outcome of the first, a 50% probability of coming up heads.
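Just to make the tallying explicit, here’s a minimal Python sketch of that four-state coin. The state names and outcome table simply transcribe the example above; nothing else is assumed.

```python
# Deterministic four-state coin: each initial state fixes both throws.
# (The states A-D and their outcomes transcribe the example above.)
outcomes = {
    "A": ("H", "H"),  # heads on the first throw, heads on the second
    "B": ("T", "H"),  # tails first, heads second
    "C": ("H", "T"),  # heads first, tails second
    "D": ("T", "T"),  # tails first, tails second
}

# Whatever the first throw showed, exactly half of the initial states
# compatible with it give heads on the second throw.
for result in ("H", "T"):
    compatible = [s for s, (t1, _) in outcomes.items() if t1 == result]
    heads_next = [s for s in compatible if outcomes[s][1] == "H"]
    print(result, len(heads_next) / len(compatible))  # 0.5 in both cases
```

So the dynamics are fully deterministic at the level of the four states, yet the second throw is 50/50 regardless of the first.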

Sure, but I would ask you to take the hypothetical in the spirit it was proposed, and not try to pointlessly fight it—you’re again trying to use where you think this will end up to fight the argument you think I’m making. But you’re just getting sidetracked, arguing irrelevancies, so I would ask you to just take what I initially stated—an experimenter observing the system in such a way as to only be able to distinguish gross greyness, in the same way you’re only able to distinguish gross greyness in the pixel images with only the naked eye and a monitor of sufficiently high resolution and quality.

No. You only need to count.

Take my original system. It has 27 microstates. We can represent the microstate by means of a 27-dimensional vector, that has a ‘1’ in the row corresponding to the microstate, and a ‘0’ everywhere else, like this:



          (0)
          (0)
          (⋮)
S[sub]i[/sub] = (1)
          (⋮)
          (0)


Where the dots stand for the omitted zero entries, and the index i numbers the states from 1 to 27, so that i always just gives the row in which the ‘1’ appears. We can order the states as before, listing the microstates corresponding to each macrostate:

[ul]
[li]S[sub]1[/sub] - S[sub]6[/sub][/li]
[li]S[sub]7[/sub] - S[sub]9[/sub][/li]
[li]S[sub]10[/sub] - S[sub]12[/sub][/li]
[li]S[sub]13[/sub] - S[sub]15[/sub][/li]
[li]S[sub]16[/sub] - S[sub]18[/sub][/li]
[li]S[sub]19[/sub] - S[sub]21[/sub][/li]
[li]S[sub]22[/sub] - S[sub]24[/sub][/li]
[li]S[sub]25[/sub][/li]
[li]S[sub]26[/sub][/li]
[li]S[sub]27[/sub][/li]
[/ul]
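The bookkeeping is easy to verify mechanically; here’s a minimal sketch (the grouping below just transcribes the list above: one 6-fold maximum-entropy macrostate, six 3-fold intermediate ones, three minimum-entropy singlets):

```python
# Microstate indices grouped by macrostate, matching the list above.
macrostates = (
    [list(range(1, 7))]                                 # S_1 - S_6
    + [list(range(k, k + 3)) for k in range(7, 25, 3)]  # six groups of three
    + [[25], [26], [27]]                                # three singlets
)

multiplicities = [len(g) for g in macrostates]
print(multiplicities)       # [6, 3, 3, 3, 3, 3, 3, 1, 1, 1]
print(sum(multiplicities))  # 27
```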

Any given evolution law of this system will be a 27 x 27 - matrix M, which takes one 27-dim vector to a different 27-dim vector by ordinary matrix multiplication:

S[sub]j[/sub] = M*S[sub]i[/sub]

(That is, the system starts out in S[sub]i[/sub], is evolved for one timestep, and ends up in S[sub]j[/sub].)

The matrix evolves S[sub]i[/sub] into S[sub]j[/sub] if and only if the i-th entry in its j-th row is ‘1’.
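To make the rule concrete, here’s a minimal Python sketch using plain lists; the cyclic-shift matrix is an arbitrary example of mine, not anything singled out by the argument:

```python
N = 27

def one_hot(i, n=N):
    """S_i: a vector with 1 in row i (1-indexed), 0 everywhere else."""
    return [1 if row == i else 0 for row in range(1, n + 1)]

def apply(M, v):
    """Ordinary matrix-vector multiplication: (M v)_j = sum_i M[j][i] * v[i]."""
    return [sum(M[j][i] * v[i] for i in range(len(v))) for j in range(len(M))]

# Example: the cyclic shift M[j][i] = 1 iff j = i+1 (mod 27),
# which sends each S_i to S_{i+1}.
M = [[1 if j == (i + 1) % N else 0 for i in range(N)] for j in range(N)]

v = apply(M, one_hot(5))
print(v.index(1) + 1)  # 6: S_5 has evolved into S_6
```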

This matrix must obey some constraints. First, any given state must evolve into exactly one state (this is determinism). Consequently, the matrix has exactly one non-zero entry in each column.

Likewise, only one state can evolve into any given state (this is reversibility: each state must have a unique precursor). Consequently, there can be only one non-zero entry per row, as well.

Thus, all the valid laws of evolution for the system take the form of a matrix M in which each row and each column has exactly one entry equal to 1, and the rest 0; in other words, M is a permutation matrix (and thus has exactly 27 nonzero entries).
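The two constraints can be checked mechanically; a minimal sketch (the cyclic shift is again just an arbitrary example):

```python
def is_valid_evolution(M):
    """True iff M is a permutation matrix: only 0/1 entries, with exactly
    one 1 per column (determinism) and one 1 per row (reversibility)."""
    n = len(M)
    zero_one = all(x in (0, 1) for row in M for x in row)
    one_per_row = all(sum(row) == 1 for row in M)
    one_per_col = all(sum(M[j][i] for j in range(n)) == 1 for i in range(n))
    return zero_one and one_per_row and one_per_col

N = 27
# A cyclic shift satisfies both constraints...
shift = [[1 if j == (i + 1) % N else 0 for i in range(N)] for j in range(N)]
print(is_valid_evolution(shift))  # True

# ...but adding a second entry to a row and column breaks them.
bad = [row[:] for row in shift]
bad[0][0] = 1
print(is_valid_evolution(bad))  # False
```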

Now, we can count how often each of these evolutions will increase vs. decrease entropy. Take first the upper left 6 x 6 - submatrix: any entry here only ‘mixes’ the maximum-entropy states among themselves, thus keeping entropy constant.

Then, take the 18 x 18 - submatrix ‘in the middle’. This mixes the intermediate-entropy states among themselves (all of which have the same multiplicity), thus likewise keeping entropy constant. The same goes for the 3 x 3 - submatrix in the lower right corner, which mixes the minimum-entropy states.

Now take the lower left 3 x 24 - submatrix (that is, the bottom three rows, with the lower right 3 x 3 taken away). Its entries are more interesting: they take any of the 24 states of nonminimal entropy and transform them into states of minimal entropy. These are, thus, entropy decreasing. Because there is exactly one non-zero entry per row, there can be at most three such entries.

Then, take the left 18 x 6 - strip: its entries take any of the 6 maximum-entropy states to one of the 18 intermediate ones. These are, likewise, entropy decreasing; since each column holds exactly one entry, there can be at most 6 entries here. (But we have to be careful not to double count: a column whose entry sits in this strip can’t also have an entry in the bottom 3 x 24 - strip, since that would entail a maximum-entropy state transitioning both to an intermediate-entropy state and to a minimum-entropy one, in violation of determinism. So each of the first six columns contributes at most one entropy-decreasing entry, wherever in the column it lands.)

These are all the entries that may lower entropy: transitioning from a maximum entropy state to either an intermediate or minimum entropy state, or transitioning from an intermediate entropy state to a minimum entropy state.

Consequently (and here’s the kicker): any evolution law we can write down for the system can have at most nine entries that lead to a lowering of entropy, and must therefore have at least 18 entries that keep entropy constant or increase it. Hence, no matter what the microdynamics are (no matter which particular M we choose), it follows, just from the count of microstates, that the evolution takes any given state to a lower-entropy one at most one out of three (9/27) times. Consequently, entropy increases or stays constant more often than it decreases.
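The bound can even be saturated. The permutation below is my own choice (many others achieve the same count), using a macrostate’s multiplicity as a stand-in for its entropy: it sends the six maximum-entropy states to intermediate ones, three intermediate states to the three minimum-entropy ones, and cycles everything else back so the map stays one-to-one.

```python
# Multiplicity of each state's macrostate: 6 for the max-entropy block,
# 3 for the intermediate blocks, 1 for the minimum-entropy singlets.
def multiplicity(s):
    return 6 if s <= 6 else (3 if s <= 24 else 1)

# One permutation of {1..27} that maximises entropy-decreasing transitions:
succ = {}
succ.update({s: s + 6 for s in range(1, 7)})     # 1-6   -> 7-12  (decrease)
succ.update({s: s + 12 for s in range(13, 16)})  # 13-15 -> 25-27 (decrease)
succ.update({s: s - 6 for s in range(7, 13)})    # 7-12  -> 1-6   (increase)
succ.update({s: s - 3 for s in range(16, 25)})   # 16-24 -> 13-21 (constant)
succ.update({s: s - 3 for s in range(25, 28)})   # 25-27 -> 22-24 (increase)

assert sorted(succ.values()) == list(range(1, 28))  # really a permutation

decreases = sum(1 for s in succ if multiplicity(succ[s]) < multiplicity(s))
print(decreases, 27 - decreases)  # 9 18
```

Even this worst-case law lowers entropy only 9 times out of 27, and keeps it constant or raises it the other 18.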

And that’s all I’ve been saying.