I don’t know.
What does it mean to say, “The direction of increasing entropy sets the arrow of time”?
Why not say, “The arrow of time sets the direction of increasing entropy”?
What arrow of time?
Doesn’t matter.
Consider the concept of heat. The concept has been known for a long time, along with its properties, such as the fact that if you touch a hot thing with a cold thing, the hot thing will get colder and the cold thing hotter. But the underlying basis was unknown early on; for a while, heat was thought to be a kind of fluid that permeated matter.
We know now that that’s false, and that heat is just a measure of the kinetic energy of the particles. There’s no actual heat substance. People in the past didn’t know this, and yet when they talked about heat they meant exactly the same thing as we do today. So we can say now that when they talked about a hot iron, what they really meant was that its component atoms were jiggling furiously.
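(For the record, the textbook kinetic-theory relation, at least for an ideal monatomic gas, is that the average kinetic energy per particle is (3/2)·k_B·T, with T the absolute temperature and k_B Boltzmann’s constant. “Hotter” literally means “more jiggling, on average.”)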
The beauty of abstract notions like heat, entropy, energy and so on is that as soon as we have a better definition or explanation, it applies to everything that was said in the past, too. It doesn’t matter that they didn’t know it at the time.
So are you saying that if I develop a device that, in a closed system, moves energy from a cold region into a hot region, it automatically follows that I can make a device that makes me younger? It really seems to me that there is an intuitive concept of time that involves past, present, and future, but that scientists didn’t really have a way to quantify this, any more than they are able to quantify consciousness. However, they eventually found one quantity that isn’t symmetric in time and defined past, present, and future based on that. It’s like saying that in a purely Newtonian universe distance can’t exist, because we define length in terms of a universal speed of light.
I don’t know who you’re addressing. I made clear that I was speaking of the intensity (or even reversal) of the entropy arrow. “Time slowing” was introduced by Mr. Barnacle, but I think he’s clarified this as a very informal way to describe the dwindling of the causality arrow (“slowing of physical processes”) near equilibrium.
For more definiteness in this sub-thread, perhaps we should consider the Gold Universe. IIUC, Stephen Hawking once accepted the possibility of such a universe, in which time’s arrow reverses after an entropy maximum. He later rejected this possibility but, IIUC, did so reasoning just from statistics and the Big Bang boundary condition. But what if we suppose, for example, that there is a second boundary condition, the future Big Crunch?
What do you mean by a collection of particles behaving backwards, and why would that reverse time?
If you mean that a local patch of space has temporarily decreased its entropy, that is not the same thing as time reversing; it’s just a statistical blip. If you were in that patch of space, and it continued reversing entropy against tremendous odds, then you might get the perception that the space you are in is moving in reverse to the direction the rest of the universe is moving in, but that would stop as soon as you stopped compounding the statistical anomaly by continuing to reverse entropy. IOW, ridiculously unlikely, won’t last long, and would only give you the slight perception that this part of space is moving in reverse relative to the rest.
Put it this way: essentially, what you are asking is, if you take a deck of cards that is in order, and shuffle it until it is randomized, then, when the next shuffle puts the cards back into order, does that mean that we traveled back in time to before the cards were shuffled? (The only difference is that what you are asking about is astronomically more unlikely.)
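(For scale: a 52-card deck has 52! ≈ 8 × 10^67 possible orderings, so the chance of any given shuffle landing on one particular order is about 1 in 10^68.)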
I have no idea what you mean by this.
A cite that time doesn’t go backwards? I guess the fact that nobody has ever found an example, and that the giant headlines and Nobel Prizes for someone having done so don’t exist. Similar to invisible pink unicorns.
Of course you can make a machine that makes you younger. There’s no law of physics against it, and a great many medical researchers, now and throughout history, have devoted a lot of effort to trying to make one. That doesn’t mean that it’s easy, though.
I do not understand the refusal to indulge a hypothetical. I think it’s already been conceded up-thread that a physical system might be observed to lose entropy rapidly. The probability of that would be very small (approx. 0.00000000000000000000000…) but still non-zero. If we admit that possibility at least as a “thought experiment” and then read …
… it’s natural to ask: does this mean that the hypothetical low-probability reverse-evolving galaxy would have a reversed arrow of causality?
Sure. The only issue is that your anti-entropy device will also cause you to remember the future (where you’re younger) and anticipate the past (where you’re older). So you haven’t avoided the problem of looking forward to a life of decrepitude.
The physical “speed” of time doesn’t seem to relate to the thermodynamic one. Time moves at 1 s/s, no matter what. Now, if we slow down the entropic processes of, say, a person, then we can certainly imagine them perceiving time more “slowly” than others (i.e., they see the universe evolve more rapidly than otherwise). But that’s just a perceptual thing.
I don’t think it’s a problem to say that an object in total equilibrium doesn’t have an arrow of time. It looks the same in both directions; there’s no clock we can use to differentiate the two. However, once we’ve picked a direction, we can’t say that time moves at any old speed. EM radiation still moves at 1 c. We just can’t tell the difference between emitters and receivers.
Why not? We can imagine simpler problems. I hit a cue ball, it hits the 1, which hits the 2, which eventually slows down due to friction. What caused the 1 to move? The cue ball. The cue ball moved because I hit it with a stick, because my muscles pushed on it, because my brain… etc. In our reverse-entropy galaxy, the 2 ball started spontaneously moving due to the combined effects of thermal movement in the air and table. It then rolled across the table and hit the 1.
Cause and effect are somewhat dubious notions in the first place. Any physical action has to be consistent with the universe both in the past and the future. We put greater weight on the past because that’s what we remember and we think of the future as not having happened yet. But if there is indeed a boundary condition of some kind in the future, then stuff happening now is caused as much by that as it is by the low-entropy Big Bang.
I hope more of the Board’s physicists weigh in on this interesting question. This view seems consistent with Chronos’ view, though he hasn’t said so explicitly.
A second boundary condition might make the system overconstrained. OTOH, can’t it be argued that, à la Schrödinger’s cat, the present “Uncertainty” model is underconstrained?
You might run into a philosophical problem distinguishing between a reversed direction of causality, and something that just looks indistinguishable from a reversed direction of causality.
By way of analogy, imagine a moviemaker, producing a bit of computer-generated imagery for a movie. The director wants one scene to include someone bumping a coffee mug on a table, which then falls to the floor. Very well, we can create a model of the cup, and of the table, and of the person who bumps it, and of the laws of physics (including entropy) that govern all of them. Put all of those into a computer, along with some initial conditions about where the cup is originally and how it’s bumped, and it can generate very realistic imagery of the coffee cup sliding to the edge of the table and then falling to the floor.
But now, suppose that the director doesn’t just want the cup to hit the floor: He wants the cup to hit the floor with the logo on the cup right-side up, facing the camera. Most initial conditions won’t lead to that. So instead, you start the model with the cup hitting the floor just the way that the director wants it, and run the model backwards from there (including decreasing entropy), to figure out exactly what initial conditions you need to make the cup hit that way. Put it all together, and you end up with a movie of the cup starting off on the floor, jumping up into the air, and sliding onto the table and hitting the actor’s elbow. Which way is the causality actually running in that sequence?
And now imagine that we’re not in a movie any more, and through phenomenally improbable coincidences, entropy happens to locally reverse long enough for a coffee cup on the floor to go through those bizarre reversed motions. Which way is the causality there?
It might help to understand that entropy, fundamentally, is a measure of information. Yes, I know that you’ve heard it described as “disorder”, and that seems opposed to information, but really, it isn’t. Consider, for instance, a set of moveable type, like you’d use in a printing press. When it’s not being used, the printer keeps it all sorted away in a tidy little cabinet, with a drawer for each letter: All of the ‘a’s in one fairly large drawer, all of the ‘b’s in the smaller drawer next to it, then all of the ‘c’s, etc., all the way to a small drawer at the end for the ‘z’s. That’s a low-entropy state, and it carries very little information. Now imagine that the printer is making a newspaper: He takes those letters out of their nice neat drawers, and arranges them in particular ways, to tell people about a hurricane. Or about a celebrity getting married. Or about a declaration of war, or about humans landing on the Moon, or about any other newsworthy subject. Now, there’s a lot of information.
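One crude way to put a number on “how much does it take to describe the arrangement” (a toy of my own, not anything rigorous) is to run the arrangements through a general-purpose compressor and see how many bytes it needs. The sorted drawers squeeze down to almost nothing; the same letters jumbled at random need far more, and a real page of newspaper text would land somewhere in between. In Python:

import random
import string
import zlib

random.seed(0)  # arbitrary seed, just so the numbers are repeatable

# Low entropy: all the a's together, then all the b's, and so on (26 drawers of 100).
drawers = "".join(letter * 100 for letter in string.ascii_lowercase)

# High entropy: the same number of letters, pulled out at random.
pile = "".join(random.choice(string.ascii_lowercase) for _ in range(len(drawers)))

for label, text in [("sorted drawers", drawers), ("random pile", pile)]:
    compressed = len(zlib.compress(text.encode()))
    print(f"{label}: {len(text)} characters -> {compressed} bytes compressed")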
Is this an accurate description of information? It seems like it relies on some measurement about the state of the universe that seems arbitrary and/or difficult to unambiguously describe.
Information in your examples seems to be based on a mapping of the state of the letters to the humans observing the letters, or at minimum to the human that rearranged the letters. If the people speak a completely different language, so that they are unable to read the letters, then the amount of information is limited to the person that rearranged them (because clearly they can read what they just wrote).
But what if the original state of the letters, arranged neatly in groups and sorted, etc. actually maps perfectly to a detailed story in some bizarre language that most of the people on the planet read/speak? And rearranging for an English story is the same as randomly scattering them. In that case the grouped and sorted letters convey more information.
So, it seems like the amount of information must be some measure of the strength of the mapping of the state of part of the universe (i.e., the letters) to the state of a different part of the universe (i.e., observers’ brains, which contain compressed experiences related to letter sequences).
What appears as random letters for one set of people might appear as containing a lot of information to a different set of people and vice versa.
What am I missing?
What appears as random letters will contain a lot of information for speakers of all languages. Some of them might not find it to be very interesting information, but they’d all agree, more or less, on the amount of information.
In this thread about thermodynamic entropy, “information” refers to a simplistic mathematical measurement from coding theory. In that sense, redundancy (or “patterns”) is the opposite of information.
When applied to human communication, perception, or knowledge, information has a different meaning. Now the redundancy (patterns) may be essential to the “information.”
Languages may provide a nice demonstration of this difference. Two error-free computers might converse using an entropy-maximizing code. But a human baby would never be able to learn such a language. Human language requires both information (in the mathematical sense) and redundancy.
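If it helps to see the mathematical sense with actual numbers, here is a toy Shannon-entropy calculation (my own example, and it only counts single-character frequencies, which actually understates how redundant English is):

import math
from collections import Counter

def bits_per_character(text):
    # Shannon entropy of the observed character frequencies: H = -sum(p * log2(p))
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

english = "the rain in spain stays mainly in the plain and everyone there knows it"
uniform = "abcdefghijklmnopqrstuvwxyz"  # every letter exactly once

print(bits_per_character(english))  # noticeably below log2(26) ~ 4.7 bits: redundancy
print(bits_per_character(uniform))  # exactly log2(26): no redundancy at this level

The redundancy that drags the first number down is exactly the kind of pattern a learner can latch onto.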
Ok, thanks, that helps.
Hopefully, I don’t get in too much trouble for this, but this is how I think about entropy, information, and randomness. If this analogy is particularly problematic, I welcome criticism, as this is one I am coming up with on my own, and any criticism would help my own understanding, I believe.
Take a deck of cards. Forget the suits for a minute, and go ahead and number them sequentially, and then take ’em up to 100, a nice round number.
Start with them all in sequence. You can give the total information about the system in one statement: “Cards 1-100 in order.”
Start shuffling them around. For this, we are not going to just shuffle them all at once; instead, we will take the “1” card and give it a 10% chance of swapping with the card to its right. We go down the cards like that, “2”, “3”, … up to “100”. The card to the right of the last card is the first card, so if it swaps, it goes to the front…
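Here is roughly what I mean in Python, in case the rule is clearer as code. (I’m reading the rule as walking position by position; if you meant walking by card value, the loop changes, but the spirit is the same.)

import random

def one_pass(deck, swap_chance=0.10):
    # Walk the deck once: each card gets a 10% chance of swapping with the card
    # to its right. The card to the right of the last card is the first card.
    n = len(deck)
    for i in range(n):
        if random.random() < swap_chance:
            j = (i + 1) % n
            deck[i], deck[j] = deck[j], deck[i]

deck = list(range(1, 101))   # cards 1-100, starting in order
for _ in range(10_000):      # "a sufficient amount of times"
    one_pass(deck)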
After you do this a sufficient number of times, there are no patterns that you could predict to emerge from it. You could not have said at the beginning that there would be a sequence “33”, “34”, “35”, “36”, but that sequence could exist. Though it is in a random state compared to the beginning state, there are still patterns in it. If one statement can describe 2 or more cards in order, it will take fewer than 100 statements to describe; I’m not entirely sure how to do the math on the probabilities, but I’m going to kinda WAG that e rears its little head and say that it’ll be around 63 statements. If you allow statements like “2-8 divisible by 2” to represent “2, 4, 6, 8”, those patterns would further reduce the number of required statements.
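And, continuing the sketch above, here is one way to actually count the statements, if we only count simple ascending runs like “33, 34, 35, 36” (fancier statements like “divisible by 2” would take more cleverness). Rather than guessing at the math, you can just run it and see how the WAG holds up.

def count_statements(deck):
    # One "statement" per maximal run in which each card is exactly one higher
    # than the card before it; a lone card counts as its own statement.
    statements = 1
    for prev, cur in zip(deck, deck[1:]):
        if cur != prev + 1:
            statements += 1
    return statements

print(count_statements(list(range(1, 101))))  # 1 statement: "cards 1-100 in order"
print(count_statements([1, 2, 3, 7, 8, 5]))   # 3 statements: "1-3", "7-8", "5"
print(count_statements(deck))                 # the shuffled deck from the sketch above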
IOW, it is unlikely that the sequence ever approaches 100% entropy, even though you are shuffling it randomly. Patterns will be destroyed by the randomization process, but they will be randomly created too. Small areas of the deck will find themselves at lower entropy states than before, but this does not mean that the process has been reversed.
Make up some new rules: for instance, a card is more likely to swap if that swap moves a higher-value card towards other high-value cards, or if it moves a lower-value card away from other low-value cards. Now you have a gravity analogue.
Throw in some suits and colors, where 1/3 is red, 1/3 black, and 1/3 white (neutral), and make a rule that a red or black card is more likely to swap if the swap moves it away from concentrations of the same color. This is a bit like electromagnetism.
Could do suits to represent the strong force, and even the weak, but that’s complicated, and I’m sure I’ve already stretched this far enough. Though, if you wanted to represent the expanding universe, just add a new card into the deck from time to time that is 0 value, neutral “charge”, and neutral to any other rules other than gravity. This seems as though it would simulate the way the universe is expanding, as clumps of cards with value get separated by wider and wider gulfs of valueless cards, until it is impossible for a card to make it across the span. You would see that the clumps are bound together by the “forces” implied by the rules, even though the clumps are moving apart from each other.
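Here is a sketch of how one of those biased rules might look, sticking with the code above. (The window size and the size of the bonus are pure guesses on my part; the point is just that the swap chance now depends on the neighborhood.)

import random

def gravity_swap_chance(deck, i, base=0.10, bonus=0.05, window=5):
    # Toy "gravity" bias: the swap gets a bonus if it would move the higher-value
    # card of the pair toward whichever side currently holds more total value
    # within a few positions.
    n = len(deck)
    j = (i + 1) % n
    left = sum(deck[(i - k) % n] for k in range(1, window + 1))
    right = sum(deck[(j + k) % n] for k in range(1, window + 1))
    big_is_on_right = deck[j] > deck[i]
    toward = left if big_is_on_right else right   # where the bigger card would be headed
    away = right if big_is_on_right else left
    return base + (bonus if toward > away else 0.0)

def one_gravity_pass(deck):
    # Same walk as before, but the swap chance now depends on the neighbors.
    n = len(deck)
    for i in range(n):
        if random.random() < gravity_swap_chance(deck, i):
            j = (i + 1) % n
            deck[i], deck[j] = deck[j], deck[i]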
Anyway, point is, with some simple rules, you can create a system that is largely random, but from which still emerges patterns and transient areas of lower entropy.
If the deck of cards is well-shuffled, then typically the best you’ll be able to do for describing the “patterns” is just listing out the cards in order. The overhead for the language for describing the patterns makes up for any space that you save by describing those patterns, and the more compression you can get out of a pattern, the rarer that pattern will be.
Well, yeah, and that’s why you cannot compress random data. It takes more than a single bit to point out that the next two bits are the same. You’d need to have at least 4 bits in a row to have any chance of compressing them, and that’s unlikely enough in random data that it’s not going to get you anything much. I’m pretty sure that all known compression algorithms will spit out a larger file if you try to compress random data (which is why you compress before you encrypt). Obviously, a text file is going to have far more patterns, and a much lower entropy that can be taken advantage of for compression.
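Easy enough to check with a stock compressor (zlib here; other compressors differ in the details but not the outcome):

import os
import zlib

blob = os.urandom(100_000)        # 100 kB of random bytes
packed = zlib.compress(blob, 9)   # best effort
print(len(blob), len(packed))     # the "compressed" version comes out slightly larger:
                                  # headers and bookkeeping cost a little, and there
                                  # are no patterns to exploit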
I’m just talking about the number of statements needed to describe the system, even if the statements themselves take up more “room” than the data that they refer to.
But then you have to decide what counts as a “statement”. Like, is it a valid statement to say “The order is 72, 19, 24, 57, 56, 1, 23, 98…”? If not, why not, and what then is a valid statement?