I’ve tried, trust me, to learn up. People here are the best at helping civilians out with this sort of thing.

I have a cock-eyed I-think-I’m-comfortable understanding of phase diagrams. Higher-order states, phase spaces: I know them as words only, by inference. Just as nouns, not as to their ramifications (heh).

I’m thinking that I would like simply to know enough of the idea to use it, more or less correctly, in non-technical situations, as we all do with a million other words.

Let’s start with the idea of a system’s dynamics describing a “flow” on phase space. A given state of the system corresponds to one point in phase space (x, p). (For brevity, we’ll use “x” and “p” to denote all of the coordinates and their conjugate momenta.) If we start a system in this state, it’ll change its x and p values (in accordance with Hamilton’s equations), so we say that the dynamics generates a trajectory on phase space. You can visualize this action as sort of a current on phase space, which pushes the systems through phase space in exactly the way that their x values and p values would naturally evolve. Similarly, if we have a whole bunch of systems with different starting conditions, the “cloud” of points corresponding to them will get pushed along by this current, and likely will be stretched out, distorted, and so forth.
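Not from the thread, but here’s a minimal Python sketch of that “current” picture, assuming the simplest possible system (a one-dimensional harmonic oscillator with mass and spring constant set to 1, so H = p²/2 + x²/2 and Hamilton’s equations read dx/dt = p, dp/dt = −x):

```python
# Toy illustration of the "flow" picture: a 1-D harmonic oscillator with
# H = p^2/2 + x^2/2 (mass and spring constant set to 1), so Hamilton's
# equations read dx/dt = p and dp/dt = -x. We push one phase-space point
# (x, p) along this current with semi-implicit ("symplectic") Euler steps,
# which keeps the energy nearly constant; the trajectory traces out a
# circle in the (x, p) plane.

def trajectory(x, p, dt=0.01, steps=10000):
    points = [(x, p)]
    for _ in range(steps):
        p -= x * dt    # dp/dt = -x
        x += p * dt    # dx/dt = p (using the updated p: symplectic Euler)
        points.append((x, p))
    return points

pts = trajectory(1.0, 0.0)
energies = [0.5 * (x * x + p * p) for x, p in pts]
print(min(energies), max(energies))  # both stay close to the initial 0.5
```

The symplectic update is a deliberate choice: a naive Euler step would spiral outward in phase space, while this one stays on (very nearly) the right energy contour, which is the honest picture of the Hamiltonian flow.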

There are many definitions of an ergodic system, but usually it means something like the following: for any “cloud” of initial systems, all having the same initial energy E, this “cloud” will eventually get smeared out evenly over the space of all possible states with energy E. This is useful in statistical mechanics because it justifies the assumption that all states of a given energy are equally likely. If we start off with a whole bunch of systems with similar initial conditions, so that the “cloud” they form in phase space is small, we can still be assured that eventually they’ll get mixed up sufficiently that they’ll sample the accessible regions of phase space (i.e., those with energy E) nice and evenly.
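To see a cloud of initial conditions smearing out numerically, here’s a toy Python sketch (my choice of system, not from the thread): a pendulum, where nearby orbits have slightly different periods, so a tight cloud shears out along its energy contours. Strictly, a lone pendulum is integrable rather than ergodic — each point stays on its own contour — but the stretching and smearing is the same basic picture:

```python
import numpy as np

# Toy picture of a small "cloud" of systems getting smeared out.
# System: a pendulum, with d(theta)/dt = p and dp/dt = -sin(theta).
# Nearby orbits have slightly different periods, so a tight cloud of
# starting points shears out along its energy contours over time.
# (A lone pendulum is integrable, not truly ergodic -- each point stays
# on its own contour -- but the stretching is the same basic picture.)

rng = np.random.default_rng(0)
n = 500
theta = rng.uniform(0.9, 1.1, n)   # tight cloud of release angles
p = np.zeros(n)                    # all released from rest
initial_spread = theta.std()

dt, steps = 0.01, 30000
for _ in range(steps):             # symplectic Euler integration
    p -= np.sin(theta) * dt
    theta += p * dt
final_spread = theta.std()

print(initial_spread, final_spread)  # the angular spread grows a lot
```

Scatter-plotting (theta mod 2π, p) before and after makes the smearing vivid: the initially compact blob ends up strung out around the band of energy contours it started on.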

My old astronomy prof used the term in relation to the gaps in Saturn’s rings. Since the gaps tend to be related to the nearby moons, the “ergodic” effect is that gravel in those locations tends to wander, physically. It gets attracted to lots of different places, both in real space and in orbital phase space.

I think **MikeS **already covered the statistical mechanics definition above, so I’ll just say that there’s a related mathematical definition (also used in several other areas) of ergodicity: a measure-preserving transformation T on a probability space is ergodic if the only invariant subsets have measure 0 or 1. The ergodic theorem then says (in slightly more detail) that time averages equal space averages; that is, averaging a function f along the trajectory x, Tx, T^2 x, T^3 x, … gives the space average of f almost surely, independently of x. The invariant-subset hypothesis can be thought of as saying that unless a set X contains almost everything or almost nothing in your space, applying T to a small neighborhood in X will ultimately cause it to escape X. Going back to statistical mechanics, that’s like saying that particles in a gas (the canonical example of an ergodic system) will ultimately bounce around the entire box if left long enough. The ergodic theorem there is, in very rough terms that shouldn’t be taken seriously, saying that particles move around the box erratically enough that they’re just as likely to be in any one part of it as another, so averaging in time is equivalent to averaging in space.
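You can check “time average = space average” numerically. Here’s a sketch using my own pick of a standard ergodic example, the logistic map x → 4x(1 − x) on [0, 1]: its invariant density is 1/(π√(x(1 − x))), whose mean is 1/2, so the running time average of x along almost every orbit should settle near the space average 1/2. (Floating-point orbits of a chaotic map are only shadows of true orbits, so treat this as illustrative.)

```python
# Time average along one orbit of the logistic map x -> 4x(1 - x),
# which is ergodic with invariant density 1/(pi * sqrt(x(1 - x))).
# That density has mean 1/2, so the time average of x along almost
# every orbit should converge to the space average 1/2.

x = 0.2
total = 0.0
n = 200000
for _ in range(n):
    x = 4.0 * x * (1.0 - x)
    total += x
avg = total / n
print(avg)  # close to 0.5
```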

Also, for a more visual example, consider T(x) = x + t (mod 1) for fixed irrational t on the unit circle. (If t = p/q is rational, then T^q = id, and T is not ergodic.) For arbitrary x, the points x, Tx, T^2 x, T^3 x, … evenly cover the circle. (Plotting the points in something like Mathematica may be useful here.) The upshot is that averaging a function f over the points T^n x for fixed x is equivalent to averaging f over the entire circle, independent of x; in both cases, the fraction of the points landing in any arc X of the circle equals the length of X (all of this more rigorously dressed up in measure-theoretic language, etc.).
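If you’d rather not fire up Mathematica, here’s the same experiment as a short Python sketch (my addition, with t chosen as the golden-ratio conjugate, a convenient irrational): count how often the orbit lands in a fixed arc and compare with that arc’s length.

```python
import math

# Irrational rotation T(x) = x + t (mod 1) with t = (sqrt(5) - 1) / 2.
# Ergodicity says the orbit x, Tx, T^2 x, ... spends time in any arc
# in proportion to that arc's length, independently of where it starts.

t = (math.sqrt(5.0) - 1.0) / 2.0
x = 0.123                      # arbitrary starting point
n = 100000
hits = 0
for _ in range(n):
    if 0.2 <= x < 0.5:         # arc of length 0.3
        hits += 1
    x = (x + t) % 1.0
frac = hits / n
print(frac)  # close to 0.3, the length of the arc
```

Changing the starting point x barely moves the answer, which is exactly the “independent of x” part of the ergodic theorem.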

A simple, concrete example: Suppose you have a gas of a whole bunch of particles bouncing around. If you take a snapshot of all of the molecules, and measure all of their velocities at once, you’ll find some distribution of velocities. Alternately, you can pick one single molecule, and follow it through the course of many collisions, and get a distribution of the velocities of that one single particle. If the system is ergodic (it almost certainly is), then those two distributions will be the same.
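That snapshot-versus-one-particle comparison is easy to fake up numerically. Here’s a hedged sketch — not real molecular dynamics, but a Kac-style toy of my own choosing, where a “collision” picks a random pair of velocities and rotates them by a random angle, which conserves their combined kinetic energy:

```python
import math
import random

# Toy "gas": N velocities; each collision picks a random pair (i, j)
# and rotates (v_i, v_j) by a random angle, conserving v_i^2 + v_j^2
# (a Kac-style caricature of elastic collisions, not real dynamics).
# We compare the spread of a final snapshot of all velocities with the
# spread of one tagged particle's velocity over its whole history.

random.seed(1)
N = 200
v = [1.0] * N                    # everyone starts with the same speed
tagged_history = []

for _ in range(50000):
    i, j = random.sample(range(N), 2)
    phi = random.uniform(0.0, 2.0 * math.pi)
    vi, vj = v[i], v[j]
    v[i] = vi * math.cos(phi) + vj * math.sin(phi)
    v[j] = -vi * math.sin(phi) + vj * math.cos(phi)
    tagged_history.append(v[0])  # follow particle 0 through time

def spread(vals):
    m = sum(vals) / len(vals)
    return math.sqrt(sum((u - m) ** 2 for u in vals) / len(vals))

snap = spread(v)                 # spread across the snapshot
tagged = spread(tagged_history)  # spread of one particle over time
print(snap, tagged)              # the two spreads roughly agree
```

Histogramming v and tagged_history shows both settling into the same bell shape, which is the “ensemble average equals time average” claim in miniature.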