In the Big Rip the scale factor goes to infinity in a finite time, so there is infinitely more expansion (expansion here meaning change in scale factor) in the last nanosecond than in the preceding billion years. That said, there would still be an enormous amount of expansion in the billion years preceding that last nanosecond.
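For concreteness, here is where the finite-time blow-up comes from, assuming a spatially flat universe dominated by a single phantom fluid with constant equation of state $w < -1$ (other phantom models differ in detail). The Friedmann equation with $\rho \propto a^{-3(1+w)}$ gives

$$H^2=\left(\frac{\dot a}{a}\right)^2=\frac{8\pi G}{3}\rho \;\;\Longrightarrow\;\; a(t)\propto\left(t_{\mathrm{rip}}-t\right)^{\frac{2}{3(1+w)}},$$

and since $2/[3(1+w)]<0$ for $w<-1$, the scale factor diverges as $t\to t_{\mathrm{rip}}$, with almost all of that (infinite) growth packed into the final instants.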
Here is a graph describing the evolution of the scale factor in various scenarios. Note that once phantom energy overwhelms the forces holding an object together, the scale factor tracks the relative size of the (remains of the) object.
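A minimal sketch of that kind of comparison, assuming two toy single-fluid models with the same present-day Hubble rate (a cosmological constant, w = -1, giving exponential growth, versus a phantom fluid with w = -1.5, which diverges at a finite rip time); all parameter values are arbitrary illustrative choices:

```python
# Toy comparison of scale-factor evolution: cosmological constant (w = -1)
# versus a phantom fluid (w = -1.5).  Both curves share the same Hubble rate
# at t0; units are arbitrary.
import numpy as np
import matplotlib.pyplot as plt

H0    = 1.0                                   # Hubble rate at t0, arbitrary units
t0    = 1.0
w     = -1.5
t_rip = t0 - 2.0 / (3.0 * (1.0 + w) * H0)     # finite rip time for w < -1

t = np.linspace(t0, 0.999 * t_rip, 500)

a_lambda  = np.exp(H0 * (t - t0))                                        # w = -1: exponential
a_phantom = ((t_rip - t) / (t_rip - t0)) ** (2.0 / (3.0 * (1.0 + w)))    # w < -1: diverges at t_rip

plt.plot(t, a_lambda, label="w = -1 (de Sitter)")
plt.plot(t, a_phantom, label="w = -1.5 (big rip)")
plt.xlabel("time (arbitrary units)")
plt.ylabel("scale factor a(t)")
plt.yscale("log")
plt.legend()
plt.show()
```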
Several things that come to my mind are:
The relationship between redshift and distance will become very flat for all but the very largest and very shortest distances, because cosmological redshift depends on the ratio of the scale factor when the light is received to the scale factor when it was emitted (a toy calculation illustrating this follows these points).
The future event horizon will spatially enclose a smaller and smaller volume, so you can only affect objects that are closer and closer to you.
Your past event horizon (which spatially encloses the observable Universe) will grow in proper distance at a rate faster than the increase in the scale factor (this is always true when there is an observable Universe), and new galaxies will keep entering the observable Universe, so the number of galaxies it contains will always tend to increase (again, this is always true whenever there is an observable Universe and the spatial geometry is Euclidean).
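Here is that toy calculation: a hedged sketch, assuming a single phantom fluid with constant w = -1.5 dominating throughout (so it ignores the earlier matter-dominated epoch), with arbitrary time units and c = 1. It just applies 1 + z = a(received)/a(emitted) and integrates dt/a along the light ray for the comoving distance:

```python
# Toy redshift-vs-distance calculation for an observer late in a
# phantom-dominated era, to illustrate the "flat" redshift-distance relation
# described above.  Single phantom fluid with w = -1.5; all times are
# arbitrary illustrative choices, in units where c = 1.
from scipy.integrate import quad

w     = -1.5
n     = 2.0 / (3.0 * (1.0 + w))    # exponent = -4/3, so a(t) diverges at t_rip
t0    = 1.0                        # epoch where a = 1
t_rip = 100.0                      # time of the big rip

def a(t):
    """Scale factor of the toy phantom model, normalised so a(t0) = 1."""
    return ((t_rip - t) / (t_rip - t0)) ** n

t_obs = 99.999                     # observer very close to the rip

print(f"{'t_emit':>8} {'comoving distance':>18} {'redshift z':>12}")
for t_emit in (2.0, 10.0, 30.0, 60.0, 90.0, 99.0, 99.9, 99.99):
    chi, _ = quad(lambda t: 1.0 / a(t), t_emit, t_obs)   # chi = integral of dt/a
    z = a(t_obs) / a(t_emit) - 1.0                       # 1 + z = a(received)/a(emitted)
    print(f"{t_emit:8.2f} {chi:18.3e} {z:12.3e}")

# For everything emitted before the final blow-up, the distance spans a couple
# of orders of magnitude while the redshift changes far more slowly than a
# linear Hubble-type law would give; only the nearest sources (emitting during
# the blow-up itself) show a rapidly varying redshift.
```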
I just wanted to correct one point in the OP. In the Big Freeze scenario, the free energy never actually vanishes. It goes to 0 asymptotically, but never gets there.
Why does this matter? It matters because some physicist calculated the minimum energy needed for a computation. It turns out that there is no minimum: any amount of energy suffices. It is just that the less energy you have, the slower the computation proceeds. So even in the far, far future when the amount of free energy is tiny, computation, and perhaps even life of some form, will still be possible. It will proceed very slowly and the reacting entities might have sizes measured in megaparsecs (with interactions taking gigayears to complete), but it won’t be impossible. And if there is life at that end of time, it would not notice the distances and times involved, since to it they would be normal.
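The post doesn’t say which calculation is meant, but one standard result that makes the “slower, not impossible” trade-off quantitative (my assumption as to the relevant result) is the Margolus–Levitin bound: a system with average energy $E$ above its ground state can pass through at most

$$\nu_{\max}=\frac{2E}{\pi\hbar}$$

distinct states per second. This is strictly positive for any $E>0$, so a shrinking energy budget slows computation without ever forbidding it. And by Landauer’s argument only irreversible bit erasure carries an unavoidable energy cost ($k_B T\ln 2$ per bit), which reversible computation can in principle avoid.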
Of course, the above is mega-kilos of speculation supported by a nanogram of fact, so take it in that spirit.
Even if the rate of calculation never reaches zero, though, the total amount of computation might still be finite, with much the same effect.
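One way to make this precise: if $\nu(t)$ is the rate of elementary operations, the total computation ever performed is

$$N_{\mathrm{total}}=\int_{t_0}^{\infty}\nu(t)\,dt,$$

which can easily converge even though $\nu(t)$ never reaches zero. For example $\nu(t)\propto e^{-t/\tau}$ or $\nu(t)\propto 1/t^{2}$ give a finite $N_{\mathrm{total}}$, whereas a decay as slow as $1/t$ (or slower) gives an infinite one. So the question is not whether the rate hits zero, but how quickly it falls.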
In the last article I saw on the topic, there was also the issue that slowing down your computational processes itself requires some amount of computation, and it wasn’t actually clear that this overhead wouldn’t use up your entire budget.
I would take this point further and say it seems very likely if the Universe only exists for a finite amount of conformal time. Or to put it another way: how many calculations can be done once the Universe reaches a point where, on average, there is much less than one particle inside any hypothetical observer’s event horizon?
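As a rough back-of-envelope for when that happens (all inputs are order-of-magnitude assumptions: today’s mean baryon density, a horizon radius of order c/H, exponential dilution of the mean density, and it ignores that gravitationally bound structures don’t dilute):

```python
# Rough estimate: how many e-folds of exponential expansion until, on average,
# fewer than one baryon remains inside a fixed horizon-sized volume?
# All numbers are order-of-magnitude assumptions, not precise values.
import math

c        = 3.0e8      # speed of light, m/s
H        = 2.2e-18    # Hubble rate, 1/s (roughly 68 km/s/Mpc)
n_baryon = 0.25       # mean baryon number density today, per m^3

R_horizon = c / H                                  # horizon radius, ~ de Sitter scale
V_horizon = 4.0 / 3.0 * math.pi * R_horizon**3     # horizon volume, m^3
N_now     = n_baryon * V_horizon                   # baryons per horizon volume today

# Mean density dilutes as a^-3, so we need a^3 > N_now, i.e. ln(a) > ln(N_now)/3.
efolds  = math.log(N_now) / 3.0
t_years = efolds / H / 3.15e7                      # each e-fold takes ~1/H seconds

print(f"baryons per horizon volume today : {N_now:.1e}")
print(f"e-folds until fewer than one     : {efolds:.0f}")
print(f"rough timescale                  : {t_years:.1e} years")
```

This comes out around sixty e-folds, very roughly a trillion years of exponential expansion, for the mean density; bound systems such as galaxies hold together far longer than that, so this is only a statement about the average.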
If the universe ever gets arbitrarily close to maximum entropy, then won’t quantum fluctuations locally reduce entropy? And more importantly, isn’t the chance of an arbitrarily large structure forming through such fluctuations non-zero?
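(For reference, assuming ordinary equilibrium statistical mechanics still applies in that regime, the probability of a fluctuation that sits $\Delta S$ below the maximum entropy scales as

$$P\sim e^{-\Delta S/k_B},$$

which is non-zero for any finite $\Delta S$, so arbitrarily large structures are indeed possible in principle; but for anything macroscopic $\Delta S/k_B$ is itself an astronomically large number, so the expected waiting time is a double exponential.)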
The things analogous to black holes in a Big Rip-style spacetime are different enough from black holes sitting in a flat background space that they probably shouldn’t even be called by the same name. What happens to those objects is quite complicated, and requires several pages to describe even to a specialist.