I was reading in Wired that geosync orbits are subject to perturbations. I knew that, but it mentioned that the perturbations, if left uncorrected, would lead to one of two stable points, one over the Pacific Ocean and one over the Indian Ocean. What is so special about these two points that makes them stable for geosync orbits?
I don’t know the answer, but my father does. If you don’t get a good answer in the next day or so I will ask him.
From here:
http://www.eumetsat.int/Home/Main/Satellites/SatelliteProgrammesOverview/SP_20100427132020806?l=en
engineer_comp_geek has already answered the OP's question with a succinctness I could not hope to improve upon, but an additional point is that each of these zeros (areas of local stability) has an entire family of solutions: elliptical orbits that drift back and forth around it. Think of a skateboard dropped into an empty swimming pool; although the skateboard may have enough momentum to swoop around the drain (the lowest point) for a few passes, it will eventually come to rest right over the drain. It should also be pointed out that this is an entirely separate phenomenon from precession of the orbit, which would happen even if the Earth’s field were exactly spherically symmetrical. There are also other minor perturbations that have to be taken into account, including influences of the magnetosphere and the tiny but cumulative drag from passing through the tenuous outer reaches of the atmosphere.
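If it helps to see numbers, here is a rough back-of-the-envelope sketch of that pendulum-like drift about one of the stable longitudes (my own toy model, with made-up constants, not anything from the article):

```python
import math

# Toy model: an uncorrected geostationary satellite drifts in longitude like a
# pendulum about the nearest stable longitude, driven by the slight ellipticity
# of Earth's equator.  The constant K is invented to give a libration period on
# the order of a couple of years, so treat the numbers as illustrative only.

STABLE_LON = 75.0    # deg E, roughly the Indian Ocean stable point
K = 2.0e-3           # deg/day^2, assumed strength of the longitudinal restoring force

def simulate(lon0, days, dt=1.0):
    """Integrate d2(lon)/dt2 = -K * sin(2 * (lon - STABLE_LON)) with semi-implicit Euler."""
    lon, rate = lon0, 0.0          # start at rest at longitude lon0 (deg); rate in deg/day
    track = []
    for _ in range(int(days / dt)):
        accel = -K * math.sin(math.radians(2.0 * (lon - STABLE_LON)))
        rate += accel * dt
        lon += rate * dt
        track.append(lon)
    return track

# Like the skateboard circling the drain: a satellite released at 60 E swings back
# and forth roughly between 60 E and 90 E until something damps the motion or a
# station-keeping burn puts it back where it belongs.
track = simulate(lon0=60.0, days=2000)
print(round(min(track), 1), round(max(track), 1))
```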
Stranger
Related question, if I may: How big are these perturbations? Relative to the earth, how far does such a satellite wander over the course of a day or a year? Millimeters? Feet? Miles?
How do GPS systems, especially the military-grade ones, compensate for this? Is it merely by grabbing as many readings as they can, and averaging them?
You can’t make a quantitative assessment in those units, as both the influence and the result depend upon the orbit. From an energy standpoint, the change in energy (which comes from momentum transfer from the Earth’s rotational inertia to the satellite) is probably somewhere on the order of 0.00001% of the total orbital energy of the satellite per meter of deviation from the original orbital plane. Most satellites live in a “box” of permissible variation that extends between 50 m and 1000 m from the nominal trajectory, depending on function. All long-term satellites have station-keeping propulsion systems that allow them to return to the box.
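Conceptually the box is just a dead band around the assigned slot; a trivial sketch (the tolerance here is invented for illustration):

```python
# A station-keeping "box" as a dead band: when the tracked deviation from the
# nominal trajectory leaves the box, a correction burn is scheduled.

BOX_HALF_WIDTH_M = 500.0   # assumed tolerance; real boxes range roughly 50-1000 m by mission

def needs_stationkeeping(deviation_m: float) -> bool:
    """Return True if the satellite has drifted outside its permitted box."""
    return abs(deviation_m) > BOX_HALF_WIDTH_M

print(needs_stationkeeping(120.0))   # False: still inside the box
print(needs_stationkeeping(730.0))   # True: schedule a correction burn
```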
GPS satellites are among the most precisely guided satellites, both in terms of trajectory and telemetry measurement. GPS systems are regularly synchronized with a known geodetic position, and use a very complex terrestrial reference frame known as the World Geodetic System 84 (WGS 84) that has over thirty-two thousand terms to describe the curvature of the gravitational field at all points. GPS satellites do perform regular stationkeeping maneuvers but they also track their own deviations to report position accurately. GPS receivers typically need three satellites above the horizon to obtain a good position reference, and four or five (depending on receiver and satellite position) to accurately measure altitude.
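As an aside, that thirty-two-thousand-odd figure is about what you get if the model in question is a spherical-harmonic expansion complete to degree and order 180 (my assumption about which gravity model is meant, the original WGS 84 Earth Gravitational Model); the count is easy to check:

```python
# Count the C and S spherical-harmonic coefficients for degrees 2..N.
# Each degree n contributes (n + 1) cosine terms and n sine terms.
N = 180
num_terms = sum((n + 1) + n for n in range(2, N + 1))
print(num_terms)   # 32757, i.e. a bit over thirty-two thousand
```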
Stranger
But a consumer-grade receiver only knows where the satellites are supposed to be, not where they actually are. How can my car’s GPS position me to within 10 meters, if my distance to the satellite might be off by 1000 meters?
Quoting Stranger’s post: “GPS satellites do perform regular stationkeeping maneuvers but they also track their own deviations to report position accurately.”
I believe (but don’t know for sure) that each GPS satellite sends both the exact time and its exact location as part of its signal. So a consumer-grade GPS unit only has to have enough firepower to accurately time the transmissions it receives from the various GPS satellites (down to several nanoseconds, I believe) and then do the math: given the delay it took to receive each of those transmissions, it works out exactly how far it is from each GPS satellite, and where it would have to be to receive those transmissions riiiiiight then.
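The arithmetic really is that simple in principle: travel time multiplied by the speed of light gives the distance to each satellite. A quick sketch with invented times:

```python
# Minimal sketch: the receiver times each signal's arrival and multiplies the
# travel time by the speed of light to get a distance (pseudorange) to that
# satellite.  The times below are invented for illustration.

C = 299_792_458.0   # speed of light, m/s

def pseudorange_m(time_sent_s: float, time_received_s: float) -> float:
    """Distance implied by the signal's travel time (ignoring receiver clock error)."""
    return C * (time_received_s - time_sent_s)

# A GPS satellite at ~20,200 km altitude gives travel times of roughly 67-90 ms.
print(pseudorange_m(0.0, 0.0700))   # ~20,985,000 m
# A 10 ns timing error already shifts the computed distance by about 3 m:
print(pseudorange_m(0.0, 0.0700 + 1e-8) - pseudorange_m(0.0, 0.0700))
```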
The receiver doesn’t know where the satellites are supposed to be; it gets a signal from each satellite saying “I’m at this position at this time” (ephemeris data), relative to a fixed geodetic frame. When it gets three or more signals, the timing information gives the distance to each satellite (a quantity called the pseudorange), and the receiver figures out where the locus of intersection is. (The satellites also transmit health and status info, plus the general positions of the other satellites in the constellation, to assist the receiver in making a lock.) So all the receiver needs to know are the parameters of the geodetic coordinate system, to a sufficient degree of precision. It is the satellites that have to know exactly where they are, which they do by a variety of measurements and corrections, including corrections for relativistic effects.
In order to get a position down to 10 m you very likely need a fourth or fifth signal. The results are compared by a process called “anti-dithering” (conceptually similar to anti-aliasing) to determine the most likely exact position of the receiver. Really good receivers with a five-satellite lock can get down to a few tens of centimeters of precision, which is pretty amazing when you look at the measurement scales and how even a tiny error can be amplified.
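For the curious, here is a rough sketch of the “locus of intersection” step: given broadcast satellite positions and measured pseudoranges, iteratively solve for the receiver position plus its clock bias. All coordinates and numbers below are invented for illustration, not real ephemeris data:

```python
import numpy as np

C = 299_792_458.0  # m/s

def solve_position(sat_pos, pseudoranges, iterations=10):
    """Gauss-Newton solve for (x, y, z, clock_bias_m) from four or more pseudoranges."""
    sat_pos = np.asarray(sat_pos, dtype=float)
    rho = np.asarray(pseudoranges, dtype=float)
    x = np.zeros(4)                                # start at Earth's centre, zero clock bias
    for _ in range(iterations):
        diffs = sat_pos - x[:3]                    # vectors from current guess to satellites
        ranges = np.linalg.norm(diffs, axis=1)     # geometric distances for current guess
        predicted = ranges + x[3]                  # pseudorange model: distance + clock bias (m)
        residuals = rho - predicted
        H = np.hstack([-diffs / ranges[:, None], np.ones((len(rho), 1))])
        dx, *_ = np.linalg.lstsq(H, residuals, rcond=None)
        x += dx
    return x[:3], x[3]

# Fabricated example: four satellites about 26,600 km from Earth's centre and a
# receiver on the surface at (6371 km, 0, 0) with a 100 m clock-bias equivalent.
true_pos = np.array([6_371_000.0, 0.0, 0.0])
sats = np.array([[26_600_000, 0, 0], [0, 26_600_000, 0],
                 [0, 0, 26_600_000], [15_000_000, 15_000_000, 15_000_000]], dtype=float)
measured = np.linalg.norm(sats - true_pos, axis=1) + 100.0
pos, bias = solve_position(sats, measured)
print(pos, bias)   # recovers roughly (6371 km, 0, 0) and the 100 m bias
```

More satellites simply add rows to the least-squares fit, which is why a fourth or fifth signal tightens the answer.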
Stranger
Got it. Thanks!