GPS position calculation: how long does it take?

I have a portable GPS receiver that updates position information about once per second. I also have GPS navigation in my car, and it appears to update about twice per second.

How fast is the actual calculation? That is, how frequently could a GPS receiver update its position? Assume for the sake of this discussion that we have the computing power of a mid-grade laptop PC available.

I have two Garmin GPS receivers, one a handheld for geocaching and another for road use only. Both have been superseded by newer models. I’ve used the handheld aboard commercial aircraft (above/below 10,000 feet) and found the updates continuous, even while flying at 600 mph.

I also have one of these I attach to a laptop running Win7. It also does continuous updates.

Assuming a very fast computer and strong GPS signals, probably dozens of times a second or more. I have a Garmin Edge 705 for my bicycles that can record location every second for downloading later. If it can record info every second, you’d think that it must calculate the position at least that often if not more.

The spec for that last one you’ve linked to says it updates once per second. For consumer units, anything much beyond that would just be a waste of power for most uses, though I believe there are consumer chipsets that support 10 updates per second, whether or not the device actually makes use of that capability.

Then there are high-precision/high-dynamics commercial and scientific receivers that update every 10 milliseconds for independent GPS solutions, and faster still when interpolated with inertial sensors.

Commercial GPS systems such as the Bad Elf Pro can update as quickly as 10 Hz (10 samples per second). However, most typical personal navigation devices update at about a 1 Hz rate, which (at least in theory) should be more power efficient for portable devices. Aerospace and military grade systems tend to update at higher rates (20 to 100 Hz) and are more capable of handling signal drops. Many also integrate a “strapdown” inertial navigation system, which updates state information (instantaneous acceleration and rotation rates) and interpolates to enhance position and rate accuracy, often at rates in the millisecond range.

Note that a higher rate of measurements does not necessarily increase accuracy; depending on how signals are amplified and integrated by the receiver there can be substantial errors. For instance, low-powered handheld units for hiking are often significantly less accurate than vehicle or survey grade GPS. Aerospace grade GPS units used on aircraft and rockets, which have significant rates of state change, will often use an elaborate form of running average (Kalman filtering) to estimate and correct for errors in order to obtain the more accurate state information needed in those applications.
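To make the “elaborate running average” a bit more concrete, here is a minimal one-dimensional Kalman filter sketch: it fuses noisy position fixes with a constant-velocity motion model. All the noise parameters (`q`, `r`) and the time step are made-up illustration values, not anything from a real receiver:

```python
# Minimal 1-D Kalman filter: fuse noisy position fixes with a
# constant-velocity motion model. State is (position x, velocity v).
# Noise parameters are illustrative, not from any real receiver.

def kalman_1d(measurements, dt=0.1, q=0.01, r=4.0):
    """Return filtered position estimates.

    q: process-noise variance (how much we trust the motion model)
    r: measurement-noise variance (how much we trust each fix)
    """
    x, v = measurements[0], 0.0
    # Covariance P = [[a, b], [b, c]] for state (x, v).
    a, b, c = 1.0, 0.0, 1.0
    out = []
    for z in measurements:
        # Predict: x advances by v*dt; propagate P = F P F^T + Q.
        x += v * dt
        a = a + 2 * dt * b + dt * dt * c + q
        b = b + dt * c
        c = c + q
        # Update with the position measurement z (observation H = [1, 0]).
        s = a + r                      # innovation variance
        k0, k1 = a / s, b / s          # Kalman gains for x and v
        y = z - x                      # innovation (measurement minus prediction)
        x += k0 * y
        v += k1 * y
        c -= k1 * b                    # update c before b is overwritten
        a *= (1 - k0)
        b *= (1 - k0)
        out.append(x)
    return out
```

This is only the filtering skeleton; a real aerospace receiver carries a much larger state vector (position, velocity, attitude, clock bias) and fuses pseudoranges with inertial measurements rather than filtering finished position fixes.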

Stranger

Of course, the higher update rate helps with this–the more samples you have, the more information you have to feed into your filter. Even a simplistic straight average of 10 samples is useful; if you start with a 10 Hz unit, you end up with a higher-accuracy 1 Hz unit. 10 Hz GPS units are fairly cheap these days; I use this module on my quadcopter.
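That straight-average idea can be sketched in a few lines; the noise level and sample counts below are hypothetical, just to show the reduction you get when the per-sample errors are independent:

```python
import random

def boxcar_downsample(samples, n=10):
    """Average consecutive groups of n samples: e.g. 10 Hz in -> 1 Hz out."""
    return [sum(samples[i:i + n]) / n
            for i in range(0, len(samples) - n + 1, n)]

def rms(xs):
    """Root-mean-square of a list of errors."""
    return (sum(x * x for x in xs) / len(xs)) ** 0.5

# Simulate a stationary receiver: true position 0, independent
# per-fix noise with sigma = 3 m (an illustrative figure).
random.seed(1)
raw = [random.gauss(0.0, 3.0) for _ in range(1000)]  # 100 s of 10 Hz fixes
avg = boxcar_downsample(raw, 10)                     # 1 Hz averaged fixes

# With independent noise, averaging 10 samples cuts RMS error by ~sqrt(10).
print(rms(raw), rms(avg))
```

The sqrt(10) improvement only holds when the errors really are independent from sample to sample, which, as noted later in the thread, is not always true for GPS.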

Each of the GPS signals, for the consumer-grade stuff, uses a repeating 1023-bit code that cycles every millisecond, for 1,023,000 bits/s. In some sense, that is a practical bound on update rate (the military codes are 10 times faster).

However, this isn’t really an upper bound. An exquisitely sensitive antenna/receiver wouldn’t need to detect the whole code to figure out the phase; there are only 32 codes, so in principle you would only need a handful of bits (approximately 15: 5 for the code number and 10 for the position in the code). So a true upper bound on update rate is in the ballpark of 50 kHz, though I can’t imagine that anyone has achieved close to this.

On the other hand, receivers with poor sensitivity might need to receive many frames to reliably distinguish between satellites, so even with infinite computing power there may be a low effective update rate.

Yes, but a collection of samples taken in rapid sequence will tend to have similar errors, so just having more samples does not necessarily allow you to correct for the error by simple averaging without more extensive trending information.
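A quick simulation of that point: if every sample in a burst shares a common (correlated) error, averaging within the burst removes only the independent part. All the sigma values here are illustrative:

```python
import random

random.seed(7)

def burst_average_error(common_sigma, independent_sigma, n=10, trials=2000):
    """RMS error of an n-sample average when each burst shares a common bias."""
    errs = []
    for _ in range(trials):
        bias = random.gauss(0.0, common_sigma)  # shared error, e.g. ionospheric
        samples = [bias + random.gauss(0.0, independent_sigma)
                   for _ in range(n)]
        errs.append(sum(samples) / n)           # average of the burst
    return (sum(e * e for e in errs) / len(errs)) ** 0.5

# Purely independent noise: averaging 10 samples shrinks RMS by ~sqrt(10).
print(burst_average_error(0.0, 3.0))   # ~1 m instead of 3 m
# Mostly correlated noise: the shared bias survives the average almost intact.
print(burst_average_error(3.0, 3.0))   # still ~3 m
```

In the second case the 3 m shared bias passes straight through the average, which is why rapid-fire samples alone can't beat down slowly varying errors without longer-term trending information.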

Stranger

When I was doing development on a receiver that would do 100 Hz, processing power was indeed a limiting factor. Generating measurements and positions that fast was fine as long as you weren’t trying to get it to do too much else at the same time. Part of the work was getting it to degrade gracefully when the CPU ran out during, for example, carrier phase ambiguity resolution. I think this was with some kind of ARM9.

But even at 100 Hz, you are just starting to go up against what you can get out of the bandwidth of the weak GPS signals. You can generate measurements faster, but to get them accurate enough to be useful in any scenario that needs the high rate, you’re filtering enough that it becomes more marketing than truth to call them independent measurements.

It certainly depends on where the error is coming from. Some of the error will just be internal hardware noise, and averaging that out will indeed improve performance. Other error comes from refraction in the ionosphere, and short term averaging won’t help that any (although geologists can track millimeter-scale plate movements by averaging over many days, so that ionosphere error is substantially reduced).