I’m wondering how ground surveillance planes like the E-8 can claim to track targets at more than 250km or how the Coyote’s radar can detect vehicles at more than 20km.
I understand how it works in terms of horizon. I can see how one could detect something moving fast hundreds of km away if it’s in the sky (the middle of nothing). However, I’m having more difficulty seeing how things moving slowly or stationary and surrounded by everything that’s on the ground can be detected or even tracked. Can someone give me a high-level explanation that’s more precise than “lots of processing enables one to pick faint signals from lots of noise”?
Also, what would be the expected resolution from a Coyote or E-8’s radar? Good enough to target with a missile (Coyote) or cluster bomb (E-8)?
Just from reading the summary about SARs, which was linked from the page on the E-8, radar images from the moving antenna are combined in a way that simulates a much larger antenna with much finer resolution. Much in the same way we now combine radio telescopes into arrays like the Very Large Array.
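To put rough numbers on that idea: cross-range resolution improves with aperture size, so coherently combining returns along a few km of flight path buys orders of magnitude over the physical dish. A minimal sketch, assuming the textbook formulas (λR/D for a real aperture, λR/2L for a synthetic one) and made-up X-band numbers:

```python
# Illustrative only: assumed X-band wavelength and ranges, textbook
# diffraction-limit formulas for cross-range resolution.
wavelength = 0.03        # m (~10 GHz carrier, an assumption)
slant_range = 250e3      # m, roughly the E-8 tracking range cited above

# Real aperture: resolution ~ wavelength * R / D
real_antenna = 2.0       # m physical antenna (assumed)
real_res = wavelength * slant_range / real_antenna           # 3750 m per cell

# Synthetic aperture: resolution ~ wavelength * R / (2 * L)
synth_len = 2000.0       # m of flight path combined coherently (assumed)
synth_res = wavelength * slant_range / (2 * synth_len)       # ~1.9 m per cell

print(real_res, synth_res)
```

So the same antenna that can only say "something in this 4 km wide cell" as a real aperture can, after synthesis, resolve individual vehicle-sized objects.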
Right, I can see how, after a target has been tightly tracked and someone is looking at the data-fused outcome, we could identify the target.
However, detection of targets and establishment of tracking gates is done by the computer. How does it know which returns are tanks/trucks vs rock, trees, buildings, bridges?
They have different shapes and surfaces. A tank, for instance, is made of metal and produces a different quality of radar return than a tank-shaped rock. That’s just my qualified guess based on general knowledge of radar though.
I’m guessing, but scene analysis is probably the closest answer we’re going to get. Imagine for a moment the airplane is stationary. Take several scans at, say, 10 second intervals. Subtract each from the previous to see what’s different. That’s your moving stuff. Obviously that depends on very low noise levels and truly huge amounts of fast storage to retain and process the images.
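The subtract-successive-scans idea above can be sketched in a few lines. This is a toy, assuming an idealized grid of return amplitudes with perfectly stable clutter; real processing has to contend with noise, registration error, and a moving platform:

```python
import numpy as np

gen = np.random.default_rng(0)

# Toy "scans": a 2-D grid of return amplitudes at t0 and t0 + 10 s.
# Static clutter (terrain, buildings) is identical in both scans;
# one strong mover shifts cells between them.
clutter = gen.normal(0.0, 0.1, size=(8, 8))
scan_a = clutter.copy()
scan_b = clutter.copy()
scan_a[2, 2] += 5.0   # vehicle at t0
scan_b[2, 4] += 5.0   # same vehicle, two cells over at t0 + 10 s

# Subtract: static clutter cancels exactly, only the mover survives.
diff = scan_b - scan_a
movers = np.argwhere(np.abs(diff) > 1.0)
print(movers)  # both the vacated cell and the newly occupied cell light up
```

Note that both the old and new positions show up in the difference, with opposite signs, which is itself useful: it gives you direction of travel for free.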
You can also add correlation logic, so a single item apparently moving is a potential item of interest, but a whole batch of things nearby each other all moving in the same direction is more likely to be a real signal, not an artifact of noise. Pretty quickly that’d identify both roads with traffic, railways in use, as well as a spread formation of vehicles moving cross-country in suitable terrain.
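A crude version of that correlation logic: a detection counts as plausible if at least one nearby detection is heading roughly the same way. The thresholds and the convoy data below are invented for illustration:

```python
import math

# Toy detections: (x_km, y_km, heading_deg). A convoy of four vehicles
# plus one isolated blip with no corroboration.
detections = [
    (10.0, 5.0, 90), (10.3, 5.0, 88), (10.6, 5.1, 91), (10.9, 5.0, 90),
    (40.0, 20.0, 10),  # lone detection, more likely a noise artifact
]

def corroborated(det, others, max_dist=1.0, max_heading=15):
    """True if some other nearby detection shares roughly the same heading."""
    x, y, h = det
    for ox, oy, oh in others:
        if (ox, oy, oh) == det:
            continue
        if math.hypot(ox - x, oy - y) <= max_dist and abs(oh - h) <= max_heading:
            return True
    return False

for d in detections:
    print(d, corroborated(d, detections))
```

The four convoy members each vouch for their neighbors; the lone blip gets flagged for extra scrutiny rather than promoted straight to a track.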
As well, this isn’t done in an information vacuum. There’s no reason the processor isn’t also equipped with a detailed terrain map and a detailed road/rail/waterway network map, both sourced from other intel / GIS systems. So it can compare what it’s seeing with where it expects to see things.
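The map comparison could be as simple as asking "is this detection within tolerance of a known road segment?" Here is a minimal sketch, assuming a road database reduced to straight segments (a real GIS layer would be far richer):

```python
# Sketch: flag detections that sit on (or near) a known road segment.
# The road is an assumed straight east-west segment for illustration.

def dist_to_segment(px, py, ax, ay, bx, by):
    """Perpendicular distance from point P to segment AB (same units)."""
    abx, aby = bx - ax, by - ay
    t = ((px - ax) * abx + (py - ay) * aby) / (abx**2 + aby**2)
    t = max(0.0, min(1.0, t))          # clamp to the segment's endpoints
    cx, cy = ax + t * abx, ay + t * aby
    return ((px - cx)**2 + (py - cy)**2) ** 0.5

road = (0.0, 0.0, 10.0, 0.0)   # km endpoints of one road segment

on_road = dist_to_segment(4.0, 0.05, *road) < 0.1   # likely road traffic
off_road = dist_to_segment(4.0, 2.0, *road) < 0.1   # cross-country mover
print(on_road, off_road)
```

A detection moving fast along a road is probably ordinary traffic; the same return moving cross-country in suitable terrain is far more interesting.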
The point being that item of interest recognition happens very far downstream from the RF signal processing. Much like your brain doesn’t “see” a pixel array from the retina. It receives and works with much more sophisticated scene analysis.
There’s magic at every processing layer, but the magic that matters for our purposes is up at the top, not down at the bottom. The critical difference between radars in the WWII through 1970s and radars today is how many layers of “smarts” have been added on top. In addition to the basic improvements in the RF details that a 1950s radar engineer could understand.
Also it’d be helpful if you defined exactly what you mean by “Coyote”. I can find lots of things with that name. None of which seem to have anything to do with GSR.
Thanks. Would they rely on return amplitude to distinguish normal cars from military vehicles?
That could show up big civilian trucks as possible targets to be sifted some other way, correct? Perhaps by looking for the presence/absence of similar returns close by since civilian trucks don’t work in formations.
Now that I think about it, NATO & Israeli tanks tend to be angular compared to Russian and Chinese tanks. Is that just because of protection-related design decisions or are they trying to make the tanks less detectable on radar?
Very much, thank you. If they could do that before solid state equipment, I can see how they could detect and track vehicles 250km away today.
It reminded me of sonar. I suppose that since both light and sound can be described in terms of frequency/wavelength, saturation (narrowness of bandwidth) and amplitude (brightness/loudness), it’s possible to break it all down into ups/downs and reconstitute it into audio/video and listen/view it like Cypher reading the Matrix.
Some of those sounds were eerie. Hearing them during a war would be even eerier.
One problem with WWII naval radar was the learning curve on how to distinguish between the target ships and islands. This played a part in a number of night battles.
Time of flight and azimuth were the 1940s-1960s qualities. That was all the tech of the era could distinguish.
In the 1970s we added Doppler shift.
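The Doppler shift that a mover imprints on the return is straightforward to compute: for a monostatic radar it is f_d = 2·v_radial/λ, with the factor of 2 from the out-and-back path. The carrier frequency below is an assumption:

```python
# Doppler shift for a monostatic radar: f_d = 2 * v_radial / wavelength.
# Numbers are illustrative assumptions, not any particular radar's.
c = 3.0e8               # m/s
carrier = 10.0e9        # Hz, assumed X-band carrier
wavelength = c / carrier    # 0.03 m

def doppler_shift(v_radial_ms):
    """Doppler shift (Hz) for a given radial speed (m/s)."""
    return 2 * v_radial_ms / wavelength

print(doppler_shift(15.0))   # ~1000 Hz: a truck at 15 m/s (54 km/h)
print(doppler_shift(0.5))    # ~33 Hz: something barely crawling
```

Even slow ground movers produce shifts of tens of Hz, which is easily separable from stationary clutter sitting at zero, though a slow target in a moving aircraft's clutter spread is exactly the hard case GMTI processing exists to solve.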
Now we can detect polarization, scintillation, and pulse-to-pulse variation in all the parameters. For pulse widths in the microsecond range and pulse repetition frequencies in the near- and low-MHz range.
That is an entire universe of additional information coming back. See Radar MASINT - Wikipedia for more.
As an example …
I’m guessing, but I bet moving tracked vehicles will imprint a distinctive polarization and Doppler signal on top of the vehicle’s main return that will be obviously different from the signal from a moving wheeled vehicle.
At least for airborne radars, the sensor is moving quickly so each sample is taken from a slightly different azimuth. Just as you gain more insight into a scene with a stereo camera than a single shot, you can extract additional features about target geometry from the different points of view.
Finally, phased array or AESA type sensors give vastly enhanced spatial resolution. Metaphorically speaking, a WWII pulsed radar took a one-pixel image 10x/second. A current tech leading edge radar takes a multi-megapixel image at 10,000 frames per second.
If you’re thinking about amplitude and Doppler, you’re almost 50 years behind the times.
In terms of range through an atmosphere full of things that block or scatter visible and IR radiation, yes.
Active illumination also has the advantage that you know what signal you sent out, so you can compare that to what came back. Relying on passive illumination you’re dealing with a bunch of extra vagaries. e.g. Trucks in the shade and trucks in the open sunlight look very different.
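That comparison against the known transmitted waveform is the classic matched filter: cross-correlate the received signal with the pulse you sent, and the echo shows up as a sharp peak at the round-trip delay even when it is buried in noise. A toy sketch with an assumed 5-sample pulse:

```python
import numpy as np

gen = np.random.default_rng(1)

# Known transmitted pulse (an assumed short binary-phase pattern).
pulse = np.array([1.0, -1.0, 1.0, 1.0, -1.0])

# Received signal: background noise with the echo arriving at sample 40.
received = gen.normal(0.0, 0.1, 100)
received[40:45] += pulse

# Matched filter: cross-correlate with the known pulse; the peak of the
# correlation marks the echo's delay, hence the target's range.
corr = np.correlate(received, pulse, mode="valid")
print(int(np.argmax(corr)))  # 40
```

A passive sensor has no such reference waveform to correlate against, which is a big part of why "you know what you sent out" is such an advantage.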
As well, until fairly recently visible spectrum camouflage worked well and there was no practical way to do RF camouflage.
The flip side of that advantage is that the return strength declines as the 4th power of the range. So you need a powerful illuminator to “see” much. Conversely, the Sun is real bright and you don’t need to spend payload to carry or power it. Shame about the night time though.
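The 1/R^4 scaling comes from each leg of the trip spreading as 1/R^2: out to the target, then back. A quick sketch of how brutally that bites, with an arbitrary 10 km reference range:

```python
# Radar range equation scaling: received power falls as 1/R^4
# (each leg of the out-and-back trip spreads as 1/R^2).
def relative_return(r_km, ref_km=10.0):
    """Return strength relative to the same target at ref_km (arbitrary)."""
    return (ref_km / r_km) ** 4

print(relative_return(20.0))   # 0.0625: doubling the range costs 16x
print(relative_return(250.0))  # ~2.6e-06: why long range needs a big illuminator
```

Going from 10 km to 250 km costs almost six orders of magnitude in return strength, which is what the E-8's big antenna and transmitter power are buying back.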
We are just now getting to the place where good software can fuse the data from multiple sensors using multiple detection methods. And we can build enough precision into our pointing devices that we can reliably align, say, a simultaneous radar and IR snapshot of the same scene.
That multi-sensor fusion also works to defeat camouflage in all its forms, including RF stealth. We can now combine visible and IR scenes that “see through” all the standard camouflage netting that armies around the world have used since WWII to hide vehicles & encampments from aerial recon.
Fooling one sensor’s spectrum band is X difficulty. Fooling three sensors is X^3 as hard.
You know, reading your and other posters’ posts here makes me think that in an era of easy detection and guided weapons, attrition rates in a peer conflict are going to be off the charts. :eek: Even sans nukes, it seems a unit or formation could spend days at most in combat before losses render it useless.
Agree completely. Between high lethality and small quantities of both weapons and targets vs. say WWII volumes of munitions and formations, we are approaching an era where even conventional all-out war is more of a wargasm than a sustained conflict.
IMO the whole conduct of the war begins to resemble Jutland: within a day of one side gaining the numerical upper hand the other is simply obliterated.
The difference between that and Jutland of course is that before being defeated in detail, any of these peers have recourse to nuclear weapons.
For once xkcd is not exactly on-point. But close enough to warrant a cite: xkcd: World War III+. The first part of that war will be fought with ultra-lethal, ultra-high-tech weapons. After that, all that’ll remain on the losing side is scattered individuals with small arms and sharp sticks.