What causes this effect in Google Maps?

If you search for “100 St Charles Rd, Pumphrey, MD” in Google Maps, you can see an airplane heading for BWI. When you zoom into the map, the plane in the photo appears to be shifted into different colors of the spectrum.

What causes this? I get that the plane is moving rapidly and the photo is taken from high altitude, but why this effect instead of a blur?

The pixels in a digital camera’s image sensor are typically read sequentially rather than all at once (this is called a ‘rolling shutter’). Usually that just makes parts of the image break up, but what appears to have happened here is that the camera sampled each component colour in sequence, and a fast-moving object like this plane isn’t in the same place for each sample.
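To illustrate (this is just a toy numpy sketch of the idea, not how any particular camera actually works): if a white object moves between three sequential single-colour samples, the recombined image gets coloured ghosts where the samples don’t overlap and white where they do. The sizes and shift are made-up numbers.

```python
import numpy as np

H, W = 60, 200
SHIFT = 10   # how far the "plane" moves between colour samples (made-up number)

def white_blob(x0):
    """One single-colour sample: a white rectangle starting at column x0."""
    frame = np.zeros((H, W))
    frame[20:40, x0:x0 + 40] = 1.0
    return frame

# Three samples taken in sequence; the object has moved between each one.
red   = white_blob(0 * SHIFT)
green = white_blob(1 * SHIFT)
blue  = white_blob(2 * SHIFT)

# Recombine the three single-colour samples into one RGB image.
composite = np.stack([red, green, blue], axis=-1)

# Where only one sample "saw" the object you get pure red/green/blue,
# where two overlap you get yellow or cyan, and where all three overlap, white.
print("pure red pixels:", int(((composite == [1, 0, 0]).all(axis=-1)).sum()))
print("yellow pixels:  ", int(((composite == [1, 1, 0]).all(axis=-1)).sum()))
print("white pixels:   ", int(((composite == [1, 1, 1]).all(axis=-1)).sum()))
```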

Thanks. That makes sense. I don’t know that much about digital photography, but I take it this is an RGB scheme, so there’s white in the center of the fuselage where all three colors overlap in the image, and separation of color where the wings are separate in each image (and yellow and cyan where the wings do overlap in two of the colors).

Looks to be, yes. I think it should be possible to split the image into RGB channels and recompose it to show the plane properly (the background will then exhibit the colour artifacting instead).

I’m going to try that at home this evening (I have an image editor that will split channels).

Link to the plane in question, for those playing along at home.

Here’s the image, split into RGB channels, then re-registered based on the outline of the plane in each separation:
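In case anyone wants to try the same thing, here’s roughly how that recombination can be done in Python with numpy and Pillow. The file name and the per-channel offsets below are placeholders; you’d estimate the real offsets by eye or by cross-correlating the channels against each other.

```python
import numpy as np
from PIL import Image

# "plane.png" is a placeholder for a crop of the image saved locally.
img = np.array(Image.open("plane.png").convert("RGB"))

# Per-channel (rows, columns) shifts needed to line the plane outlines up.
# These numbers are invented; estimate the real ones from the image itself.
offsets = {0: (0, 0),     # red, used as the reference channel
           1: (-3, 12),   # green
           2: (-6, 24)}   # blue

aligned = np.empty_like(img)
for ch, (dy, dx) in offsets.items():
    aligned[..., ch] = np.roll(img[..., ch], shift=(dy, dx), axis=(0, 1))

# The plane should now line up; the ground will show the colour fringing instead.
Image.fromarray(aligned).save("plane_registered.png")
```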

So what is the greyscale image? Isn’t greyscale just Red, Green, and Blue all set to the same intensity (but less than the maximum, which would be white)? It appears there are four distinct images in the original.

The whole plane is white, so wherever the three components happen to show part of the plane and overlap (even though they are not showing the same part of the plane), the result shows as white, not unlike the way the classic RGB combination diagram works.
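For reference, the full-intensity additive combinations work out like this (just the standard RGB mixing, nothing specific to this image):

```python
# Full-intensity additive combinations (the classic RGB diagram in numbers):
combos = {
    "red only":           (255, 0, 0),
    "red + green":        (255, 255, 0),    # yellow
    "green + blue":       (0, 255, 255),    # cyan
    "red + green + blue": (255, 255, 255),  # white, i.e. the overlapping fuselage
}
for name, rgb in combos.items():
    print(f"{name:18} -> {rgb}")
```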

Thanks for making the merged image. I should have been a little clearer about my greyscale question.

It appears there are four images of the plane: blue, green, red, and greyscale. It looks like blue was captured first, then green, then red. The greyscale is furthest in the direction the plane is moving, so this would be the last image made. As I understand it, greyscale is created by merging the three colors but at a lower intensity than what would create white.
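For what it’s worth, image software usually derives a greyscale value as a weighted sum of R, G and B rather than a straight equal mix; the BT.601 weights below are one common convention, shown here only as an illustration:

```python
# One common convention (ITU-R BT.601) for turning RGB into a greyscale value.
def to_grey(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

print(to_grey(255, 255, 255))  # 255.0 -> white
print(to_grey(128, 128, 128))  # 128.0 -> mid grey
print(to_grey(255, 0, 0))      # ~76   -> pure red comes out fairly dark
```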

I guess I’m asking now about how the camera made the greyscale image. Did it create the fourth image with the info from the first three? And why?

I apologize if I’m sounding pedantic, but I’m looking for a new position and digital imaging might be a possible job duty, so I’d like to know more about the process. I can use software to take an image, make minor adjustments, and save the file, but I’m curious as to what goes on under the hood, so to speak, when the image is captured.

I see it now - there’s a faint greyish outline of the plane ahead of the three colour versions (here’s a zoomed in version).

I’m not sure what that is, but it could just be a quirk of the image processing stuff that Google does when optimising the images.

Tangentially related, I found this little tool: RGB Explorer

It allows you to see what changing the intensities of the three colors does.

If it did that then the greyscale image would have been buried in the middle of the color ones, I think. So this must be a fourth captured image, taken last.

If I had to guess, I’d say you’re looking at a composite of four images: one high-resolution B/W image for luminance and three low-resolution, low-contrast color-filtered (red, green, and blue) images for chrominance, each taken separately. They’re aligned with the satellite’s more-or-less west-to-east motion taken into account, but the plane is moving in the opposite direction (plus there’s a bit of parallax error).

But that’s just a guess.
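To make the guess concrete, here’s a toy numpy sketch of that kind of composite: a sharp greyscale frame supplying the brightness, blurry colour frames supplying the colour. The random data and the Brovey-style combination step are just stand-ins, not a claim about what Google’s imagery supplier actually does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the captures being guessed at (all values in 0..1):
pan = rng.random((400, 400))            # high-resolution greyscale (luminance) shot
rgb_lowres = rng.random((100, 100, 3))  # low-resolution colour-filtered shots

# Upsample the colour data to the pan resolution by simple pixel repetition.
rgb = np.repeat(np.repeat(rgb_lowres, 4, axis=0), 4, axis=1)

# Brovey-style pan-sharpening: keep the colour ratios from the RGB frames,
# take the fine detail and brightness from the sharp greyscale frame.
intensity = rgb.mean(axis=-1, keepdims=True)
sharpened = np.clip(rgb * pan[..., None] / (intensity + 1e-6), 0, 1)

print(sharpened.shape)  # (400, 400, 3): a full-resolution colour composite
```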

Just a note: The aerial imagery is shot from a plane, not a satellite, everywhere except pretty remote areas.

Does this mean aerial cameras use mechanical filter wheels and take the R,G,B and luminance images separately, rather than using a CCD with a color (Bayer) filter? I can’t think of any other reason why the 3 colors aren’t taken simultaneously.

Could be, or it could be something to do with the way the data is pulled out of the CCD - it’s a sequential process, but I’m not sure if it’s ever done with colour channels taking priority over the normal row-at-a-time scanning.
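For contrast, here’s a minimal sketch of the standard RGGB Bayer layout: every photosite sees only one colour, but all three colours are captured in the same single exposure, which is why a Bayer sensor on its own wouldn’t produce this kind of colour separation.

```python
import numpy as np

# Standard RGGB Bayer layout: one colour per photosite, but every exposure
# samples all three colours at once, unlike a filter wheel.
H, W = 4, 8
bayer = np.empty((H, W), dtype="<U1")
bayer[0::2, 0::2] = "R"
bayer[0::2, 1::2] = "G"
bayer[1::2, 0::2] = "G"
bayer[1::2, 1::2] = "B"
print(bayer)
```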

So, on a completely different tack, what’s the reason why, when you zoom in past the fourth-largest image, the plane suddenly disappears and you’re clearly looking at a whole different picture (different exposure, and a subtly different angle, I think)?

I don’t see that when I google-map my own location.

The fully zoomed-in picture is higher resolution. If you kept zooming in on the same photo it would just get blurry, so they show an appropriate photo for each zoom level as a compromise between image file size and resolution.
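For a rough sense of the numbers, the usual web-mercator approximation for a 256-pixel tile gives the ground size of a screen pixel at each zoom level; past a certain level the provider has to switch to a higher-resolution source photo or it would just look blurry. The latitude below is roughly BWI’s, and the formula is only an approximation.

```python
import math

def metres_per_pixel(zoom, lat_deg):
    """Approximate ground resolution of a 256px web-mercator tile pixel."""
    return 156543.03392 * math.cos(math.radians(lat_deg)) / (2 ** zoom)

for z in (15, 17, 19, 21):
    print(f"zoom {z}: ~{metres_per_pixel(z, 39.2):.2f} m/pixel")  # ~39.2N, near BWI
```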

Edit: The images for Melbourne CBD don’t seem to get quite the same resolution as the OP’s link. The final image layer is not available I guess.

It’s pretty common for high-resolution cameras to be greyscale only. To get a full color image, they combine 4 actual shots:

  1. A normal greyscale shot (no filter)
  2. A shot with a red filter
  3. A shot with a green filter
  4. A shot with a blue filter

Obviously changing filters takes some amount of time (they physically move in front of the lens), but this method works great on static or near-static subjects (say, taking a photo of the ground from 30,000 feet).

But when you have something moving fast, that breaks down and you see what you see in Google Maps there: a “ghost” of each image layer.
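Here’s a toy numpy sketch of that failure mode under the four-shot scheme described above (the plane size, speed, and the naive combining step are all made up): the plane ends up in four different places, giving three coloured ghosts plus a bright colourless one from the unfiltered shot.

```python
import numpy as np

H, W = 40, 240
STEP = 30  # how far the plane moves between successive shots (made-up number)

def shot(x0):
    """One exposure: a white plane on mid-grey ground, starting at column x0."""
    frame = np.full((H, W), 0.4)
    frame[10:30, x0:x0 + 50] = 1.0
    return frame

# Four sequential exposures: blue, green and red filters, then unfiltered.
blue_shot, green_shot, red_shot, pan_shot = (shot(i * STEP) for i in range(4))

# Naive composite: colour from the filtered shots, brightness from the pan shot.
rgb = np.stack([red_shot, green_shot, blue_shot], axis=-1)
composite = np.clip(rgb * (pan_shot / rgb.mean(axis=-1))[..., None], 0, 1)

# The plane now appears in four places: red, green and blue ghosts from the
# filtered shots, plus a bright colourless ghost where only the pan shot saw it.
```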

I think Google just has a completely different aerial photo set, taken at a completely different time, for the more zoomed-in resolutions.