Why Is Space Photography in B&W?

Reflect back on images you saw of downtown New York after the Twin Towers came down on 9/11, or on what northern Japan or Indonesia looked like after their respective tsunamis, or what downtown Aleppo looks like after it’s been shelled yet again. Pretty much shades of gray and brown, with not a whole lot of color. That’s kind of the default condition of nature. Most things we THINK of as colorful are literally superficial — lipstick on the dull underlying reality — and created artificially, by us, because we like color. The exceptions are things that WANT to attract our attention, like fruits looking to cultivate us as their unwitting reproductive agents.

Deep space probes take a long time to finance, design, build, test, and replace when the launch vehicle blows up, and then they take years to get where they're going…so you should start out assuming the thing was built with technology 15 years or so out of date by the time you see the pictures.

Digital camera sensors have become ubiquitous in the last few years, but they weren't when those probes were designed. A decade ago, CCD sensors were not all that great, and CMOS sensors were junk. A video camera that would fit in a pack of cigarettes was the stuff of spy novels two or three decades ago. CMOS and CCD sensors are also not great in a high-radiation space environment, and solid-state electronics in general don't like the temperature extremes they are likely to need to tolerate. The processor that is cheap and ubiquitous on your desktop probably won't work and/or last in space without drastic changes. “Rocket science” is a real thing, not just a term of derision. The market for space-rated electronics is minuscule, so parts are outrageously expensive and often years behind the consumer market.

Some of the sensors are not really image sensors at all. They are a single-point sensor, one pixel if you will. This is then scanned by rotating mirrors and/or spacecraft rotation/motion to produce an image. I'm not sure of current practice, but Earth weather satellites used to do this. It seems clumsy, but it avoids mismatches in sensitivity between pixels, and you only have to make that one pixel work. Even small mismatches between pixels would add noise to the data. Also, there are some tricks that can be done to tease more resolution out of this than is possible with a multi-celled sensor, which can never resolve better than its pixel pitch, and which has tiny blind spots between pixels…huge blind spots in color sensors, because the red and green sensors (say) are blind to blue.
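
To make the single-pixel scanning idea a little more concrete, here is a rough Python sketch (toy scene, made-up gain and noise numbers, nothing to do with any real instrument): one detector is stepped across the scene sample by sample, so any gain error is the same scale factor everywhere, while a simulated multi-pixel array with slightly mismatched per-pixel gains picks up fixed-pattern noise across the frame.

```python
# Toy illustration of a single-point "spin-scan" imager vs. a multi-pixel array.
# All numbers (scene size, gains, noise levels) are invented for the example.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic grayscale scene the detector will sweep across (a bright disk).
H, W = 64, 64
yy, xx = np.mgrid[0:H, 0:W]
scene = ((xx - W / 2) ** 2 + (yy - H / 2) ** 2 < (W / 3) ** 2).astype(float)

def spin_scan(scene, detector_gain=1.0, read_noise=0.01):
    """Image the scene one sample at a time with a single detector.

    Every pixel is measured by the SAME detector, so a gain error is a
    uniform scale on the whole image, not pixel-to-pixel noise.
    """
    img = np.empty_like(scene)
    for row in range(scene.shape[0]):        # spacecraft spin: one sweep per row
        for col in range(scene.shape[1]):    # mirror / motion steps along the sweep
            img[row, col] = detector_gain * scene[row, col] + rng.normal(0, read_noise)
    return img

def array_sensor(scene, gain_mismatch=0.05, read_noise=0.01):
    """Image the scene with an array whose cells differ slightly in gain.

    The per-pixel gain mismatch shows up as fixed-pattern noise.
    """
    gains = 1.0 + rng.normal(0, gain_mismatch, size=scene.shape)
    return gains * scene + rng.normal(0, read_noise, size=scene.shape)

single = spin_scan(scene)
array = array_sensor(scene)

# Inside the uniformly bright disk, the scanned image comes out much flatter:
disk = scene > 0
print("std inside disk, single-detector scan:", single[disk].std())
print("std inside disk, mismatched array:    ", array[disk].std())
```

The nested loops stand in for the spin-plus-mirror scan geometry; a real instrument also has to deal with timing, pointing knowledge, and calibration, which this sketch ignores.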

But yeah, otherwise what the article said.