The straight dope on Hubble Telescope images

Apparently today is the Hubble Telescope’s birthday of sorts. I am very much a fan of the quality of images gathered by this device, especially post-fix.

Here’s my question. Take a look at this image. The Eagle Nebula photo, apparently. It’s more than 9 light years tall. :eek:

I remember reading somewhere that these gorgeous multi-colored images are very much a creation of those folks who initially download and handle them. That the depth, layering and color gradations are actually added in on Earth, and that the images captured by the telescope do not really look like this. In many cases, apparently, the dust clouds or “backlight” are not visible to the Hubble, but are added in based on data gathered by the Hubble and other instruments, such as the Earth-based V.L.A.

Do the photos really look like this in their “native state”, or are they heavily colorized and composited? Yes, yes, I know: each white point of light does NOT have a cross in it. That’s clearly an after-effect created in a piece of software.

Cartooniverse

I remember reading somewhere that these gorgeous multi-colored images are very much a creation of those folks who initially download and handle them.

I am not an astronomer, but from everything I have read and heard, you are correct about the images being colorized on Earth. Hubble, and most if not all telescopes, take several exposures of each object, each at a different wavelength. So a Hubble image consists of several black-and-white views of the same object. They are assigned colors and overlaid to produce the pretty picture that keeps congressional interest in astronomy at the level it is.

The “reality” problem is more basic than color. One astronomer explained it this way (I am paraphrasing): of course the picture is real. It is the same image that anyone with 100-inch lenses in their eyes would see. The detail and angular dimensions of the image are a result of the optics of the telescope, and the human eye simply can’t duplicate that. So is the image real, or is it made up by the telescope? It’s data, in this case wonderful and pretty data. It is as real as any other data set.

No need to use computers. One trick I’ve learned for getting those nice streaky stars is to place multiple rows of transparent fishing line across the telescope opening, crossed at 90-degree angles.

Of course, I suspect that’s not how they did it on the Hubble.

But that’s the way “normal” color photography, both film and digital, always works.

-grin- Try using a real silk stocking. Buy one. Cut a circle just big enough to fit around the end of the lens. Stretch tight. Hold with rubber band or rubber cement.

Yes, rbroome, this is just what I have read. However… I accept as completely real an image that my eye cannot resolve by itself. I’m a professional camera operator. I’d damned well be willing to make that visual leap!! :wink:

During my junior year as a physics major, we had a lab where we used a remote 0.9-meter telescope on Kitt Peak to take images of star clusters. We received images straight from the telescope, and they had the crosses on bright points of light.

In some ways these pictures may be fake to you, but in some ways they are not.

I am guessing at this, but let’s say they took the picture in the infrared spectrum. They decide that light in each wavelength band will be shown as red, green, or blue. I would guess they are saying something like this:

4000nm – 5000nm light will now be reddish
3000nm – 4000nm light will now be greenish
2000nm – 3000nm light will now be bluish

From these they can make a color picture. Yes, the parts that are different colors in the picture really are different colors, just not the ones you see.
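
For the curious, here’s a minimal sketch of that channel-assignment idea in Python. The 2x2 “exposures”, the normalization, and the variable names are all made up for illustration; this is not Hubble’s actual pipeline:

[code]
# Assumes three grayscale exposures taken through three filters:
# longest wavelength -> red, middle -> green, shortest -> blue.
import numpy as np

def false_color_composite(long_band, mid_band, short_band):
    """Stack three grayscale exposures into one RGB image (values 0..1)."""
    def normalize(img):
        img = img.astype(float)
        lo, hi = img.min(), img.max()
        return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
    return np.dstack([normalize(long_band),
                      normalize(mid_band),
                      normalize(short_band)])

# Toy 2x2 "exposures" standing in for the three wavelength bands above:
band_r = np.array([[1.0, 2.0], [3.0, 4.0]])   # e.g. 4000-5000nm
band_g = np.array([[4.0, 3.0], [2.0, 1.0]])   # e.g. 3000-4000nm
band_b = np.array([[0.0, 1.0], [1.0, 0.0]])   # e.g. 2000-3000nm
rgb = false_color_composite(band_r, band_g, band_b)  # shape (2, 2, 3)
[/code]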

Here’s an excellent article on false-color astronomical images.

The crosses are caused by diffraction of light by the support structure that holds the telescope’s secondary mirror.

The color is “false” in the sense that it doesn’t correspond to what the human eye sees. But it’s always generated from the data; nobody would colorize an image manually. One way to do this, as already explained, is to take images in three different wavelengths. The shortest wavelength image is used as the blue channel, the middle one as green and the longest as red. All three might be in radio wavelengths, or all three in X-rays.

Other times, you just have one image and use false color to bring out the subtle variations of brightness. That is, instead of displaying an image with a black-gray-white color scale, it’s displayed in a black-red-white scale, or even black-blue-red-yellow-white scale. (Or think of it as drawing contours of brightness, and coloring each contour a different color.) This may be “false color” but it’s “real” in the sense that no features have been manually added.
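
A quick sketch of what that single-image mapping might look like, with a made-up black-red-white ramp (real software would use a standard colormap; nothing here is any observatory’s actual code):

[code]
import numpy as np

def black_red_white(gray):
    """Map a 0..1 grayscale array to RGB along black -> red -> white."""
    gray = np.clip(gray, 0.0, 1.0)
    red = np.clip(2.0 * gray, 0.0, 1.0)          # red ramps up first
    other = np.clip(2.0 * gray - 1.0, 0.0, 1.0)  # green/blue follow, giving white
    return np.dstack([red, other, other])

gray = np.linspace(0.0, 1.0, 5).reshape(1, 5)
print(black_red_white(gray))  # dark -> black, mid -> red, bright -> white
[/code]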

It’s the same with spatial features - there is a lot of processing done to bring out the features, but no features are manually added or removed. A computer algorithm that detects specific types of artifact (e.g. dust on CCD) and removes them is acceptable; loading the image into Photoshop and using a Clone tool to hide unwanted features would not be acceptable.
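
As a toy illustration of that distinction, here’s the sort of algorithmic cleanup that would be acceptable: flag pixels that stick out from their neighborhood and replace only those. The threshold and the 3x3 window are arbitrary illustration values, not anyone’s real pipeline:

[code]
import numpy as np
from scipy.ndimage import median_filter

def remove_hot_pixels(img, threshold=5.0):
    """Replace pixels that differ from their 3x3 local median by more
    than `threshold` standard deviations with that median value."""
    med = median_filter(img, size=3)
    resid = img - med
    bad = np.abs(resid) > threshold * resid.std()
    cleaned = img.copy()
    cleaned[bad] = med[bad]   # only flagged outliers are touched
    return cleaned
[/code]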

As for the crosses on each star, that is an optical artifact created by the telescope. The Hubble, like most reflecting telescopes, has a secondary mirror sitting in the middle of the aperture and supported by four arms (“spider”). Light is diffracted by these spider arms and scattered into a cross shape.

Another good article

[hijack]
Incidentally, those two Hubble birthday images were supposed to be embargoed until 10:00 am eastern tomorrow morning. I imagine someone at STScI was pissed to see them show up two days early.
[/hijack]

Ahhhhhhhhhhh yes, scr4 and others. Please don’t misunderstand me. I do get it, that the photos make use of data and the colors are “representational”. That was the reason for my O.P.

However, when I take a snapshot of the Aurora Borealis, I get what I get. I see a photo akin to the one in my O.P., and I wonder: is it an accurate portrayal based on data, much akin to the splendid “fractal images” that Google showed us a few years ago?
It is sounding as though the images presented by NASA are not photographs in the traditional sense. They do not mislead (at least, not to me), although perhaps they do. Were I floating, say, 6 light years away from the nebula that is 9 light years “tall”, would I see all of the color and apparent depth that is seen in the image? I would imagine that since it’s about the largest thing I’d ever see, it would have depth. However, have I been sold a bill of goods when it comes to just what these things look like?

How will we know, until we get close enough physically to a very large cosmic mass?

Even looking through a telescope, a view of the Andromeda Galaxy does not compare at all to the stunning photographs of it. Those photos take relatively long exposures to produce. But adjustments to shutter speed are done all the time.

Here’s an exercise. Look at the Andromeda Galaxy, naked eye, and try to imagine what it would look like if you were a lot closer to it. If you were twice as close to it, it would be four times brighter–but the area it would cover in the sky would be four times larger, so the brightness of any patch (surface brightness) would not significantly increase.
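
The arithmetic, in a few lines (unit-free toy numbers, just to show why the surface brightness doesn’t change):

[code]
# Total flux falls off as 1/d^2, but the solid angle the galaxy covers
# on the sky also grows as 1/d^2 as you approach, so flux per unit
# solid angle (surface brightness) is constant.
for d in [1.0, 0.5, 0.25]:        # distance, arbitrary units
    flux = 1.0 / d**2             # inverse-square law
    solid_angle = 1.0 / d**2      # apparent size grows the same way
    print(d, flux / solid_angle)  # always 1.0
[/code]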

What if we were a lot closer? We are closer to some galaxies. The Magellanic Clouds are small galaxies close to us, but their surface brightness is not significantly different from that of the Andromeda Galaxy, or that of the galaxy closest to us: the Milky Way! The Magellanic Clouds look like small detached patches of the Milky Way (so I’m told :slight_smile: )

Not necessarily. If you use a digital camera, you can load the image into Photoshop and play with it: tweak the color and contrast, use any number of filters to sharpen the image and bring out details or give it a soft-focus artistic look. Even with a film camera, you could adjust the shutter speed, or use a filter, or modify the image that ultimately gets recorded on film in innumerable ways.

Then again, consumer cameras are generally set up to give you, as you say, what you expect, something similar to what you saw with your eye at the same time. That’s not the purpose of a telescope, obviously. :slight_smile:

No, you wouldn’t. Actually, from very close to or inside a nebula, you would have a very hard time seeing the nebula itself, because of the surface brightness issues that Mentlock mentions. We have eyeballs with an aperture of only about 7mm when we’re dark adapted, and our eyes refresh something like 30-60 times a second (that is, we have a built-in exposure time of 1/30 to 1/60 seconds). No matter how close you get to an astronomical object, what you see will never really match what a 1 meter telescope will record with a long exposure time. It’s also important to note that the cones in your eye, which provide color vision, only work with relatively bright light. Most astronomical objects look relatively bland, even through a telescope.
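
To put rough numbers on that (the pupil size and the eye’s “exposure time” are the figures above; the 10-minute telescope exposure is a hypothetical):

[code]
eye_diameter = 0.007     # meters (~7mm dark-adapted pupil)
eye_exposure = 1.0 / 30  # seconds (the eye's built-in "exposure time")
tel_diameter = 1.0       # meters
tel_exposure = 600.0     # seconds (a hypothetical 10-minute exposure)

area_ratio = (tel_diameter / eye_diameter) ** 2  # light-gathering area
time_ratio = tel_exposure / eye_exposure         # integration time
print(f"telescope collects roughly {area_ratio * time_ratio:.1e}x more light")
# ~3.7e8 -- why no eyeball view ever matches the long-exposure photographs
[/code]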

One can certainly calculate how much light is coming from the nebula. We know how dense it is and what it’s made of, and how it’s illuminated. It’s not a great mystery. It’s just that the answer (It would look faint and bland and fuzzy.) isn’t very exciting.

Luckily, I wasn’t looking for exciting. I was looking for the straight dope on it. :slight_smile:

Lest I be vague, I adore these images. I find them humbling, and stirring in many ways. I mean, c’mon. NINE LIGHT YEARS HIGH ??? And it fits on my computer monitor? That takes some mental work just to deal with.

My query was a scientific one, not an aesthetic one. Then again, I think that huge color prints of Ebola or Herpes viruses are beautiful too. :smiley: