Many of the stunning pictures we see from Hubble and the like are false-colour pictures, where oxygen is coloured blue and so on. Is this available to the amateur? Say I had a decent telescope: could I fit a CCD eyepiece that would feed to my PC, which would automatically false-colour the image?
You need more than a CCD; you need to do a spectral analysis of the light. See the "Amateur spectroscopy" section of Wikipedia's "Astronomical spectroscopy" article.
Also remember how the human eye works. We have rods and cones: rods see better in low light but only in black and white, while cones see colour (different cones are sensitive to red, green, or blue). So when you look through a telescope at a faint object, you see it in grey scale; Jupiter tends to look slightly off-white in a smaller telescope. A camera with colour filters, or colour film, doesn't have this low-light limitation. Early space probes used only two colour filters, and colour was reconstructed from those. Published images vary: some are true-colour photos, some have the colour exaggerated, and some are entirely false-coloured.
And those are just in the visible range. A lot of the really spectacular false-colour images also combine infrared, ultraviolet, or X-ray images. For example, this image of supernova remnants (to pick an example from APOD) combines X-ray data from the Chandra observatory with visible-light data from the Hubble telescope.
Imaging non-visible wavelengths of light requires very specialized cameras, though an amateur can photograph near-IR and near-UV with filters and a good digital camera.
False-colour rendering of multi-spectral data is common; another place you see it is in Earth-resources satellite images. What the OP describes with "oxygen is coloured blue" is another example of this. The description just skips a few steps that make what is going on more understandable.
When the description says "oxygen" it doesn't mean oxygen gas as we normally encounter it. Rather, it means a specific emission line of doubly ionised oxygen (the O III line) has been selected for in one of the imaging devices, and that particular channel has been allocated to visual blue.
Could you do this yourself? Yes. Appropriate filters are available to amateur astronomers. However, you would need to combine the various channels you chose into the final false-colour image yourself. Although there is software dedicated to this, there isn't a universal one-click solution.
You don't need to take an actual spectrum though, just 3 filters. False-colour images are usually made by taking images through 3 different filters; each filter's image is then assigned to red, green, and blue respectively. The 3 filters can be broadband or narrowband (i.e. only transmitting a specific emission line). Here are a number of examples taken by amateur astronomers.
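The channel assignment described above can be sketched in a few lines. This is a minimal illustration, not a full processing pipeline: it assumes you already have three calibrated narrowband frames as 2-D arrays, and it uses the common "Hubble palette" convention (S II to red, H-alpha to green, O III to blue) as an example mapping.

```python
# Minimal sketch: map three narrowband frames to RGB channels.
# Assumes frames are already calibrated 2-D arrays of equal shape.
import numpy as np

def stretch(channel):
    """Normalise a frame to 0..1 with a simple linear stretch.
    Real processing would use a nonlinear (e.g. asinh) stretch."""
    lo, hi = channel.min(), channel.max()
    return (channel - lo) / (hi - lo) if hi > lo else np.zeros_like(channel)

def false_colour(s2, ha, o3):
    """Stack three narrowband frames into an (H, W, 3) RGB image,
    using the Hubble-palette assignment S II->R, H-alpha->G, O III->B."""
    return np.dstack([stretch(s2), stretch(ha), stretch(o3)])

# Demo with synthetic 4x4 frames standing in for real exposures:
rng = np.random.default_rng(0)
s2, ha, o3 = (rng.random((4, 4)) for _ in range(3))
rgb = false_colour(s2, ha, o3)
print(rgb.shape)  # (4, 4, 3)
```

The choice of which line goes to which channel is purely aesthetic, which is why these images are qualitative rather than quantitative.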
There are computer-controlled filter wheels that “automate” this process. Although in astrophotography, “automatic” is a relative term; most of these images represent dozens of raw images taken over hours, if not tens of hours, then processed on a computer.
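The "automated" part is essentially a capture loop: step the wheel to a filter, take a batch of exposures, repeat. The sketch below illustrates only the control flow; `FilterWheel` and `Camera` here are hypothetical stand-ins, not any real driver API.

```python
# Sketch of the capture loop a computer-controlled filter wheel automates.
# FilterWheel and Camera are hypothetical stubs, not a real hardware API.
class FilterWheel:
    def set_filter(self, name):
        self.current = name  # a real driver would rotate the wheel

class Camera:
    def expose(self, seconds):
        return f"frame({seconds}s)"  # a real camera would return pixel data

def capture_sequence(wheel, camera, plan):
    """plan: list of (filter_name, exposure_seconds, repeat_count)."""
    frames = {}
    for name, seconds, count in plan:
        wheel.set_filter(name)
        frames[name] = [camera.expose(seconds) for _ in range(count)]
    return frames

frames = capture_sequence(FilterWheel(), Camera(),
                          [("Ha", 300, 12), ("OIII", 300, 12), ("SII", 300, 12)])
print(sum(len(v) for v in frames.values()))  # 36 raw frames for later stacking
```

A plan like the one above (12 five-minute exposures per filter) already represents three hours of imaging, which is why "automatic" is relative.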
Well yes, but I was wondering if there was computer software that could take the captured image, isolate the required spectral lines, then colour them appropriately and present the image.
Seems like the answer is no, though the limitation may be on the capture device rather than the computer.
Certainly not all in one take. It would be possible in principle to make a "colour" CCD with the narrowband filters built in, the same way we make CCDs that see in human-like colours. But nobody would ever do this, since it would be a lot of work for little benefit: if you're doing serious science, you want proper filters with a black-and-white CCD, and if you're not, you won't care enough about it to bother.
I was hoping you could get a CCD that is sensitive to a sufficient range of spectra and a computer program to sort it all out to produce the false colour images we love to see.
CCDs can be sensitive to the range, but what you want is selectivity. Each individual sensitive element of a CCD (or CMOS) sensor responds to the entire visible spectrum and extends some way into the IR and UV. To make a colour sensor, filters are added over the top of each element, so that any individual element is sensitive only to a sub-range of wavelengths: red, green, or blue light. The individual elements are typically laid out in a Bayer pattern, so that software in your camera or computer can interpolate the individual element values and come up with a colour for each final pixel. Clearly this design balances loss of spatial resolution against the ability to resolve visual colours.

In principle you could make the filter characteristics much tighter, with more individual spectral channels, but this would come at the cost of further reducing the sensor's spatial resolution, and in general there is simply no point: there is no useful scientific value, and no commercial value either. When you move to multi-spectral work, there is scientific value in a much larger number of channels. Earth-sensing satellites might have on the order of 100 channels, or a specific scientific need will define the important bands for the area.
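The Bayer sampling described above can be made concrete with a few lines of NumPy. This is an illustrative sketch assuming the common RGGB layout: each sensor element keeps only one of the three colour values, and demosaicing software must interpolate the other two.

```python
# Sketch of how a Bayer colour filter array samples an image (RGGB layout).
# Each element records only one colour value; the rest must be interpolated.
import numpy as np

def bayer_mosaic(rgb):
    """rgb: (H, W, 3) image. Returns an (H, W) one-value-per-pixel mosaic."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red at even rows, even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green (half of all elements)
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue at odd rows, odd cols
    return mosaic

img = np.random.default_rng(1).random((4, 4, 3))
m = bayer_mosaic(img)
print(m.shape)  # (4, 4): two thirds of the colour data are discarded
```

This is the resolution/colour trade-off the answer describes: the mosaic has full spatial extent but only one spectral sample per site.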
An astronomer will typically use a raw CCD sensor, with no filters on the sensor elements (better known as a black-and-white sensor), and take multiple images with appropriate filters placed over the entire image, thus retaining maximum resolution and sensitivity. Since almost all astronomical subjects do not change over the timespans involved, this is viable. The combination into a false-colour rendition is done right at the end of the entire scientific process; indeed, false-colour images are usually not of quantitative scientific value. Rather, they are qualitative or illustrative.
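One reason many exposures per filter are taken is that the monochrome frames can be averaged ("stacked") to beat down noise before the final colour assignment. A minimal sketch, using synthetic noisy frames to stand in for real exposures:

```python
# Sketch of stacking: averaging N monochrome exposures reduces random
# noise by roughly 1/sqrt(N), which is why astrophotographers take
# dozens of frames per filter before any colour mapping happens.
import numpy as np

def stack(exposures):
    """Average a list of (H, W) frames into one lower-noise frame."""
    return np.mean(exposures, axis=0)

rng = np.random.default_rng(2)
truth = np.ones((8, 8))                                   # ideal signal
noisy = [truth + rng.normal(0, 0.5, (8, 8)) for _ in range(100)]
stacked = stack(noisy)

# The stacked frame is much closer to the true signal than any single one:
print(np.abs(noisy[0] - truth).mean() > np.abs(stacked - truth).mean())  # True
```

Real stacking software also aligns frames and rejects outliers (satellite trails, cosmic ray hits) before averaging; this sketch shows only the core idea.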
Ideally, what you'd like would be a full spectrum at every point in your image and at every time. Nobody has yet figured out a way to make an instrument that does this, though, so you always have to accept some compromise or another. Basically, you can take 3D slices through the hypothetical ideal 4D "data cube" (two position axes, wavelength, and time), and use what you know about the thing you're looking at to make educated guesses about the parts between your slices.

You can, for instance, take images with a handful of cameras at once, with a filter of some sort on each: that gives you all your position information in both axes and all of your time information, but only a little information about the spectrum. Or you can use a slit spectrograph, which gives you detailed information on one position axis, time, and spectrum, but little or no information on the other position axis. Or you can scan your slit across the thing you want to image, which gives you full information about the spectrum and one of your position axes, and information about the other position axis and time mixed together (it's effectively a diagonal slice). And so on.
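The slicing idea above can be illustrated directly with array indexing. This is a toy sketch: the cube below is random data standing in for the hypothetical ideal (x, y, wavelength, time) data cube, and each instrument's compromise is just a different slice through it.

```python
# Sketch of the hypothetical 4-D "data cube" with axes (x, y, wavelength, t).
# Each real instrument records only a lower-dimensional slice of it.
import numpy as np

rng = np.random.default_rng(3)
cube = rng.random((16, 16, 32, 10))  # x, y, 32 wavelength bins, 10 time steps

# Filter imaging: full (x, y, t), spectrum collapsed to one narrow band.
band = cube[:, :, 10:14, :].sum(axis=2)         # shape (16, 16, 10)

# Slit spectrograph: full (y, wavelength, t) at one fixed x position.
slit = cube[8, :, :, :]                         # shape (16, 32, 10)

# Scanned slit: step x along with time, mixing the x and t axes
# (the "diagonal slice" mentioned above).
scan = np.stack([cube[t, :, :, t] for t in range(10)])  # shape (10, 16, 32)

print(band.shape, slit.shape, scan.shape)
```

Each slice trades away one axis; reconstructing the full cube from such slices is exactly the "educated guessing" step the answer describes.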