It sounds like we’re doing about all we can do with visible light (or are in the process of building it) for the time being. The bigger we make the array, the longer the light pathways have to be, which means larger overall size, higher cost, and potentially more errors.
Would’ve been nice if we could have turned, say, the Dakotas into one big telescope.
To get angular resolution you have to in some manner or other record the direction the photons arrived from. That means either using standard optics to form an image or, if we were treating light like very short radio waves, some form of holography. As Chronos mentioned, that would require very precise timing control.
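For a sense of scale, the classical resolution limit for a single aperture is roughly θ ≈ 1.22 λ/D. A back-of-envelope sketch (the 550 nm / 10 m numbers are illustrative, not tied to any particular instrument):

```python
import math

def diffraction_limit_arcsec(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution of a circular aperture, in arcsec."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600

# Green light (550 nm) through a 10 m mirror:
print(diffraction_limit_arcsec(550e-9, 10.0))   # ~0.014 arcsec
```

This is why the push is toward bigger apertures or longer baselines: the wavelength is fixed by what you're observing, so D is the only knob you have.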
I like the idea of using quantum trickery to get around the classical limit on resolution, because for most purposes the timing of when the photons hit our detector isn’t relevant. If QM allows us to trade that information for better resolution, great.
Note that the Very Large Array radio telescope has antennas arranged in a “Y” configuration so that you get more of a two-dimensional range of data. The antennas are also on tracks so they can be spread out or tightly clustered.
I toured the UofA Mirror Lab when they were working on the mirror of this telescope, and saw the un-silvered mirror as it was nearing final shape. It was one of the more complicated projects they had ever done, since it’s a dual-curvature mirror.
Note that that array is a piker size-wise compared to the Very Long Baseline Array, which has ten 25 meter radio scopes located from the Virgin Islands to Hawaii. If they want better sensitivity, they combine it with other scopes, such as Green Bank in WV or Arecibo in Puerto Rico. It can also be connected to the European VLBI Network to make the ultimate (until we get one into space) very long baseline radio telescope.
One thing you can do with a distributed bunch of small telescopes is observe Cherenkov radiation in the atmosphere caused by cosmic rays. With a suitable array you can work out the 3D distribution of the shower of particles created as the cosmic ray (really a fast-moving particle) hits the atmosphere. This is the basic idea behind the fly’s eye observatories. However the telescopes are little more than light buckets with a handful of photomultiplier detectors. They don’t image anything as a single entity. Nor are they arranged in a circle. But it gets close to what the OP was describing, and may have been the source of the memory.
The claim to fame of these systems is that the U of Utah Fly’s Eye observed the OMG particle. (Another big group working in this area was here in Adelaide where I live, and I used to know a lot of the guys who worked on the project. They also worked with UU on the Fly’s Eye and HiRes Fly’s Eye.)
And in addition to cosmic rays, the Fly’s Eye has also been used for that “amplitude interferometry” I mentioned before, to study other celestial objects.
One of the optical arrays under construction mentioned in the link posted by markn+ (post #4) is the Magdalena Ridge Observatory Interferometer located here in New Mexico. In fact it’s only 20-30 miles east of the Very Large Array, but it’s at over 10,000 feet on South Baldy in the Magdalena mountains. The VLA is at ~7,000 feet in the plains west of the mountains. Here’s a link to the MRO main page.
The Fairchild CCD485 has 4096 x 4097 pixels. So probably not a typo. I haven’t worked with this CCD and haven’t read the data sheet carefully enough to find out why. It may be because the readout electronics take some time to stabilize, so reading out and throwing away the first line can be beneficial. Some CCDs have extra non-active pixels in the shift register for this reason.
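If that guess is right, the handling is trivial on the software side. A toy sketch (this is my assumption about why the extra line exists, not anything from the CCD485 data sheet):

```python
import numpy as np

# Stand-in for a raw 4097 x 4096 readout; the first line read out may be
# unstable while the readout electronics settle, so we just drop it.
raw = np.zeros((4097, 4096), dtype=np.uint16)
frame = raw[1:, :]          # discard the first line
print(frame.shape)          # (4096, 4096)
```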
The article in the OP was probably talking about the Keck Observatory.
One way to model systems like this is as a single big telescope, where you cut away most of the mirror and are just left with a few pieces. Theoretically you get similar resolution, but you lose light-gathering capacity. So you have to collect hours of light instead of minutes to get the same number of photons.
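The tradeoff in that last sentence is just inverse proportionality: photon count scales with collecting area, so keeping a fraction f of the full aperture means roughly 1/f longer integration. A minimal sketch (the 5% fill fraction is a made-up example):

```python
# Keeping only a fraction f of the full mirror area means ~1/f longer
# integration for the same total photon count.
def exposure_scale(fill_fraction):
    return 1.0 / fill_fraction

# If the sparse array keeps 5% of the full-aperture area:
print(exposure_scale(0.05))   # 20x longer exposures
```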
Darren Garrison posted the links to active and adaptive optics. Too bad they didn’t have active on Hubble. The adaptive example I’ve heard of used a green laser. Fire a pulse, look at what gets scattered and reflected, then bend each mirror to get the laser into focus.
After LIGO, combining really distant telescopes should be a doddle.
Another cool type of telescope is to launch a mylar-coated balloon into orbit. Get the right shape for the balloon and you can just shoot a little package into orbit and then inflate your mirror.
Nor does it need active, since it’s in zero g. If you’re referring to the problem with the mirror shape, that was obviously unanticipated, since if they had known about it before launch, the proper response would have been just to make it the right shape to begin with. And once the problem was known, it was much more easily fixed with a secondary optic than it would have been by making any change to the primary, especially a complicated change like putting in active elements.
This will never work.
It might be OK for spectral analysis, but the surface will never be even remotely accurate enough to use as an imaging mirror. When mirrors are ground for telescopes, they are polished to 1/4 wave of HeNe light. A balloon’s surface isn’t going to even be in the same city, let alone ballpark.
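To put a number on that tolerance, a quarter wave at the HeNe line (632.8 nm) works out to:

```python
# Quarter-wave figure tolerance at the HeNe laser line:
hene_nm = 632.8
tolerance_nm = hene_nm / 4
print(tolerance_nm)   # ~158 nm
```

So the polished figure has to hold to around 158 nm, a small fraction of the wavelength of visible light. An inflated mylar surface will be off by orders of magnitude more than that.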
A balloon optic might also be good for photometry (measuring light intensity as a function of time), as you don’t need any image quality for that either. In fact, back when I was doing photometry work, we deliberately kept the telescope a little out of focus, so we wouldn’t get a hot spot on the detector. And when you’re interested in how your source changes with time, you can’t just use longer exposures; there’s no substitute for a huge honking light bucket.
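The reason longer exposures don't help can be sketched with photon-counting statistics: if the source varies on timescale t, each exposure is capped at t, so SNR goes as the square root of (rate × area × t) and only a bigger collecting area buys you more. All numbers below are made up for illustration:

```python
import math

def snr(photon_rate_per_m2, area_m2, exposure_s):
    # Shot-noise-limited SNR for N detected photons is sqrt(N).
    n = photon_rate_per_m2 * area_m2 * exposure_s
    return math.sqrt(n)

# Same source, same 0.1 s cadence, two collecting areas:
print(snr(100.0, 1.0, 0.1))    # small scope: SNR ~ 3.2
print(snr(100.0, 25.0, 0.1))   # 25x the area: SNR ~ 15.8
```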