Sam Stone
Well, since no one has attempted to answer your question, I’ll give it a try.
Yes, from the little research I was able to do (Wikipedia and NASA sites), it seems they are trying to make long-baseline optical interferometry a reality. Among other things, it might be capable of revealing stellar surfaces, planets orbiting those stars, etc.
As far as the theoretical resolving power of an optical telescope goes, Dawes' limit puts it at about 1 arc second for a 4.5" telescope. So a 1-foot scope would have a resolution of about 0.38 arc seconds, and telescopes separated by 2 miles (about 10,000 feet) would have a resolving power of about 0.000038 arc seconds. Keep in mind, too, that diffraction-limited resolution scales with wavelength, so observations near the red end of the visible spectrum come out roughly twice as coarse as observations near the violet end.
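If you want to play with that scaling yourself, here's a minimal Python sketch of the arithmetic (assuming the usual empirical form of Dawes' limit, roughly 4.56/D arc seconds for an aperture D inches across):

```python
def dawes_limit_arcsec(aperture_inches):
    """Empirical Dawes' limit for a filled circular aperture, in arc seconds."""
    return 4.56 / aperture_inches

for label, inches in [("4.5-inch scope", 4.5),
                      ("1-foot scope", 12.0),
                      ("2-mile baseline (~10,000 ft)", 10_000 * 12.0)]:
    print(f"{label}: {dawes_limit_arcsec(inches):.3g} arcsec")

# Roughly: 1.0 arcsec, 0.38 arcsec, and 3.8e-05 arcsec respectively.
```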
Another deceptive feature of Dawes' limit is that it describes a telescope's ability to separate two very bright point sources. Trying to resolve dark features on an object would be several times worse than the Dawes'-limit figure.
Also, if telescopes (optical or radio) are separated by huge distances (let's say 2 miles), it is not the same thing as having a 2-mile telescope. The pair has the resolving power, but not the sensitivity or light-gathering power, of a 2-mile telescope.
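To put a rough number on that light-gathering difference, here's a quick sketch comparing the collecting area of two 1-foot instruments with a hypothetical filled 2-mile mirror:

```python
import math

def area_sq_ft(diameter_ft):
    """Collecting area of a circular aperture, in square feet."""
    return math.pi * (diameter_ft / 2) ** 2

two_small = 2 * area_sq_ft(1.0)        # two 1-foot telescopes
one_huge = area_sq_ft(2 * 5280.0)      # one hypothetical 2-mile mirror

print(f"Two 1-ft scopes: {two_small:.2f} sq ft")
print(f"One 2-mile scope: {one_huge:.2e} sq ft")
print(f"Ratio: about {one_huge / two_small:.1e} to 1")
```

So the pair can match the big mirror's resolution (along one axis, anyway) while collecting tens of millions of times less light.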
I haven’t tried to answer this, because I don’t really know what the answer is. It’s much more complex than wolf_meister’s analysis (although what he says plays a part). If you simply make a glass lens really big, you’ll eventually hit a point at which the light won’t refract where you want it. At that point you go to Fresnel systems and catadioptric systems, which combine reflection and refraction and have their own problems. You get complex diffraction problems that I’m not familiar with.
If you do synthetic aperturing, with multiple separated sub-mirrors, then you get interesting diffraction effects, and you have to know precisely where the sub-mirrors are located; not knowing adds to the uncertainty. On top of which, I believe it may give you a much higher cutoff frequency, but without coverage between the sub-mirrors your effective MTF in the mid-range will be full of holes, or at least a lot worse than diffraction-limited. You might get really good resolution at one very high spatial frequency, but garbage elsewhere. And I suspect that at some point that reduces to a delta function effectively lost in the noise, and so doesn’t buy you anything. But I haven’t heard enough or calculated enough to put numbers on this.
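To make the "holes in the MTF" point a bit more concrete, here's a toy 1-D sketch. The only physics in it is that, for incoherent imaging, the MTF is proportional to the autocorrelation of the pupil; the aperture widths and separation are made-up numbers, purely for illustration:

```python
import numpy as np

n = 4096
x = np.arange(n)

def pupil(centers, width):
    """A 1-D pupil: unit transmission inside each sub-aperture, zero elsewhere."""
    p = np.zeros(n)
    for c in centers:
        p[np.abs(x - c) < width / 2] = 1.0
    return p

def mtf(p):
    """MTF ~ normalized autocorrelation of the pupil, computed via FFT."""
    ac = np.fft.ifft(np.abs(np.fft.fft(p)) ** 2).real
    return np.fft.fftshift(ac) / ac.max()

filled = pupil([n // 2], 800)                         # one filled 800-unit aperture
sparse = pupil([n // 2 - 400, n // 2 + 400], 100)     # two 100-unit sub-apertures, 800 apart

lags = np.abs(x - n // 2)
for name, p in [("filled aperture", filled), ("two sub-apertures", sparse)]:
    m = mtf(p)
    covered = np.mean(m[lags <= 800] > 0.01)          # fraction of the band with >1% response
    print(f"{name}: {covered:.0%} of spatial frequencies below cutoff have real coverage")
```

The filled aperture covers essentially the whole band; the two-element array covers only a low-frequency lump plus a narrow spike near its maximum baseline, with nothing in between.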
Thanks. I was familiar with the limits of optical telescopes, but there have to be other limits when you get into gigantic sizes, as CalMeacham points out. Because otherwise, we could theoretically build telescopes so large we could read license plates on planets orbiting other stars (assuming they have license plates…). I suspect that long before then we’d reach some other major limits. Maybe scattering by interstellar dust, or some fundamental limits.
Or maybe ET isn’t radioing us, because they’re sitting around at home eating whatever aliens eat for popcorn and watching us on AlienVision.
Incidentally, it may be worth noting that you wouldn’t be able to get on a spaceship and go to another star so you can watch the dinosaurs. Since you’d be moving slower than light, no matter how far you went, you’d be seeing light that left Earth after you did.
I thought I should offer a little more analysis.
Here’s a good explanation of optical interferometry and how it compares with radio interferometry: http://www.phys.unm.edu/~duric/phy423/l16/node1.html
The resolutions they calculate are about a factor of 2 better than what Dawes' limit gives.
The article is from 1996 but at least it gives you a rough idea of calculating the resolving power of an optical interferometer.
One thing I always wondered was: if we can use baselines thousands of miles long for radio interferometry, why couldn’t we do the same with two optical telescopes? Well, it seems that although the resolution of an optical interferometer is roughly 100,000 times “sharper” than a radio interferometer on the same baseline, the synchronization needed to observe the interference patterns between two optical telescopes is much more demanding.
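If you want to plug in numbers yourself, the rule of thumb from that page is just θ ≈ λ/B. A quick sketch (the 100 m baseline and 6 cm radio wavelength are example values I picked, not anything from the article):

```python
import math

ARCSEC_PER_RADIAN = 180 / math.pi * 3600   # about 206,265

def resolution_arcsec(wavelength_m, baseline_m):
    """Rough interferometer resolution, theta ~ lambda / B."""
    return wavelength_m / baseline_m * ARCSEC_PER_RADIAN

baseline = 100.0                                  # meters
optical = resolution_arcsec(500e-9, baseline)     # 500 nm visible light
radio = resolution_arcsec(0.06, baseline)         # 6 cm radio

print(f"Optical, 100 m baseline: {optical:.4f} arcsec")
print(f"Radio (6 cm), 100 m baseline: {radio:.0f} arcsec")
print(f"Optical is about {radio / optical:,.0f} times sharper at the same baseline")
```

That ratio of wavelengths is where the "roughly 100,000 times sharper" figure comes from.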
I seem to recall this sort of “looking into the past” as a plot point in one of Frederik Pohl’s “Heechee” books. Some event happens, and it results in a “wtf was that?”, so they FTL a couple of light-hours away and pull out the scope to look.
In the fine animated film The Flight of Dragons, someone says, “I can stretch out my hand and pluck the sun from the sky!”
Our hero replies “Wrong. Magic or no magic, you’re grabbing at the wrong place. That isn’t where the sun is. It’s where the sun was eight minutes ago. It takes that long for light from the sun to reach earth.”
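The eight-minute figure, for anyone who wants to check it, is just the Earth-Sun distance divided by the speed of light:

```python
AU_METERS = 1.496e11        # Earth-Sun distance, roughly one astronomical unit
C = 299_792_458             # speed of light, m/s

seconds = AU_METERS / C
print(f"About {seconds:.0f} seconds, i.e. {seconds / 60:.1f} minutes")   # ~499 s, ~8.3 min
```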
In Murray Leinster’s First Contact, an FTL ship is able to record a few billion years of the development of a nebula just by approaching in the right pattern.
Umm so, do I understand correctly that by the time the image from the screen reaches me, my CPU will already have gone through a couple of cycles since the image left the screen?
To help put this in perspective, consider this: the huge-baseline radio interferometers (the VLBA, and there might be others I don’t know of) can’t get enough network bandwidth for their data on an electronic network; the information from the different scopes is brought together on big tapes (or probably DVDs, now) via FedEx. It’s hard enough to synchronize two signals over a fiber-optic line or a wire, but when you’ve got a day’s worth of error bars just on when a packet will arrive, there’s no way you’re going to be able to synchronize visible-light signals. Fortunately, though, there’s also not as much need for interferometry in the optical band, since you need a much smaller instrument for the same resolution.
Frylock, you understand correctly, assuming you have a gigahertz processor. This actually has major implications for chip design, since the time it takes a signal just to propagate across the width of the chip is comparable to the clock period. I understand that it’s a major challenge to design a chip architecture which can handle that (doable, of course, but very hard).
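Frylock's version is easy to check with rough numbers; here's a sketch assuming a 60 cm viewing distance and a 3 GHz clock (both numbers are just assumptions):

```python
C = 299_792_458            # speed of light, m/s
screen_distance_m = 0.6    # assumed distance from screen to eye
clock_hz = 3e9             # assumed 3 GHz processor

light_time = screen_distance_m / C
cycles = light_time * clock_hz

print(f"Light from the screen takes {light_time * 1e9:.1f} ns to reach you,")
print(f"which is about {cycles:.0f} clock cycles at {clock_hz / 1e9:.0f} GHz.")
```

At 2 ns and 3 GHz that's about six clock cycles, so yes, the image really is a few cycles stale by the time it hits your retina.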
Of course, one needn’t be dealing with gigahertz processors to run into speed-of-light issues; a wide-area network can be affected as well, as in the case of the 500-mile e-mail.
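As I recall that story, the punch line comes straight from the same arithmetic: the misconfigured mail server was timing out after roughly 3 milliseconds, and in 3 ms light only covers about 560 miles:

```python
C_MILES_PER_SEC = 186_282      # speed of light, miles per second
timeout_s = 0.003              # ~3 ms, the rough timeout figure as I remember the story

print(f"In {timeout_s * 1000:.0f} ms, light travels about {C_MILES_PER_SEC * timeout_s:.0f} miles")
# -> about 559 miles, which is why mail wouldn't go much farther than 500
```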
It’s also worth noting that the substantially-slower-than-light-speed Voyager 2 probe, launched in 1977, passed Pluto’s orbit in 1989. If a man-made object launched using conventional chemical rockets can get there in only twelve years, photons bounced off of a dinosaur are going to be considerably more distant at this point.
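To put those two distances side by side (all figures rough, and taking "dinosaur" to mean the usual end-of-Cretaceous date of about 66 million years ago):

```python
AU_MILES = 9.3e7               # one astronomical unit, in miles
LY_MILES = 5.88e12             # one light-year, in miles

pluto_orbit_miles = 30 * AU_MILES          # roughly Pluto's orbital distance
dino_photon_miles = 66e6 * LY_MILES        # photons that left Earth ~66 million years ago

print(f"Pluto's orbit:        ~{pluto_orbit_miles:.1e} miles")
print(f"Dinosaur-era photons: ~{dino_photon_miles:.1e} miles")
print(f"The photons are about {dino_photon_miles / pluto_orbit_miles:.0e} times farther out")
```

So "considerably more distant" is an understatement by about eleven orders of magnitude.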