Okay, I didn’t read the thread, but let me be the first to say it was really strange seeing Q.E.D. replying to a question…took me a second of thinking “Wait? What? Didn’t he…? Oooooh, it’s a zombie”
This is not correct. At present, we derate electronic components to 50% of rated voltage and 75% of rated power. Under these conditions, transistors, diodes, and ICs should operate reliably for centuries.
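Just to put numbers on that rule (the part ratings below are made up purely for illustration), a trivial sketch:

[code]
# Illustrative sketch of the derating rule above: run parts at no more than
# 50% of rated voltage and 75% of rated power. The ratings here are hypothetical.
def derated_limits(rated_voltage_v, rated_power_w):
    return 0.5 * rated_voltage_v, 0.75 * rated_power_w

v_max, p_max = derated_limits(rated_voltage_v=100.0, rated_power_w=1.0)
print(f"Run a 100 V, 1 W part at no more than {v_max:.0f} V and {p_max:.2f} W")
[/code]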
The real problem is in the power system: plutonium-based thermoelectric generators are not good enough (yet) to last for 500-1000 years.
This is simply not true, especially for integrated circuits, and particularly in an unprotected cosmic radiation environment.
And never will be. RTGs use the heat from the decay of the fuel elements, which decay exponentially. The energy output declines at a 1/2[SUP]n[/SUP] rate, where n is the number of half-life intervals elapsed. For an RTG that would last centuries, you’d need an enormous amount of material in order to still be generating useful power at the end of life. It’s not a limit of technical maturity; it’s the basic physics of radioactive decay that limits the lifespan.
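To put rough numbers on it, here’s a quick sketch assuming a Pu-238 heat source (half-life about 87.7 years) and ignoring thermoelectric-converter degradation, which only makes things worse:

[code]
# Fraction of initial RTG thermal output remaining after t years,
# assuming Pu-238 fuel (half-life ~87.7 years); converter degradation ignored.
PU238_HALF_LIFE = 87.7  # years

def power_fraction(t_years):
    return 0.5 ** (t_years / PU238_HALF_LIFE)

for t in (50, 100, 500, 1000):
    print(f"{t:5d} years: {power_fraction(t):.2%} of initial thermal output")
# After 500 years only about 2% remains; after 1000 years, well under 0.1%.
[/code]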
I can’t find a list of stars by distance that goes out that far, but epsilon Eridani, epsilon Indi, tau Ceti, and Groombridge 1618 are all within 16 lightyears, are all G or K stars (and thus probably more likely to support life than an M star like Gliese 581), and are all single-star systems (epsilon Indi has a couple of brown-dwarf companions, but those shouldn’t present much problem). The number is sure to expand further if you extend the search out to 20 or 25 lightyears, and there are dozens of other red dwarfs in that range, too. And the number is even greater if you allow that life might form in a binary or trinary star system, or around a star hotter than G. It would be extremely premature to assume that there is no star which is both closer and more interesting than Gliese 581.
I’m just saying that before humanity devotes the incredible resources necessary to send out an interstellar probe, we first need to do our homework. We need to maximize the information we can gain on other star systems from here, so that when we do send a probe we’ll be really, really sure we’re choosing the right target.
Centuries? No. Decades? Yes, with shielding and redundancy. I’ve spent time with Air Force satellites designed to survive in nuclear-warfare environments. Perhaps not as harsh as interstellar space, but close enough for purposes of the discussion. The redundancy built into those systems is impressive, far better than even deep-space NASA probes.
That said, they aren’t fundamentally that complex. One of the challenges in recent years was trying to develop a military communications satellite using IP-based comms; hardening something like that against even the ordinary near-Earth background environment is tough. But again, for the OP, how complex are we talking about? Not all that complex a probe.
Out of all the challenges listed in the OP, hardened, durable, redundant electronics isn’t one of them, *as long as* there’s enough power to accelerate that much mass. We’d just have to accept that we’re building a probe that won’t be performing a million operations over a ten-year lifetime in orbit, but ten thousand operations after a hundred-year journey. That is achievable, even with current technology. It’s just really freakin’ expensive.
Trouble is, fifty years after we launch our mega-expensive probe, we’ll have computers that are even better, smaller, and more survivable, with an even faster ride to the stars. And fifty years after that, better still. The question is, why expend the resources now when you can wait a little while, expend far fewer resources later, and get more bang for the buck?
An aside: the ideal solution today, and in the near future, is investing those resources in better and bigger space-based telescopes. I’m willing to bet my vast fortune we’ll be able to take amazingly sharp pictures of planets around Alpha Centauri and other nearby systems long before we have the technology to easily visit them.
I’ve worked on reliability issues with a National Lab. They build their own ICs because standard processes won’t survive the kind of environments they will put them in.
The probe is going to have to be very complex, and very smart. When we send something to an outer planet, all we need are some basic instruments and a camera, and we can do most of the decision-making ahead of time. Our probe will arrive at a star system we know very little about, and might have to determine the existence of, and then explore, unknown planets and satellites. So we’re going to need a big processor built with antiquated but reliable technology, with lots of redundancy, all under strict weight limits.
The systems you mention must survive brief radiation events; these will have to survive decades or centuries. Even ignoring radiation problems, silicon is not static. Things are better if you power them off, of course, but something needs to be powered on, and that something will suffer all the reliability issues any powered circuit suffers. A quick search didn’t reveal any papers on silicon’s reliability over decades. There may be failure mechanisms we haven’t even seen yet. The stuff I work with, which is in the commercial sector, doesn’t make it anywhere near the end of its reliability bathtub curve before getting junked for obsolescence. Even long-lived systems like the B-52 and telephone switches get parts replaced.
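For anyone who hasn’t met the “bathtub curve”: it’s the standard reliability picture of a failure rate that starts high (infant mortality), flattens out during useful life, and climbs again at wear-out. A crude sketch of that shape, with made-up parameters rather than real silicon data:

[code]
# Crude bathtub-curve sketch: a decreasing (infant-mortality) Weibull hazard,
# a constant random-failure rate, and an increasing (wear-out) Weibull hazard.
# All parameters are invented for illustration, not real silicon data.
def hazard(t_years, beta_infant=0.5, eta_infant=2.0,
           beta_wear=4.0, eta_wear=40.0, lam_random=0.001):
    infant = (beta_infant / eta_infant) * (t_years / eta_infant) ** (beta_infant - 1)
    wearout = (beta_wear / eta_wear) * (t_years / eta_wear) ** (beta_wear - 1)
    return infant + lam_random + wearout

for t in (0.1, 1, 10, 50, 100):
    print(f"t = {t:5.1f} years: hazard ~ {hazard(t):.4f} failures/year")
# The rate falls over the first years, stays low, then climbs steeply late in life.
[/code]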
Even more than that, it is going to have to be smart enough to make the same kinds of decisions made by teams of experts in trajectory analysis, planetology, stellar astronomy, et cetera, all with complete autonomy, where any problem or threat that is not properly identified may result in premature termination of the mission. The Voyager probes, and even more recent missions like Cassini or the Mars Exploration Rovers, are as much a testament to the ingenuity of the mission teams and their science and engineering support, coming up with novel ways of making the probes operate longer, better, and with more flexibility than the design specifications called for, as to the hardware itself. You wouldn’t have this with an autonomous probe. That kind of capability is far beyond the current art of machine intelligence.
Sure, that’s no problem. Well, assuming that whatever system you use for course corrections (at minimum, you’d need a telescope, a simple computer, and some way of turning a rocket on and off) doesn’t break down somehow.
Even without any course corrections beyond our own solar system, you could get close enough to pass within the orbits of the star’s planets. But that’d only be good enough for a flyby.
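A back-of-envelope check on that, using round numbers (20 light-years to the star, a planetary-system radius of about 40 AU) purely for illustration:

[code]
# Back-of-envelope: how accurately must the departure trajectory be aimed to
# pass within a ~40 AU planetary system at ~20 light-years, with no course
# corrections? Distances are round illustrative values.
import math

AU_PER_LIGHT_YEAR = 63_241.1          # astronomical units in one light-year
distance_au = 20 * AU_PER_LIGHT_YEAR  # ~20 light-years to the target star
miss_radius_au = 40                   # roughly the radius of our own planetary system

angle_rad = miss_radius_au / distance_au
angle_arcsec = math.degrees(angle_rad) * 3600
print(f"Required aiming accuracy: ~{angle_arcsec:.1f} arcseconds")
# i.e. the velocity direction at departure has to be right to within a few arcseconds.
[/code]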
Now I ain’t no fancy-pants rocket scientist, but it seems the main problem is that the reliability of our equipment is insufficient. The obvious solution is to get there quicker, which would be impossible with chemical rocket engines.
So the key would be developing something that could provide sustained acceleration, resulting in a very high speed: a solar sail, perhaps, or the radioactive explosive fuels mentioned above, or something along those lines that isn’t very powerful but can be sustained.
I’ve never found this argument convincing: the argument against building an interstellar probe as soon as it’s possible to get there in a reasonable amount of time (a few centuries or less). Usually it’s framed around the possibility that better propulsion technologies will come along and we’ll end up building probes that fly right past the first one, but I think it applies to better computer systems as well.
Why don’t we just hedge our bets and send a probe as soon as it’s reasonably possible (which is probably a long way off, in any case)? If better technologies come along, great.
But what if they don’t? We’ll just be sitting around forever waiting for the “right time” to go.
I think most people don’t realize how great a distance 20 light years is.
The problem with sending a probe is that it will take 20 years for any signal to reach it, and another 20 years for its reply to reach us, assuming it ever reaches its destination and signals can be exchanged at all.
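Just to lay out the timeline arithmetic, with arbitrary example cruise speeds:

[code]
# Timeline arithmetic for a probe to a star ~20 light-years away: one-way
# cruise time at a few example speeds, plus the ~20-year one-way light delay
# before the first data arrives back. Cruise speeds are arbitrary examples.
DISTANCE_LY = 20

for speed_fraction_of_c in (0.01, 0.05, 0.10):
    cruise_years = DISTANCE_LY / speed_fraction_of_c
    first_data_back = cruise_years + DISTANCE_LY  # add one-way signal delay
    print(f"at {speed_fraction_of_c:.0%} of c: cruise ~{cruise_years:.0f} yr, "
          f"first data back after ~{first_data_back:.0f} yr")
[/code]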
Sending humans will take generations in ship time and it’s almost a suicide mission, but it feels good to fantasize.
It’s like floating a message in a bottle from California and hoping it reaches Hawaii. Even with supercomputer ocean-current calculations, the odds are very slim; too many variables.