My assumption is that, if you’ve got the technology to easily travel between star systems, then you also have the technology to easily produce all the resources you need from hydrogen/asteroids/etc. Also, I assume that, if you can easily travel between the stars, you don’t really need planets for your people. You can just build space stations.
What would there be to make war over, if you can just take all the extra asteroids/moons/hydrogen in your solar system and make space stations and food/fuel from them? Why go elsewhere to fight?
The AI for whatever reason have decided to exhaustively catalog all planets.
The AI has at some point in the past visited earth and noted it as habitable.
The AI has left a cloaked/hidden probe of some kind behind to observe and report.
The AI can only travel at half the speed of light.
Let’s say the probe beamed back a “civilized life” report once it detected the first radio waves coming off of our planet about 90 years ago. Let’s also say that the nearest AI mothership or whatever is hanging out a hundred light years away (a tiny tiny distance in galactic terms).
That means that they’ll get word of our existence 10 years from now, and, travelling at half the speed of light, they’ll be showing up 200 years after that.
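For anyone who wants to check that arithmetic, here’s a throwaway Python sketch using only the numbers assumed above (a 100-light-year distance, a signal that started about 90 years ago, ships limited to half the speed of light); the figures themselves are just the thought experiment’s assumptions.

[code]
# Back-of-the-envelope timeline for the scenario above.
distance_ly = 100        # light-years to the nearest AI mothership (assumed)
signal_age_years = 90    # our radio leakage started roughly 90 years ago
ship_speed_c = 0.5       # their ships travel at half the speed of light

years_until_they_hear_us = distance_ly - signal_age_years  # signal still in transit
travel_time_years = distance_ly / ship_speed_c             # time needed for the trip here

print(years_until_they_hear_us)  # 10  -> they learn about us a decade from now
print(travel_time_years)         # 200 -> and need two centuries to make the trip
[/code]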
Unless part of this thought experiment posits a strong AI presence literally squatting over every single habitable planet in the galaxy, just waiting for a moment to go “HEY GUYS WHAT IS THE HAPS?”, there’s no reason whatsoever to discount the possibility that they either haven’t received a signal yet, or that they have and are still travelling through space to reach us and inquire as to the status of our haps.
I think that we’re anthropomorphising potential A.I. here.
Why would we want to add emotions to intelligent machines in the first place?
If I wish to design and build a machine to work out my tax, build me a house and run my company for me, for example, why would I design in an ability to feel happy, sad, bored, angry, etc.?
And why do we assume that consciousness would automatically develop these qualities ?
Just because we did doesn’t mean that it’s a given that other intelligences, based on different chemistries and from completely different environments, do.
Even here on Earth there are many people who have no interest in events outside of their own community, let alone what’s out there in space, let alone cataloguing every planet and star in the universe.
So why do we assume that A.I.s would want to ?
(Personally I’m totally for that myself, but I’m not everyone)
It is in effect a strawman theory.
There may or may not be A.I.s, but the theory is flawed and has more to do with assumptions than the actual state of the universe.
Such an answer has indeed been proposed: basically, a kind of large-scale selection is at work; anything that attracts too much attention doesn’t do so for cosmologically significant times, due to being, well, selected out.
As for the OP, I too think that at least part of the problem is the assumption that extraterrestrial strong AI would be all that interested in us, or that if it were, we would even be able to notice/recognize them (relevant XKCD). If such a thing as the singularity, and post-singularity superintelligence, is indeed possible, then we should not expect to be able to predict/understand either their motivations or actions, so any sort of speculation is built on shaky grounds.
That said, now engaging in such speculations myself, I think that there may not be sufficient incentive for space-exploration, simply by virtue of the fact that, sufficient computing power assumed, they have a much larger realm to explore, and can do so effortlessly: the realm of all possible worlds, accessible through simulation. Put differently, I think that strong AI may well be ‘inward-directed’, rather than outward, towards space and us.
As some flimsy evidence, consider the way technology has evolved in the past 40 years. Somebody claimed that there hadn’t been any advances as significant as cars or planes; I disagree. However, those advances have not been on the surface, but rather in the way people communicate, manipulate information, and arguably even think. Physical locality is becoming less and less relevant; most members of the younger generation essentially carry their friends with themselves at all times (and startle me on the train when they suddenly strike up a conversation with someone who isn’t there).
You get stuck in a world with 2002-level tech forever.
You live the rest of your life in 2012 forward as it will actually be, with one wrinkle: no indoor plumbing. No running water, no flush toilets. Wells and outhouses instead.
Which world would you choose to live in?
Me, I’m going with option 1. Indoor plumbing is pretty huge. I’ve got another 30-40 years coming to me, good Lord willing and the creek don’t rise, but I would bet without hesitation that no toy is coming along during that time that would balance out 3-4 decades of carrying water and crapping in the outhouse.
Nice thread, nice adaptation of the Fermi Paradox.
Sam Stone’s point. I find it counter-intuitive that D<1, but I can’t rule it out.
Space is big, aliens can’t visit, and Singularity isn’t awesome at all. Handled in the OP.
Space is big, aliens are monitoring us, they use probes attached to comets, why the heck would they bother dropping by and shaking hands? A: To read our books. That may or may not be sufficient incentive.
Space is big, Singularity is awesome, it’s so awesome that the aliens live in the Matrix/soma-land and have lost interest in the outside world.
Space is big, Singularity is awesome, it’s so awesome that they can monitor as much as they want without sending oh-so-primitive “Mass” in our direction. They use dimensions 6 and 11, accessible with a reactor made of an Adamantium/Unobtainium alloy and gobbledygook. This sounds silly, but if they could read our books without visiting us, and doing so is cheap, then they might not bother parachuting in.
They are waiting for us to develop further; they have found that there’s a more fruitful exchange of knowledge when [del]the meat is tenderized[/del] civilization has developed further. ETA: Evil Captor’s idea. ETA2: I was riffing off of V Vinge.
I dunno, 1945-1970 was an era of peak economic growth among advanced countries and technological advance. My take is that tech advance re-accelerated around 1990.
Kevin Drum: Why artificial intelligence is closer than we realize. Suppose that in 1950 the fastest computer on the planet had about a trillionth the computing power of a human brain, and suppose also that computing power increases 1000x every 20 years. Here’s what things would look like:
In 1950, true AI would look like a joke. A computer with a trillionth the processing power of the human brain is just a pile of vacuum tubes. In 1970, even though computers are 1000x faster, it’s still a joke. In 1990 it’s still a joke. In 2010 it’s still a joke. In 2024, it’s still a joke. A tenth of a human brain is about the processing power of a housecat. It’s interesting, but no threat to actual humans.
So: joke, joke, joke, joke, joke. Then, suddenly, in the space of six years, we have computers with the processing power of a human brain. Kaboom.
Here’s the point: technological progress has been exactly the same for the entire 80-year period. But in the early years, although the relative progress was high, the absolute progress was minute. More at the link.
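Just to make the quoted extrapolation concrete, here’s a quick Python sketch of the same made-up numbers (a trillionth of a human brain in 1950, computing power multiplied by 1000 every 20 years); it reproduces the joke/joke/joke/kaboom curve Drum describes.

[code]
# Reconstruct the hypothetical growth curve from the quote above.
START_YEAR = 1950
START_FRACTION = 1e-12          # a trillionth of one human brain
GROWTH_PER_20_YEARS = 1000      # 1000x more computing power every 20 years

def brain_fraction(year):
    """Fraction of one human brain's processing power available in a given year."""
    return START_FRACTION * GROWTH_PER_20_YEARS ** ((year - START_YEAR) / 20)

for year in (1950, 1970, 1990, 2010, 2024, 2030):
    print(year, f"{brain_fraction(year):.3g}")
# 1950 1e-12, 1970 1e-09, 1990 1e-06, 2010 0.001,
# 2024 ~0.13 (roughly a tenth of a brain), 2030 ~1 (kaboom)
[/code]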
You’re assuming that it’s all about the hardware, that if you just put enough connections together, you’ll get an intelligence. What if it’s all about the software, and we don’t know how to write it? We’re nowhere near even understanding how to go about starting to write software that would result in a strong AI. And software technology does not advance like hardware. The only reason for Moore’s law is that silicon scales very well by going smaller. Software is not like that at all.
All we might get for our billion times more powerful computer is really fast real-time graphics rendering and all kinds of speed for running sophisticated simulations, and be no closer to creating true artificial intelligence than we are today.
Also, it’s a mistake to think of neurons as transistors or the brain as a simple computing machine. There’s a hell of a lot going on in the brain. There are many connections to each neuron. Analog signals are part of the mix, as are chemical signals from the body. We are just beginning to understand how some of it works.
We’ve never created an intelligent computer. Until we do, we don’t know if it’s even possible. That’s also an answer to the Fermi paradox.
With sufficiently powerful computers, we could just model a physical meat-and-chemicals human brain - and grow an AI inside it (in the same way that natural intelligence grows in a natural human brain).
And once we got there, it would afford the opportunity to experiment upon exactly which bits need to be subtle and wobbly, and whether any parts can be simplified, changed or eliminated, whilst still achieving the same sort of end result.
Unless there’s something about an intelligence in a meat brain that can’t be described.
Tweak the story a bit and change the criteria from “Speed” to “Computer Game Quality”. That may sound silly, but that’s where some of the biggest advances in AI have come from after all. Has computer game quality grown exponentially? Pong was created 40 years ago. How much better was 1992’s Mortal Kombat? Was 2011-2012’s Kinect another proportionate advance?
I’m not a gamer so I don’t have a good fix on this. It’s pretty hard to compare anyway - pineapples and submarines. Also, Sam’s implication is that we won’t know whether we might hit some very difficult research-resistant challenges at some point. Software development is pretty medieval in many ways after all. It’s not clear whether we have exponential progress or even what that means.
On Mangetout’s point - that’s way beyond my knowledge base.
There are actually very good theoretical proposals for universal artificial intelligence, centered around the idea that a universal intelligence mainly has to be a universal problem solver/predictor (of its environment). Thus, given any problem, the task is to find an optimal strategy to solve this problem, given certain constraints formalized by utility functions. This can be formulated in a mathematically exact way, and solutions to this optimized-problem-solving problem can be found: one is Marcus Hutter’s AIXI agent, which uses Solomonoff induction (basically, a formal theory of observation-based prediction); another is Jürgen Schmidhuber’s proposal of ‘Gödel machines’, capable of dynamically rewriting their own code upon finding a proof that the rewrite is optimal. And we already have computers capable of discovering natural laws by themselves…
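To give a flavour of that idea (and only a flavour; this is nothing like AIXI itself), here’s a toy Python sketch of length-weighted prediction, the intuition behind Solomonoff induction: every hypothesis still consistent with the data votes on the next bit, weighted by 2^(-description length), so shorter descriptions count for more. The hypotheses and their “description lengths” below are invented purely for illustration.

[code]
# Toy length-weighted prediction over a handful of hand-picked hypotheses.
# Real Solomonoff induction sums over *all* computable hypotheses, which is
# incomputable; this only illustrates the weighting scheme.

def constant_zero(n):    return [0] * n
def constant_one(n):     return [1] * n
def alternating(n):      return [i % 2 for i in range(n)]
def every_third_one(n):  return [1 if i % 3 == 0 else 0 for i in range(n)]

# (generator, rough "description length" in bits) -- the lengths are made up
HYPOTHESES = [
    (constant_zero, 1),
    (constant_one, 1),
    (alternating, 2),
    (every_third_one, 3),
]

def predict_next(observed):
    """Weighted probability that the next bit is 1, given the bits seen so far."""
    n = len(observed)
    total_weight = weight_for_one = 0.0
    for generate, length in HYPOTHESES:
        if generate(n) != observed:      # drop hypotheses the data has refuted
            continue
        w = 2.0 ** -length               # shorter description -> larger weight
        total_weight += w
        weight_for_one += w * generate(n + 1)[n]
    return weight_for_one / total_weight if total_weight else 0.5

print(predict_next([0, 1, 0, 1, 0]))     # alternating data -> predicts 1 (prints 1.0)
[/code]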
I’m a gamer, and frankly I am completely underwhelmed by the “improvement” in AIs over the years, though a lot of that is likely because devs put a lower priority on crafting and improving it.
If there is one, there’s likely (by the same argument) to be more than one.
If there are multiple AIs out there, that’s bad news, because then there’d be competition among them, in which case the nasty ones would survive. That is, the ones that survive would be the ones that put survival as their top priority.
Another good sci-fi story line!
Anyway, here’s another criticism. The super intelligent AI knows that its likelihood of surviving long enough to reap the rewards of a galaxy-wide search are too small to justify the expense of the endeavor.
After all, super-intelligence would realize that intelligence is inherently unstable. It’s too unpredictable. The smarter it is, the less predictable.
(Admittedly, my argument here is flawed; it assumes a “one container” limitation. Instability isn’t a problem when the container is essentially limitless, as long as the intelligence can spread out.)
Well, perhaps all the von Neumann machines are busy duking it out at the center of the galaxy and don’t have the spare capacity to come looking for us. After all, the war between the Decepticons and the Autobots didn’t reach Earth until 2007, and that war had been going on for a LOOOONG time before Megatron came to Earth looking for the AllSpark.
Forgive me if this is covered, but what if we’ve been already incorporated into a “singularity” and we just don’t know it? What proof is there against this idea?
No, it just means time travel isn’t easy and/or that all those people from the future don’t give a rat’s ass about the late 20th/early 21st century. Or they are here already and we can’t see them. Or the future hasn’t happened yet for them to travel into the past.
All of those are variations on why the aliens aren’t buzzing our suburbs, either.
Well, it is a messy, chemical, analog process with lots of variables to be tweaked. Simulating that digitally may be a very formidable step, particularly if we have to design the simulation rather than grow it.
We might have computers capable of being brains in the not-too-distant future. The software/simulation to do it, IMO, could be very much further down the road at the very least.