"Do Androids Dream of Electric Sheep?"

“Do Androids Dream of Electric Sheep?” wow! It’s not only a great novel…(by Philip K. Dick) but it’s the pseudo-prequel to Blade Runner and amazingly prophetic. (Maybe!)

I had an excruciatingly long flight last week and a friend gave me this book for the flight…it really spurs a whole new thought process on what the future may hold. Let’s suppose:

If we as a race advance enough, evolve enough, create enough…could we make AI units for androids that make them practically indistinguishable from humans? It’s an interesting thought, but most likely quite far from our vantage point right now.

However, if you think beyond that, beyond androids, there will most likely be entire automated towns or ‘installations’. This may all be in our future. Why would anyone try to portray a future where computers take over everything, such as in the Terminator movies? What sophisticated program would ever ‘think’ a completely automated, human-extinct planet would be of any benefit to it? It is not like you can program a computer to want to have fun, or even understand what fun is. And a planet devoid of humans would be of no benefit…

How could computers evolve to their optimum potential? Or better yet, what is a computer’s optimum evolutionary ending point?

Even for those of you who have never read the book…what do you think may be in our future?

Alrighty then.

I’m just after reading an old story called “The Jameson Satellite”, by a guy called Neil R. Jones (found it in an anthology of creaky old sci-fi stories called Before the Golden Age, edited by Asimov). It was written about 1931 and features any number of glaring scientific inaccuracies, but basically it’s about this guy who arranges to have his dead body launched into orbit in a satellite, so he’ll be preserved for as long as possible. He orbits happily for forty million years, and then is found by the “machine men of Zor”, who were once organic lifeforms but long since transplanted their brains into robots, and promptly do the same for him. Earth is dying; humanity is gone.

What really frightened me about the story was a small section just before the end, in which our hero, already a brain inside a machine, is exploring earth with his new mates and falls accidentally down A Big Hole. His metal body gets smashed up, but his brain survives… and although all communication is by telepathy now that his lips are gone, he can’t seem to communicate with his buddies. There hangs before him the prospect of an immortal, dark silence, in which he’d be left forever, undying, unalive.

Spooky, huh? If we do this thing we’d best get it right. Immortality sounds a bit boring down A Hole.

Take a functioning machine intelligence, who is able to learn, reproduce, repair itself, and obtain the resources necessary for those activities without human assistance–why would that be of no benefit?

For an intriguing, sometimes silly, take on “optimum ending points”, I enjoyed Frank Tipler’s “Physics of Immortality”. If the universe ends in a Big Crunch (which isn’t the most likely outcome, from what I gather of layman’s cosmology these days), every piece of matter and all usable energy in existence becomes a continuously evolving, increasingly unified system, culminating in a self-aware, effectively omnipotent Omega Point that is timeless, infinite, and will resurrect the dead–all the dead. In less secular terms–God. It features cheerfully wacky theology, a whole lot of arguing for a conclusion by taking that conclusion as a given (dressed up as “boundary conditions”), and a demonstration that nested exponents can be used to write really, really big numbers.

Well actually, a functioning machine intelligence that is able to learn, reproduce, etc., etc.…would evolve into what? Where would this machine intelligence ‘hope’ to get? They will always be synthetically manufactured, or robofactured, machines. They would most likely reach the conclusion that they can only evolve so much. Depression would probably be their first synthesized emotion. Who really knows? It is interesting to think about, though.

As for Frank Tipler’s “Physics of Immortality”, I agree that was a great book. Full of wacky theology. And an interesting twist on the Big Crunch…

Fritjof Capra’s “The Tao of Physics” was also a pretty brilliant work. Especially if you like theorizing about modern religions and post modern ideas…

The movie Bladerunner is (loosely) based on that book. It is not a prequel.

An interesting thing in the book was the fact that after he ‘retired’ an android, a ‘bone marrow’ test was done to make sure it was an android.

I really don’t think these types of procreating robots are in our future.

The question is what rights would an android have? Do we have the right to just destroy them whenever we want? When would they stop being our slaves and have rights of their own?

Well, I read a short story by Mike Resnick a few weeks ago called “Death Is an Acquired Trait.” Nothing about AIs, but it does deal with an alien race that finds a way to turn themselves into pure energy and live forever. The only problem is that after the first couple billion years, it gets insanely boring. They’ve tried everything they can think of and there’s no way to kill themselves or each other.
The only thing they can do is go around to other alien cultures and sabotage any work being done on those planets that would lead to the creation of immortality.

So, if AIs were alive, if they were immortal and had true emotions, would they become bored? Would they become suicidal? If we’re talking about true intelligence, we’re not always talking about an insatiable curiosity to accompany it. We’re not even guaranteed motivation to go along with intelligence. Programming those in just highlights the Artificial in Artificial Intelligence and decreases the likelihood that those emotions are “real.”

Oh, and Ross, nice use of capitalization:

One can only hope the AI lost his olfactory sense during the fall.

This is one of the biggest misconceptions people have with regards to evolution. There is no goal in evolution. The first single-cell organisms on earth didn’t hope to someday be dinosaurs or humans.

I think that if we were ever able to build a near-perfect simulacrum, they would probably be hard-coded with something along the lines of the 3 (or later 4) laws of robotics as set out by Asimov.

Also, if you find that you like PKD, you should read his short story The Electric Ant, which is about an android dealing with the Dickian issues of the perception of reality.

You rude Ender. That certainly wasn’t the sort of hole I meant; and I’m quite sure it wasn’t the sort of hole Mr Jones meant, either. He’d have been thinking of a gracefully designed gallery filled with images of The Queen, whoever she might happen to be 40 million years hence.

Recently came across a mind-blowing old BBC radio show that will resonate with Dickheads (Contact with an intelligence from the Sirius system that eventually manifests as an incarnated Sophia that sets about initiating mystical consciousness in folks.)
I suspect that the author was partly inspired by PKD because when they first talk to her, they open communications by saying “Computer:Access Code Philip.”

(The plot, for people not obsessed with PKD’s work, contains similar Gnostic-inspired ideas as Dick put forward in VALIS.)

For the enjoyment of my fellow Dopers, I have made an MP3 of this program available on my webserver at
http://www.mudd.dhs.org/Project_Genesis.mp3

Beware: the 85-minute show weighs in at just under 40MB, so modem-people are probably out of luck.

I am a huge fan of PKD, and I’m one of those nutjobs that thinks he wasn’t any crazier than the next guy-- Just more perceptive.

PKD is incredible. I saw the general thesis of this book to be more about people than androids: what makes us human, or what makes us special? Dick posits that this is the ability to empathise. If you read a lot of his stuff, you will likely notice this as a recurring theme.

Androids seemed more like a plot device to raise the issue in this story.

Cool thread. I’ve got a few comments after having read through:

If Moore’s law holds true, within a relatively short period of time computers will have more computational power than the human brain. If you are able to make a computer that can think on its own, better and faster than a human, there is no telling what it may do. Its actions might not even be comprehensible to us. Transhumanists call this the Singularity. Check out a definition and cool discussion here: http://www.kurzweilai.net/meme/frame.html?m=1

adam yax said:

I think this is not a definite. Is it impossible that life was seeded here by some other species? They could have designed the earliest organisms with the intention of them evolving into…whatever we end up evolving into. If this were true, it could be said that “biological evolution” has a goal. We are just unable to grasp it.

Just so we’re clear, I only say this could have happened. I do not have any evidence that it did, nor that it did not. I do not believe one way or the other. I do not know. But I do not consider it impossible. I hope someone figures out what it takes to make biological life soon. No one has been able to mix up a bunch of non-living materials and poof, get a living creature.

A rougher question is, if we make machines better in every way than humans and they take over, can we expect to be treated any better than we treat the lower species? Cages, experiments, food (Matrix-style)? Hopefully it turns out to be a partnership instead of one dominating the other.

I doubt that biology will totally rule over machines forever, and I hope machines don’t ever rule over biology. More likely they will merge. Hardware and wetware lines will begin to blur and we will join into something better than either one is alone. I hope.

DaLovin’Dj

Hmmmm, so this is how the Borg got started.

First, I don’t know Moore’s Law, does someone want to explain it to me?

Secondly, computers already have the ability to out-compute us, but it will be a long, long time before they’re able to out-think us.

I mean, you take a problem like 123875321*327517615 and maybe one person in a billion could compute this faster than a computer could.
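The arithmetic gap is easy to demonstrate; a quick sketch of the problem above (the numbers are just the ones from the post):

```python
# The multiplication from the post: hard mental arithmetic for a human,
# a single instant for any computer with arbitrary-precision integers.
product = 123875321 * 327517615
print(product)  # a 17-digit result, computed effectively instantly
```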

But think about all the equations we do solve for every single second without ever thinking about it:
- Braking a car with enough room to not hit the car in front of us or go into the intersection
- Catching a ball as it’s tossed to you
- Walking
- Typing
- Dressing ourselves

All of these require complex calculations involving calculus, physics, spatial reasoning and simple geometry. We don’t give it a second thought, though, while we’re doing the activity.
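To make one of those implicit calculations explicit, here is a minimal sketch of the braking example, assuming constant deceleration (both numbers below are illustrative assumptions, not from the thread):

```python
# Stopping distance from speed v under constant deceleration a: v**2 / (2*a).
# Our brains solve something like this continuously while driving.

def stopping_distance(speed_mps: float, decel_mps2: float = 7.0) -> float:
    """Distance (in metres) needed to brake to a stop from speed_mps,
    assuming constant deceleration of decel_mps2 (firm braking on dry road)."""
    return speed_mps ** 2 / (2 * decel_mps2)

# At city speed (~14 m/s, about 50 km/h):
print(round(stopping_distance(14.0), 1))  # => 14.0 metres
```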

There are a number of things that we can do almost by rote that would confuse the hell out of an AI. This isn’t to say that it wouldn’t be possible to progress the technology to the point where AIs were smarter than humans in every way, but right now I see this as being a long way off.

Enderw24:

Moore’s law is kind of a misnomer. It’s not really a law so much as an observation. Here is the definition: http://webopedia.internet.com/TERM/M/Moores_Law.html

Absolutely. That is why current computers don’t fit the bill. It has been debated what the true processing power of the brain is, but there are estimates. If Moore’s law (which is exponential) holds, then regardless of what that number is, computers will eventually surpass it. Then we just need to write the software. AI software is an area of heavy research these days.
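The exponential argument can be put in rough numbers. A toy sketch, where every figure is an assumption for illustration: a starting machine at 10^9 ops/s, doubling every 18 months (one common reading of Moore’s law), and a brain estimate of 10^16 ops/s (published estimates vary by orders of magnitude):

```python
import math

# If capacity doubles every fixed interval, it crosses ANY fixed target
# in a number of doublings equal to log2(target / start).

def years_to_surpass(start_ops: float, target_ops: float,
                     doubling_years: float = 1.5) -> float:
    """Years until start_ops, doubling every doubling_years, exceeds target_ops."""
    doublings = math.log2(target_ops / start_ops)
    return doublings * doubling_years

print(round(years_to_surpass(1e9, 1e16), 1))  # roughly 35 years under these assumptions
```

The point is not the specific answer (the inputs are guesses) but that the crossover time grows only logarithmically with the target, so even a brain estimate off by a factor of 1000 shifts the date by only about 15 years.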

DaLovin’Dj

What you are describing is known as “intelligent design”. In many cases the torch for intelligent design is carried by the creationist crowd. I see it as a flimsy attempt to pass off creationism as a science. Of course, that is IMHO, but I don’t think you’ll find too many mainstream scientists who subscribe to intelligent design theory.

I also need to ask, how do you see what you describe working? If there is a ‘goal’ to evolution, why is it taking us so long to reach it? Why aren’t there major evolutionary changes every generation or two?

Larry, I’ll probably download that when I get home; it sounds cool.

Well, I’ve got to go, I’m being flashed by a pink light.

adam yax wrote:

I think it’s worth mentioning that what Asimov seems to be up to in I, Robot is providing elaborate counter-examples to what are supposed to be straightforward, simple and comprehensive ethical rules. To actually turn around and use those rules would be missing the point.

adam yax:

I am not a creationist. I accept I don’t know. But I think (hope) the answer will eventually be known. Just because it is unpopular as a theory does not make it true or untrue. There are many different theories, and a lot of work needs to be done to figure out which are correct. That said, ponder the following scenario:

Genetic science advances to the point where scientists are able to write fluently in the genetic language. They are able to create entirely new living creatures with entirely new traits. They build a bacterium that, when exposed to pressures, engages in biological evolution. Unfortunately, the speed of light was never able to be broken. To go colonize the universe would take so long and cost so much that it just does not make any sense, and we do not do it. We aim all of our advanced telescopes and detectors all over the universe and find it to be a cold, dead place. We are the only life. We want to spread life. The cheapest way turns out to be to make trillions of trillions of these bacteria and shoot them out in every direction (or towards specific star systems).

Is it so hard to conceive that we could decide to do it? If genetics works, it can be understood. If it is understood, it can be used. I don’t doubt that we will soon be able to build new living things that do not exist naturally. Now, if we can do it, why not someone else?

Either way, whether intelligently guided or not, panspermia is a good bet (especially given the recent space bacteria finding). But these are just THEORIES. Theories that can be tested. To just throw it out and call it bad science is biased. Until you can give me an experiment that shows how life came to exist here, I will withhold judgment on how it ever happened. Panspermia seems just as likely as random evolution from inanimate materials. It is truly one of the great mysteries.

DaLovin’Dj

I think that it’s possible to intuit that there is purpose and design behind evolution without lumping in any of the patently ludicrous ideas of the Creationist crowd, (eg: anthropomorphic deity, literal interpretation of bible stories, etc.) As for mainstream scientists, I’m reminded of Uncle Albert’s statement “Science without religion is lame.” (Sounds kind of funny with the current idiomatic usage of “lame.”) The creationist folks should listen to the second half of the epigram, “Religion without science is blind.”

It’s cool beyond words. I had up to three people downloading simultaneously last night. If anyone has trouble getting it off my web server, let me know, and I can post it to alt.binaries.sounds.radio.misc

***FNORD! ***

dalovindj says:

Huh? What are you trying to say here?
Are you postulating that we cannot make something from nothing? Or are you saying that we cannot take synthetic materials and get a living being?

Either way, the basic premise of my post was to ask what an android could ever hope to evolve into. It seems to me that a “non-living” entity could not hope for much. I mean, computers could never be programmed to ‘feel’ emotions like a human. It goes along the same premise as old-school AI research, where you could never program a computer to build a bird’s nest…because the sticks on the floor of the forest were never intended to be used as a nest. Nor were bits of yarn, paper, rope, string, etc., etc. You get the picture.

So where does this leave us? What would/could an android ever hope to evolve into? That’s the quintessential question…

Umm, dunno, offhand. Maybe you could jump-start my speculation generator by telling us what you think humans are hoping to evolve into.

That’s also the weakness I see in dalovindj’s panspermia example. I don’t think the “seeding” species would be likely to program an evolutionary end point into the bacteria they send out. It seems to me that even if it could be accomplished, it would run counter to the intention of sending the bacteria in the first place: (presumably) to have the bacteria enter as many environments friendly to evolutionary development as possible, and let them develop as far as the environments and their abilities to adapt (and to modify the environment) will allow.