But that’s rather the point: technology will eventually reach the stage where it can change us, and if we choose not to change, it will go beyond us and leave us behind. And saying “no” isn’t much of an option, since any culture that refuses to advance will be overwhelmed by those that do.
But then the example of the Microsoft antitrust suit wasn’t a very good one, because humanity hasn’t changed much since the 15th century.
Correct, it wasn’t. About the closest historical parallel to a Singularity (although much slower) would be the transition from pre-human hominids to humans. A species that hadn’t invented language couldn’t understand most of our civilization or our thoughts.
Personally, I think at some point we’re going to reach the bottleneck of human willingness to change. Technology might be capable of further advances but people just won’t be interested.
Way back, technology was virtually stagnant. You dealt with the same technology that your grandfather dealt with.
Then there was generational technological progress. Your grandfather wouldn’t recognize your technology but you grew up on it and were used to it. Change was happening but no individual was directly experiencing it.
Now we have technological progress within a single lifetime. People learn technology with the knowledge that what they’re learning will probably become obsolete in their own lifetime and they’ll have to learn its replacement technology. People now, for the first time, have individual experience of technological change.
So it becomes an issue of how long the cycles are. People accept the idea that the technology they’re using in 2010 will be obsolete in 2020. But they’re going to be resistant to the idea that the technology of 2010 will be obsolete by 2011. And they’re going to outright refuse to accept any claim that the technology of January will be obsolete by February. They’re just going to say “We don’t care if the new stuff is ten times better than the stuff we just learned how to use last month. We’ve just learned how to use all of that stuff and we don’t want to have to go through that whole learning process again so soon. Especially when it’s going to happen again the month after that.”
The Soviets also wanted a new kind of man: Homo Sovieticus. As a theme it would be interesting only in a strictly speculative way. What’s to say other than that, while I would regret the demise of humanity, the new species taking its place, if they are not human, will be about as interesting to speculate about as the housing complexes of extraterrestrial aliens or the mating habits of crabs. Not very interesting at all. I can’t remember ever having read a sci-fi book without humans. Can’t say it would interest me very much either. I’m a horrible racist. I care only for the human race.
All the computing power in all the universes ain’t worth diddly if you can’t do much of anything with it.
Without real-world applications such as nanotech, effective genetic engineering, and the like, all you’d have would be a building you can point out to your children as you drive past and say “Look, there’s the SINGULARITY!” No doubt that would cause the kids to get all reverent and well behaved, for about 20 seconds.
There appear to be two ways of looking at history. In the West, we tend to look at history as one of incremental progress - in short, things get better & better all the time; though there may be local dips in civilization (like the “dark ages”), overall technology and thought are preserved and grow. Other traditions tend to look at history as more of a cycle - civilizations (or dynasties) start vigorously enough, reach a plateau of culture and power, and then decline into decadence and are overrun by barbarians or rot from within.
These days, we tend to somewhat schizophrenically believe both simultaneously - that a technological singularity is possible, while at the same time worrying about the collapse and decadence of world civilization, through environmental factors particularly.
I think the real game changer is going to be the brain-computer interface. Or more specifically, cybernetic implants. I fully believe this will be the next era in our evolution.
Once cybernetic implants become a reality (memory enhancements, hawk-like vision, or perhaps a heads-up display right on the cornea, etc.), the people who do get them will be able to outperform others to such a degree that jobs will go to the people with the implants, thus generating societal pressure for others to get the implants to compete.
Thus we will start on the path from being fully biological (in some cases already only mainly biological) to fully machine, with a human consciousness controlling it.
I once heard somebody describe history as being like a coil. We are going around in circles, but every time we get back to the same point of the circle we’re actually higher than we were the last time we were at that point.
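If it helps, here’s a toy sketch of that coil idea in Python (just an illustration; the period and growth rate are made-up parameters, not anyone’s actual model of history):

import math

# Toy "coil" model: we keep cycling through the same phases of history,
# but each pass leaves us a little higher than the last.
def coil_position(t, period=1.0, growth=1.0):
    angle = 2 * math.pi * (t % period) / period  # same point on the circle
    height = growth * t                          # but ever higher
    return angle, height

# Same phase of the cycle, one full turn apart:
a1, h1 = coil_position(0.25)
a2, h2 = coil_position(1.25)
print(math.isclose(a1, a2), h1, h2)  # True 0.25 1.25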
Seems reasonable enough. The notion of a ‘singularity’ seems to be that, at some point, the high point of the coil is so high that the process of cycling - stops. No more dark ages, no more barbarians …
This I’m not sure I believe is likely to happen, ever. Though naturally I’d be delighted to be proven wrong.
Only from people working in the areas that succeeded. I have an example. In 1978 or so my Ph.D. adviser organized a conference on computing in the 1980s, which also became a special issue of IEEE Computer. He recruited a bunch of big names for it. I looked through the issue a while ago for a column, and it was very interesting. Remember, personal hobbyist computers were just beginning, and the IBM PC was still in the future. Portia Isaacson and Adam Osborne (who was still a publisher) did a good job on mobile computing. A lot of the participants were from IBM, and they predicted mainframes as far as the eye could see. No one predicted anything like the Web, though ARPANET was up and running, and PLATO had implemented many of the social applications (like text messaging and MUDs) which we have today.
Someone will get the future right, but we have no way of knowing who until we get there.
Society, by definition, consists of interactions which its members do understand. I have yet to see a description of what this new person will look like. I can buy instant access to more memory; I can even buy a co-processor in the brain to let us do math. I don’t buy that this will fundamentally change our thought processes.
Now, technology growing beyond our understanding I accept, and in fact I think it has already happened. I know computers from the ground up, and I wouldn’t claim to have a full understanding of what is going on in my laptop. Someone who understands the OS has no understanding of the microarchitecture of the processor and ASICs on the motherboard. We model technology in ways that make sense to us, but we don’t really understand it - not individually.
I don’t know what’ll happen anymore. I used to be on the singularity bus, expecting things to happen around 2040, but I really don’t know what I think anymore.
The neuroscience for general intelligence has recently been discovered. So the concept of creating software that can mimic creativity, problem solving, etc. that humans have is closer now.
Of course having an understanding of general intelligence doesn’t mean you know how to mimic it, or that you know how to improve it. But we will sooner or later.
On a long enough timeline, I think it is impossible not to have an event like a singularity where cognitive abilities bootstrap. It’s like human mobility. We are limited to our biological mobility of walking at about 3 mph and running at about 8 mph for very short distances. But now we have cars, boats, and planes that can travel in terrain we can’t, at speeds and with stamina we can’t muster. So the same thing should happen with our cognitive abilities some day, where even the best we can do unaided becomes pathetic compared to what aided cognition can do, the same way technologically aided mobility blows even the best human mobility out of the water. But I have no idea what the timeline for that is.
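To put rough numbers on that analogy (a back-of-envelope sketch; the jet speed is approximate and the cognitive “units” are entirely made up):

walking_mph = 3      # unaided human, from the analogy above
jet_mph = 600        # roughly a commercial airliner

mobility_multiplier = jet_mph / walking_mph
print(f"Aided mobility is ~{mobility_multiplier:.0f}x unaided walking.")

# Applying the same (purely speculative) multiplier to cognition:
unaided_units = 10   # made-up measure of unaided cognitive work per day
print(f"Equally aided cognition: ~{unaided_units * mobility_multiplier:.0f} units per day.")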
As a software engineer I believe that something like the singularity will happen. On the other hand, in ~1900 physicists thought we were near to understanding everything and in the 1920’s mathematicians were attempting to put math on an unassailable foundation. Then Quantum Mechanics and the Incompleteness Theorem happened. Part of me feels like we’ll run into a similar barrier with information and technology.
I was just thinking the other day that, with a few refinements, stuff that we know how to do today is getting very close to some form of singularity.
Have you seen the army’s exoskeleton? What about the brain/computer interface that lets you control a computer with thoughts? And everybody has seen the Segway. Put those technologies together, again with a little refinement to the respective systems, and you have quadriplegics rollerblading (or just going shopping). I think we could probably do that with the same batteries that power a Segway. Around the house, some form of wireless electricity could probably make it so that it could be used for hours at a time.
Did you like walkie-talkies when you were a kid? One person upstairs in the bedroom and one in the backyard, and as long as you kept line of sight you could talk to each other at a distance. Well, with a few tweaks you put a telepathy device, a Bluetooth audio implant, and a cell phone together, and two people could “think” at each other from opposite sides of the world. With today’s technology.
What I see we are lacking right now is an effective input device to the brain. Vision and hearing are great, but they rely on external hardware to first create an effect in meatspace that can be received by the sense organs. If we can come up with direct input to go with the direct output, some version of “singularity” would be a few years to a decade away. At that point you could just close your eyes and enter into an alternate reality like Second Life, but with full sensory input.
I don’t know how far away that is but 10 years ago I never would have guessed that we would be working the bugs out of technologically based telepathy by this time.
Not to rain on your parade, but they did not, in any way, shape, or form, discover the “neuroscience” for general intelligence.
They did the equivalent of touching the hood of a car that is running, noticing it’s warmer than the trunk, and figuring correctly that something under the hood is related to the ability of the car to move forward.
Thank you for raining on my parade. However, that experiment did validate the P-FIT theory, so the brain areas and the communication between them are an area to look into for further development. And it is only a matter of time before we figure out how to increase or artificially mimic human intelligence.
I think we have various brain interfaces like an artificial hippocampus, cochlear implants, etc. So the concept of using technology to increase communication between brain areas might not be ‘that’ far off.
I agree we will get there. But we are a loooooooooooooooong way from it.
Copying the structure of something is far from understanding how it works. I think we will have artificial intelligences running on brain simulators long before we construct one from scratch. We can give this kind of AI more memory, and more speed, but it won’t be fundamentally different from human intelligence, and will not create a singularity.
I took AI 40 years ago, and was told then that we’d have it real soon now. Most of the applications which were vague research topics then you use every day, but I don’t think we’ve gotten any closer to AI.
Waitaminnit – are we?! I thought Brain-Computer Interface was still way short of putting information into your brain, or reading information from your brain beyond a very gross level.