Evidence of the Technological Singularity

Ok, let’s assume you have to convince somebody that the technological singularity is going to happen in the next 30 years, using technological/social advances that have happened over the last 30 years as evidence that the rate of technological change is picking up.

The singularity implies that the rate of change at some point will be so fast that mere human minds won’t be able to keep up with it. Companies now make 5-year plans; in the future they will make 5-week plans; in a true singularity, a 5-day plan might be considered “out there.” Actually, a comparison like this is misleading … if the technological singularity is anything, it’s unknowable.

I’m not interested in debating here whether or not the Singularity will occur. I think it’s possible but unlikely based on what we know at present, but that’s beside the point; besides, most threads about the singularity seem to become debates about whether it’s possible, and I don’t want to retread that tire. Basically I was thinking it would be fun to talk about current gee-whiz technology that seems to be evidence of technological leaps coming faster than expected. Arguing that a specific advance is not a game-changer, or not really a leap at all, is fine; I’m not looking for just rah-rah stuff.

I’ll list a few examples off the top of my head:

Smart phone technology: the ability to contact anyone, anywhere, and to make videos and photos, play video games, access the Internet, read most books in existence, embarrass your friends and family on a global scale … all in a device the size of a credit card. Fuck yeah, that’s singularity stuff.

Google Glass – what was that clunky credit-card-sized thing all about, anyway?

Google contact lenses with built-in cameras – OK, this is getting ridiculous … but it’s real.

Feel free to add your own evidence. Or dispute mine. Or others. Starting … now. Hurry before the singularity gets here!

Funny, I came here to start a thread entitled, “Will we have invented all of our technology within the next 1,000 years?”–and I see this thread.

I think it’s pretty much assumed that strong AI (i.e., conscious, autonomously thinking machines or something pretty close to that) is necessary for the Singularity to happen. The whole point is the acceleration of change, and presumably machine minds would take over and begin innovating at a pace far beyond the capability of humans.

The acceleration of other types of technology (i.e., not contributing to the goal of strong AI), therefore, is presumably not all that relevant as “evidence of the technological Singularity.”

In terms of developing strong AI, I think it’s fair to say that we are making very little progress.

But let’s talk about other technology too in the spirit of your post. Personally, I think we are living in a time of rather plodding technological advancement. The leap in the personal computing experience from 1980 to 1990 was huge, a game-changer. The leap from 1990 to 2004, with the WWW coming on the scene, was even bigger. Plus, better graphics, wifi, and so on–lots of stuff we couldn’t do without today. But is the difference between 2004 and 2014 all that great? Yes, there are applications that people would not want to do without, such as Facebook and streaming video, and there will be many more to come, but the experience attributable to hardware is not all that different.

Cell phones, tablets, and smartphones, the same thing in many respects.

Medicine. Back in the 70s, my mom used to talk as though we’d be immortal around this time, what with the advances she had seen herself. We are a long, long way from eliminating cancer and heart disease.

“Plodding” does not mean “negligible.” For example, in our lifetimes, we are going to see pretty much all cars become electric, fuel cell vehicles, or even something more advanced. Maybe we’ll knock out a major disease in the next 30 years. Maybe, just maybe, we’ll see another game changer like clean, cheap fusion or something we can’t even imagine now.

But no–I don’t see evidence of wildly accelerating technological progress. I actually think the big, big changes we’ll see in the next two decades are social and economic, not technological.

Self-driving cars: While fully autonomous cars are probably still at least a decade off, tons of cars on the road already have partial autonomy, and it’s only going to increase from there. The technology is cheap–it may even be a net savings when all cars have drive-by-wire. The AI needs to improve a bit but it’s getting there.

Robots that move around on legs: See BigDog and others. Fact is, robots can’t really navigate most human spaces properly without legs (instead of wheels). Cheap sensors (accelerometers, gyros, etc.) and improving computing power are driving this.

Batteries: These are a slow-moving tech compared to anything computing-related, but even here we’re seeing massive improvements, both in energy/power density and in form factors. The improvements are driving all sorts of new tech, like quadcopters that are viable for commercial applications.

Speaking of which–drones: Quadcopters and other drones are going to be all over the place. Again, cheap sensors and computing power, as well as improved batteries, make them possible. Won’t be long before they’re buzzing around almost continuously.
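To give a flavor of why the cheap sensors matter so much for the robots and drones above: the machine constantly has to know which way is up, and the standard cheap trick is to fuse a gyro (precise but prone to drift) with an accelerometer (noisy but anchored to gravity). Here’s a minimal complementary-filter sketch in Python; the numbers are made up for illustration, not pulled from any real robot’s firmware:

[code]
import math

def complementary_filter(angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Estimate pitch angle (radians) by fusing a gyro and an accelerometer.

    gyro_rate: angular velocity from the gyro (rad/s) - precise, but drifts.
    accel_x/z: accelerometer readings (in g) - noisy, but anchored to gravity.
    alpha:     how much to trust the gyro; the remainder corrects drift.
    """
    gyro_angle = angle + gyro_rate * dt          # integrate the gyro
    accel_angle = math.atan2(accel_x, accel_z)   # tilt implied by gravity
    return alpha * gyro_angle + (1 - alpha) * accel_angle

# Inside the balance loop this runs at, say, 100 Hz (dt = 0.01 s):
angle = 0.0
for gyro, ax, az in [(0.10, 0.02, 1.0), (0.10, 0.03, 1.0), (0.0, 0.05, 1.0)]:
    angle = complementary_filter(angle, gyro, ax, az, dt=0.01)
print(angle)
[/code]

The point is that this loop has to run hundreds of times a second, every second, which is exactly what cheap MEMS sensors plus cheap computing power made practical.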

The question does not make sense. Pretty much by definition, new stuff that comes along post-singularity will not be stuff that present-day humans can understand or even, mostly, see the point of. Even assuming that post-humans continue to be part of the “progress”, and the machines don’t just kill us, or start to ignore us (treat us much like we treat ants, for instance), or enslave us, or put us in the Matrix, or whatever, post-human experience will be beyond present human understanding.

You can count me among those who think nothing like this is ever going to happen. The concept of a “singularity” is based on a whole slew of very questionable assumptions, notably assumptions about the nature of intelligence.* However, if we follow the OP and grant that it can or will happen, it is built into the concept that we, as we are now, cannot understand much at all about what it would be like.

____________________________
*In my view, genuinely intelligent, autonomous, and even conscious machines are probably possible in principle, but it is not likely to be possible for their intelligence to ever be very significantly greater than that of the most intelligent humans. Even if it is possible in principle to have radically more intelligent artificial beings, it is unlikely that either humans or human-level machine intelligences could understand the radically new principles of cognition that would be needed in order to design them.

It is a mistake to think of intelligence as a continuous function that increases steadily between rocks and humans, and could potentially increase to infinity. Its evolutionary increase has been more like a series of phase transitions as whole new types of cognition have developed via random evolutionary tinkering. We now know of maybe three phases - rocks, dumb animals, and humans - with some amount of variation in intelligence level within the second two that is nevertheless dwarfed into insignificance by the differences between the phases themselves. (I am prepared to concede that my category of “dumb animals” may in fact comprise several intelligence phases. There may, but also may not, be differences in type between jellyfish intelligence and fly intelligence, for example, or even between mouse and monkey. That is an empirical question, but one which science has scarcely yet started to address.)

There is no particular reason to think that “higher” phases are possible (any more than we expect there to be an infinite number of possible states of matter), and every reason to think that even if they are possible, we humans will not be able to understand even the next phase in such a way that would enable us to build something that embodies it. If it happens (a big if), it will happen once again because of the random tinkering of evolutionary processes, not because of either human- or machine-instigated technological advance.

Yes and no. We’re making amazing strides from the bottom up. Dr Strangelove lists a few of many. Others include Siri and half of the stuff that Google does. Right, that stuff isn’t strong AI by a long shot, but I won’t be too surprised if we hit on strong AI without knowing quite how we did it. After all, that’s how evolution did it.

But yeah, I think we’re still a long way off.

But that graphene stuff, is that sci-fi come to life or what? If it pans out, that is. But I won’t be surprised if it’s even more revolutionary than plastic.

However, strong AI is indeed the key component, and while stuff is getting way smarter in odd ways (mostly “low level” smarts, not what we normally call “intelligence”), I’ll be very surprised if we can pass the Turing test within a decade.

(BTW, I have a prediction. My guess is that the first system that passes the test will be something that most people will agree isn’t really what we’d call intelligence, but rather a very slick version of Weizenbaum’s Eliza. For example, the child of Eliza and Watson. For those who don’t know, Eliza was a relatively simple linguistic analysis program capable of giving human-sounding responses, usually imitating a psychologist: “You mentioned your mother. Tell me more about your mother.”)
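For anyone curious just how little machinery that takes, here is a toy Eliza-style sketch in Python; my own illustration in the same spirit, not Weizenbaum’s actual program:

[code]
import random
import re

# A few Eliza-style rules: a regex pattern plus canned "reflections".
RULES = [
    (r".*\bmother\b.*", ["Tell me more about your mother.",
                         "How do you feel about your mother?"]),
    (r"i am (.*)",      ["Why do you say you are {0}?",
                         "How long have you been {0}?"]),
    (r"i feel (.*)",    ["Why do you feel {0}?"]),
]
DEFAULT = ["Please go on.", "I see.", "How does that make you feel?"]

def eliza(utterance):
    text = utterance.lower().strip(".!?")
    for pattern, responses in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(responses).format(*match.groups())
    return random.choice(DEFAULT)  # no pattern matched: stall convincingly

print(eliza("I am worried about my mother"))
[/code]

No comprehension anywhere in there, and yet people famously poured their hearts out to it. That’s why I expect the first Turing-test “pass” to be a descendant of this trick rather than a mind.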

When we can read most books that don’t even exist, that will be damn singular technology!

:smiley:

The original graph that started the whole Singularity thing was not contingent on a strong AI. Furthermore, I do not think a strong AI is necessary. The Internet has accelerated the rate of technological change because with it scientists and engineers the world over can communicate much more readily. The rate of change would probably be a lot faster if it weren’t for nationalism … every nation wants to keep the edgy stuff that’s related to advances in weapons (and god knows what else) to themselves, and also spy on the other nations (hello, NSA!) to find out what they’re up to, and keep THEIR stuff secret too. Who knows what kind of goodies are being produced in our weapons labs right now? Remember, the Internet itself was the product of a Defense Department project.

The reason I don’t think a strong AI is necessary is that computers hooked up to the Internet are not the limit of how well the human-computer interface can work. I use news agents to get information on topics I’m interested in, and I’m sure scientists do too. As the news agents and the human-computer interface become more and more sophisticated, it will be more and more like you ask a question and the computer serves up the answer on a platter. So a materials scientist might be sitting there thinking about ways to create superconducting ceramics that work at room temperature, and wondering if there might be a nice molecule that would help the various elements of the ceramic align properly, and the computer would be serving up candidates, with projections of how they behave at certain temperatures, etc., instead of the researcher having to break off and dig out the info him- or herself. What would have taken much longer, and maybe never happened at all, happens.
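To make that concrete, here’s the skeleton of what I mean by a news agent, stripped down to a toy in Python. Everything in it (the terms, the weights, the abstracts) is invented for illustration:

[code]
# Toy news agent: rank incoming abstracts against a standing interest profile.
INTERESTS = {"superconducting": 3.0, "ceramic": 2.0, "room-temperature": 2.5}

def score(abstract):
    words = abstract.lower().split()
    return sum(weight for term, weight in INTERESTS.items() if term in words)

abstracts = [
    "A new room-temperature superconducting ceramic candidate",
    "Quarterly earnings report for a shipping conglomerate",
]
# Serve the researcher the most relevant items first.
for item in sorted(abstracts, key=score, reverse=True):
    print(score(item), item)
[/code]

Crude as that is, every incremental improvement to it (synonyms, context, projections) narrows the gap between “the researcher digs” and “the answer arrives on a platter.”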

That’s a matter of small incremental improvements, but the cumulative effect is an effective leap in human intelligence. It doesn’t require any kind of AI at all, just better and better interfacing. In fact, the advances that have occurred in prosthetics, with human nerves able to control and respond to electronic devices, and our increased ability to monitor brain states, point toward the possibility of a real DNI (direct neural interface) between human and computer. That could be a game changer too. Hell, the Oculus Rift headset could be a game changer for scientific innovation if being in a more immersive environment makes conceptualizing easier.

The thing is, nothing ever changes … until it does.

Based on personal experience, I disagree. Second Life blew my mind. And many others’. It really is another world.

Modern smart phones are waaaay past the old cell phones. Light years. Once again, I disagree.

I hate to be so disagreeable (well, not really) but once again, I disagree. I am about to go in for a routine procedure (colonoscopy) that killed my grandfather back in the day (it was flubbed; he died of septicemia as a direct result of the procedure, rather than of the colon cancer that prompted it). It’s now so routine that it’s not even done in a hospital any more, let alone requiring a hospital stay; it’s an outpatient procedure. Most of my adult relatives have already had one. Heart disease, many cancers, and of course AIDS are now treatable, not a fast ticket to the morgue.

So, advances are being made, and will be made. Hell, we can clone from somatic cells rather than just embryonic cells; that’s HUGE. You want to call it plodding, sure, but I suspect that’s because most of the low-hanging fruit, as far as easily treatable illnesses go, has already been harvested; we’re dealing with the really intractable stuff now.

A fair estimate. But once again, nothing ever changes … until it does.

I, like Evil Captor, want to disagree that strong AI is necessary to create a technological singularity, if such a thing is even possible. I would argue that having a direct computer/human brain interface with massive parallel networking and good search/organization/communication algorithms could do much the same thing. 1,000,000,000 brains hooked together through computers in a high-speed self-organizing network could do amazing things.

AI which can understand written language and answer questions. Watson on Jeopardy was impressive. Granted, we aren’t anywhere near a singularity right now because of it, but in Kurzweil’s book he talks about how machines which can read and comprehend written language (to build their knowledge base), comprehend questions and offer solutions would be a big step in bringing about the singularity.

I don’t think the OP is providing examples of the “technology singularity” AKA “transcendence” or “nerd rapture”.

Smart phones with cameras -> Google Glass -> Google contact lenses are just iterations of the same digital camera and web technology. They are predictable and easy to comprehend.

By definition, the technology singularity is when technology advances to a point where we humans can no longer comprehend it. Typically requiring AI that is smarter than the human brain, or perhaps the ability to link human brains into computers and each other.

For example, Google contact lenses will be an interesting development, but as I said, it’s all basically recording and sharing videos. Hardly a post singularity event that will fundamentally change humanity as we know it.

OTOH, the ability to record, transfer, copy and erase human memories to and from a human brain or computer is something that would fundamentally change the very nature of humanity. What does it mean to be “you” if someone can copy your brain into another person (or robot body), add, replace and combine memories like one were editing YouTube videos?

A lower-level singularity could be possible without “conscious” machines. We already have non-conscious functional “expert routines” that do things like voice recognition.

Possible “evidence” for the singularity might be computer designed microchips. Human engineers are no longer capable of drawing up the schematics for processor chips. Humans no longer build machines: we build machines that build parts for other machines…sometimes to the ninth generation.

Smart phones, by themselves, aren’t evidence. The fact that smart phones went from unheard of to ubiquitous in ten years – that’s evidence.

We are also hitting big walls as well. Voice recognition is a good example:

http://blog.spoken.com/2010/07/voice-recognition-is-close-enough-good-enough-for-your-customers.html

Here is Robert Fortner’s original article, in which he starts off by saying, “The accuracy of computer speech recognition flat-lined in 2001, before reaching human levels. The funding plug was pulled, but no funeral, no text-to-speech eulogy followed.”

http://fursman.com/activities/news/industry/74-rest-in-peas-the-unrecognized-death-of-speech-recognition

Yes, it’s possible to speculate that strong AI will just eventually happen. But are we doing what we need to be doing now to make it happen? I would say no.

The argument of Kurzweil and his acolytes has been pretty darn primitive: According to Moore’s law, we’ll soon have infinite processing power. So we’ll just model the human brain in silicon–and then we’ll have a human brain, only faster and better! (Or we’ll build a neural net, or something. The argument comes down to saying we’ll find a way around the fact that we simply don’t know how to write strong AI software–at all.)

And now Moore’s Law is about to end anyway, thanks to electron tunneling.

I don’t actually think the Turing test is a very good test, since an AI that is better than a human in a way that you’d expect an AI to be would be instantly outed as an AI. For example, if I type in, “Give me pi to 100 digits,” and then the answer instantly appears in the chat window without there having been time for a human to type it or even cut and paste it, then you know you’re dealing with an AI, and it has failed the test. Even if you have the AI fake human-level capabilities, humans are very, very good at finding subtle errors that could out the AI, even if it were true strong AI.
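Half-joking, but the timing tell alone could be mechanized. A toy judge-side heuristic in Python; the parameters are pure guesses on my part, nothing rigorous:

[code]
# Toy "timing tell": a human can't type 100 digits of pi in a fifth of a second.
HUMAN_TYPING_CPS = 8.0   # generous characters-per-second for a fast human
THINKING_FLOOR_S = 1.0   # minimum time a human needs before replying at all

def looks_like_a_machine(reply, elapsed_s):
    min_human_time = THINKING_FLOOR_S + len(reply) / HUMAN_TYPING_CPS
    return elapsed_s < min_human_time

pi_100 = "3." + "1415926535" * 10            # stand-in for a long exact answer
print(looks_like_a_machine(pi_100, 0.2))     # True: outed instantly
[/code]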

How will we know when we have universal strong AI? It’s not that hard to imagine. I’m a translator, and it’s just a fact that translation software will not be able to replace a human translator until strong AI is realized. I don’t use translation software at all, since it can’t even help me. (Translation software is great when you don’t know a language at all and just need to know the general meaning.) Strong AI would be able to translate any document perfectly, imitating any writing style and even appending comments that deal with errors and inconsistencies in the original document.

Or let’s say you wanted to build a building. You just say to the AI, this is generally what I want to do: give me some plans. And it would spit out a brilliant plan in a nano-second. If you wanted to make changes, it could alter the entire plan in a nano-second–or argue back why the changes are undesirable.

Meanwhile, you’d have fully automated, strong-AI based physics and chemistry labs working night and day to expand technology itself.

That’s what strong AI could do, and that’s why, if it existed, we’d encounter the Technological Singularity. In a sense, it would be like a massive intelligence “infecting” the entire planet, and the results would be entirely unpredictable. I don’t think it would be a good thing for human beings.

People in this thread are suggesting we could have a singularity without strong AI of this type. I think they are using the word in a different sense and talking about a different kind of future. That could be an interesting discussion in its own right, but I don’t think it’s the same thing.

I agree, but I remember from Kurzweil’s The Age of Spiritual Machines (which I read in 1999–basically none of his predictions have come true), he had a hockey stick graph where, basically, once strong AI is created you get almost infinite acceleration of technological progress. That’s the whole point of using the word “Singularity”–a near-infinite rate of change.
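For what it’s worth, there’s a simple toy model behind that hockey stick (my own gloss, not Kurzweil’s actual math): if technology improves itself superlinearly, you don’t just get exponential growth, you get a blow-up in finite time.

[code]
% Toy model: technology T improving itself superlinearly (exponent p > 1)
\frac{dT}{dt} = k\,T^{p}
\quad\Longrightarrow\quad
T(t) = T_0\,\bigl(1-(p-1)\,k\,T_0^{\,p-1}\,t\bigr)^{-1/(p-1)}
% which diverges at the finite time t* = 1/((p-1) k T_0^{p-1}).
% With p = 1 the same equation gives plain exponential growth:
% fast, but never infinite.
[/code]

So the whole argument compresses into one question: is the feedback exponent really greater than 1? Kurzweil assumes yes; most of his critics, in effect, assume no.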

If your thesis is that we are making technological progress–heck yeah, I agree with that. I think we are seeing a rate of change that is a lot less than 1850-1900, or 1900-1950. I think technological advancement has been relatively slow since 1950. Cell phones, the Internet, home computing, a few important medical advances. Yet these are not as big in my opinion as, say, the telegraph, telephone, automobile, airplane, and antibiotics, nor do they accelerate change as much. I think the Internet is a huge accelerator of social change–because it gives the individual the power to learn from and broadcast to the entire world–but not really of technological change (though yes, it has some influence).

Sure it requires AI. Google is a bunch of AI. It’s not strong AI, but it’s AI.

I don’t think your example is really a good one. Materials scientists who have been in the biz awhile probably already know best practices; and when they don’t, simply talking to other colleagues would be the quickest way to get a question answered. The advance you are talking about sounds like a search engine. Well, they probably already have access to a journal article database. I’m sure that could be improved and made more comprehensive, but it’s current technology. Unless you want the system you are proposing to add value in the form of thought (which would be strong AI), I’m not sure what kind of improvement you are suggesting is possible.

I agree that at any time a new technology could come along that could totally change life as we know it. For instance, teleportation. Imagine current transportation ceasing to exist, and I could go have lunch in London and be back in my house in five minutes. It would irrevocably change society overnight. But even that big an invention would not accelerate the pace of technological change beyond greasing the wheels of communication and distribution. It would not create the hockey stick–whereas strong AI would (in theory).

But that’s not what the Singularity is about.

[quote]Modern smart phones are waaaay past the old cell phones. Light years. Once again, I disagree.[/quote]
I didn’t make my point clearly at all. What I meant to say was that there was a huge burst of progress when Apple put out the iPhone in 2007, and there have been some nice improvements to smartphones since then, but… they are small computers with touch screen interfaces. They are not really going to be able to do stuff beyond what a good laptop can do. Are smart phones going to be massively more advanced in 2024? In terms of hardware, I doubt it.

I work as a medical interpreter part of the time, so I’m aware of many of the advances that have been made in my lifetime. Heck, open heart surgery was a new thing in the 70s.

The flip side of that is that a lot of the advances keep you alive without curing you. I saw my dad die a horrific death by heart disease because he was “saved” (after having had open heart, numerous angioplasties, etc.) when his blood pressure dropped to like 64 over something, and he made it to the hospital just in time. A lot of people are “saved” but then die slow deaths or suffer from low QoL for the rest of their lives.

Whereas with something like inventing antibiotics, you just cured millions of people who will now be just fine. There have been some good things in my lifetime like that, like the chicken pox and HPV and other new vaccines. But, contrary to what were common expectations not too long ago, we haven’t nuked a major killer in a while. (I think medical technology progress was always pretty incremental, however. It just took us until the 19th and 20th centuries to be ready to pluck some pretty low-hanging fruit. My guess is that disease of all types and even aging will be completely eliminated in the next couple hundred years…)

We already have that network: it’s called the Internet, and a lot more than 1 million people are participating. Now, if you are suggesting a qualitatively different way of connecting those brains or manipulating their output, then how is that really different than developing strong AI?

The bottleneck in strong AI currently is not processing power. We don’t have a strong AI program, and we don’t know how to write one. Only if we knew how to write one would we know what kind of processing power we would need. (Yes, processing power could be a bottleneck, but no one is waving around a strong AI program and saying, “If only I could get a billion teraflops, this would be feasible right now!”)

“Lower-level singularity” is a contradiction in terms. “Singularity” means something specific, not just “a really high level of technology.”

I worked in the semiconductor industry for a few years and got a direct impression of how things work. What you say is true, but it’s actually an argument for the slowing of the rate of change.

You’re right, it’s really, really hard to create a new chip now, especially a new chip that is much more advanced than the previous generations. And with each new generation, it only gets harder. It’s not just the design. I worked for a company that made machines that cut chip wafers into chips (and grinders and polishers for the wafers, etc.). That’s one of the cruder steps in chip creation, and it alone was pretty darn complicated. But you have many different companies each supplying a very high-tech process to make an Intel chip possible.

In a world with strong AI, the AI would design the next-generation chip in a few seconds, robots would build all the factories, etc. And it would just massively accelerate. We humans are truly at the point where we are creaking along trying to make the latest and greatest semiconductors. Creating a new-generation chip does not make it easier to design the following generation, since processing power isn’t the bottleneck.

Smart phones have brought about a big social change, but they have not accelerated the pace of technological development beyond greasing the wheels of communication just a bit (a scientist can pull out a cell phone more conveniently, look up articles online while waiting in the doctor’s office, etc.).

Not at all. The development of FTL would be a higher-level singularity than the development of anti-gravity. Both would change our existence in ways that can’t be predicted or modeled. One is just more profound than the other.

Debated. I think that the word is not perfectly defined, nor meaningfully definable beyond certain blurry boundaries. I posit there can be more than one kind of singularity, with relative levels of effect. I hold the word to mean a scientific, technological, engineering, or even social revolution that has effects beyond anyone’s ability to predict and which promises further changes, leading to even more unpredictable results.

The development of fusion power would be a low-level singularity. A Star Trek replicator would be one, too. A remote-viewing technology, allowing an operator to observe any place on earth, would be another. These would change nearly everything, and would “keep on giving,” making more and more changes in our lives.

(Another kind of singularity is an utter collapse, as in the Gray Goo model – the Stargate variety of replicator!)

In my opinion, the Industrial Revolution was a singularity, and the innovation of Agriculture was also: the first, in fact.

Is it? I’m in favor of accelerating the rate of change. I consider the most realistic singularity models to be beneficial.

I don’t see that this necessarily follows. It might take a few weeks, or even a few years. You seem to be inferring specific parametric data without justifying it.

Lower-level non-conscious AI might still perform incredibly revolutionary services for us. For instance, context-sensitive common-language parsing systems – the kind that can tell whether you mean “C” or “Sea” – could do library searches for us. Imagine a context-sensitive Google! It wouldn’t be conscious…but it would certainly change the game.

Agreed: I didn’t say that smart phones constitute a singularity, only that they are a kind of evidence that some forms of singularity might be possible.

Well, let’s start here, as I basically agree! Both are cases of technology accelerating the pace of technological change.

I agree also that the term “Singularity” does not have a definition that everyone agrees on. I am trying to capture the essence of the term as Kurzweil et al. use it, and I think the core concept is technology that massively (to the point of infinitely) increases the rate of technological change, resulting in the characteristic hockey-stick graph.

I also agree that any number of inventions could radically change life as we know it overnight, and probably one or more will come along in the next 50 years. Those inventions will not necessarily quicken the pace of technological change and could even slow it. An example off the top of my head: an electronic device you put on your head that gives you a high with 10x the pleasure of heroin and no side effects. Such a device could create a nation of people tuning out, but it would probably result in a slower rate of change as people became satisfied with just doing nothing.

Sorry, again I was unclear. I meant that your description of changes in the semiconductor industry was actually evidence that the rate of change is slowing in that industry and not accelerating.

I’m not saying that that is my personal belief but am merely reflecting the essence of what Kurzweil et al. say.

I think that lower-level non-conscious AI has already changed the game, so there’s no argument there.

I think the key step, the point where the Singularity really happens, is when machine intelligence is able to take over its own development program. I don’t think consciousness is even a necessary condition, but autonomy is. I think that such a technology would actually be too dangerous to implement.

I’m not sure what kind of evidence they provide other than that technological progress is possible, which we already knew.

It’s getting there already. The first time I was truly impressed with voice recognition was when doing a voice navigation search on Google Maps. I asked for a “Thai restaurant” and it actually came up with “Thai” and not “tie”. This is an easy case, but I suspect one could go pretty far just looking at local context.

More interesting would be if Google could keep a simple model of my mental state. If they can learn (from past interaction) that I’m more interested in programming languages than the ocean, they can better infer “C” vs. “sea”.
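That per-user prior is almost trivially easy to sketch; the hard part is everything around it. A toy version in Python, with counts and topics invented for illustration:

[code]
# Disambiguate homophones ("C" vs. "sea") using a per-user topic history.
user_history = {"programming": 42, "ocean": 3}   # invented interaction counts

HOMOPHONES = {
    "si:": [("C", "programming"), ("sea", "ocean")],  # one sound, two senses
}

def disambiguate(sound):
    # Pick the sense whose topic this user has engaged with the most.
    word, _topic = max(HOMOPHONES[sound],
                       key=lambda cand: user_history.get(cand[1], 0))
    return word

print(disambiguate("si:"))  # "C" for this user; "sea" for an oceanographer
[/code]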

This is no fun as a debate: we seem to agree, by and large! :wink:

My only thought here is that some very small implementations of self-modifying code might be so efficient and cost-effective that corporations will explore them. As each such innovation turns out to be highly profitable, there will be an incentive to deploy more of them.

It’s search algorithms at first, but soon, self-driving cars… It’s a slippery slope effect, where each inch we slide seems to make our lives better. Before long, it’s been incorporated into our economy, and we not only can’t back out again, we don’t even really want to.
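Even the most boring version of this is easy to picture: a program that tunes its own parameters from profit feedback, with no human in the loop. A toy sketch in Python (the profit function is invented, and this is “self-modifying” only in the loosest sense):

[code]
import random

def profit(price):
    # Stand-in for measured revenue; pretend the market's sweet spot is $7.
    return -(price - 7.0) ** 2 + 50.0

price = 2.0
for _ in range(1000):
    candidate = price + random.uniform(-0.5, 0.5)  # propose a small change
    if profit(candidate) > profit(price):          # keep it if it earns more
        price = candidate
print(round(price, 2))  # converges near 7.0, untouched by human hands
[/code]

Each such loop is individually dull and individually profitable, which is exactly why they’ll multiply.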

What gives me most comfort is the belief (call it faith) that the AI system would soon come to realize that what is good for us is also what’s good for it. A variant of Asimov’s First Law would evolve, out of mutual benefit.

Sure, Skynet could launch all the nuclear missiles – but war would degrade its own internal communications and data storage as catastrophically as it degrades ours. Do we burn down the house when the cat makes a mess near the sandbox?

Have you ever read John T. Sladek’s hilarious farce, “The Reproductive System,” (original title “Mechasm”)? It’s all about a runaway AI, and skirts the “Gray Goo” scenario.

Not sure why Kurzweil keeps getting credited with being the author of the singularity. Vernor Vinge wrote about it nearly a decade earlier in his novel Marooned in Realtime (utterly brilliant, BTW), and he was referring to a ’50s observation attributed to mathematician John von Neumann, who described the Singularity as the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”. Vinge saw it as an apex of complex technologies interfacing with the human mind, but ultimately skipped over the moment itself, instead showing people who left the period shortly before the singularity (I can’t explain what leaving means without describing large parts of the novel) as being exponentially more sophisticated than earlier leavers.

However, nobody seems to describe what is driving this acceleration.