Video calls can be done on some mobile phones. Sure, it might not be the video phone imagined in the past, but in reality it's even more advanced: a video phone you can carry around in your pocket!
The singularity sounds more like fiction than anything else. Right now, what's holding back AI is our ability to code it, not a lack of processing power. People like Vinge are in the business of selling you fiction, and people like Kurzweil are pill-addicted nutters.
If you look at where the technological industries are going, you can see a move towards automation and robotics. You're seeing more internet-based technologies. You're seeing more cheap disposable electronics. You're seeing more "green" technologies. (Funny how the last two conflict.)
You're not seeing lifespan extensions, nor are you seeing the Hollywood-style AI your heroes claim is just over the horizon. The future is always more mundane than our fiction writers and futurists have envisioned.
Let's flip your question on its head and see what a futurist from the '50s or '60s would have thought today would be like. Let's pick Buckminster Fuller. He assumed all sorts of zany things would come to pass: floating cities, domed cities, three-wheeled cars, permanent space stations, etc. None of these things have come to pass, and almost all futurists have missed out on networking technologies.
Not to mention the social aspect of technology. We'd be farther along with stem cells if Al Gore had been elected president. Some tech may never come about because of social and political factors that are impossible to predict.
Simply put, you're probably not able to make any accurate, specific predictions for 40 years into the future. For all we know, the next 40 years will leave us in a barren post-nuclear-war wasteland.
Ditto what flodnak and HorseloverFat said.
I want my flying car! Or at least a jetpack!
I think what great technological innovations had going for them 100 years ago, which they don't have now, is a person's name to go along with them. Of all the revolutionary technologies of the past one hundred years, few are associated with a single person. Because of this, it may seem that the pace of technology is slowing when it is probably speeding up. I do a little amateur blacksmithing, and one thing I have learned from it is that the more tools you make, the easier it becomes to make new tools.
The modern world lacks the single inventor for a couple of reasons:
- The complexity of things means lots of capital and lots of labor. It's rare for a single person to make a big change; things tend to be incremental, and complex things tend to be done by teams.
- The lone inventor was a myth even in its heyday. There is no shortage of engineers who never got credit because people like Tesla and Edison were such shameless self-promoters. Granted, capitalism at the time pretty much demanded a shameless self-promoter so the company would be successful, but at the end of the day a lot of the work was divided among teams.
Har har. Yeah, not exactly GQ caliber, I admit.
Uhh, that's not Moore's law. Moore's law (here's the obligatory Wikipedia entry) states that the number of transistors in a chip doubles roughly every eighteen months (the Wikipedia entry says every two years, although that's not the way I remember it), not that the speed of those transistors will increase. In fact, the speed of microprocessor chips really hasn't increased much lately, even though Moore's law has pretty much stayed intact. What we've been seeing lately is things like dual-core and quad-core processors, which can use those extra transistors.
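For the curious, here's a rough sketch of what that doubling rule implies for transistor counts (the starting count, years, and two-year period are illustrative assumptions, not figures for any particular chip):

```python
# Rough Moore's-law projection: transistor count doubles every ~2 years.
# All numbers here are made up purely for illustration.
def projected_transistors(start_count, start_year, target_year, doubling_period=2.0):
    """Extrapolate transistor count assuming a fixed doubling period (in years)."""
    doublings = (target_year - start_year) / doubling_period
    return start_count * 2 ** doublings

# Hypothetical chip with 100 million transistors in 2008, projected to 2018:
print(f"Projected count: {projected_transistors(100e6, 2008, 2018):,.0f}")  # ~3.2 billion
```

Note that nothing in that arithmetic says anything about clock speed, which is exactly the distinction being made above.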
Here’s a look at two companies’ prophetic visions:
Note that the exponential growth predicted by Kurzweil (and others) applies only to information technologies, and not to all technology. Examples he brought up are brain scan spatial resolution, DNA sequencing, processor clock speed (duh), and so on. That is an important distinction.
I recently watched this presentation by him at the Singularity Summit. The video is in three parts. The other parts can be found in the panels to the right.
In the presentation he claims that by 2018 (if my memory serves) computers will be sufficiently powerful to simulate all regions of the human brain. Now, I haven't studied this closely, but he claims that several brain areas already have functioning computer simulations. And that is using the current transistor paradigm, while there are upcoming paradigms (molecular computing?), some with functional products due to hit the market, that could continue the exponential growth after transistor technology hits a wall in the coming decades.
Of course, physically there is a limit, but there seems to be a long way to go before that happens.
There’s a big gap in this reasoning. Suppose we do manage to create a robot that’s smarter than a human. It’ll then have taken humans about a hundred thousand years to create something superior to us. But the robot’s better than us, so maybe it only takes fifty thousand years to make something better than it. That’s not such a big deal.
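Just to put rough numbers on that (the 100,000 and 50,000-year figures are the made-up ones from the post above; the perpetual halving of each generation's build time is an assumption for illustration only):

```python
# Toy timeline for the "each generation builds its successor faster" idea.
# Assumes, purely for illustration, that every generation takes half as long
# as the previous one to produce something better than itself.
first_step = 100_000  # years it took humans to build the first smarter robot
total = 0
step = first_step
for generation in range(1, 11):
    total += step
    print(f"generation {generation}: {step:,.0f} years this step, {total:,.0f} years elapsed")
    step /= 2
# Even with perpetual halving, the running total never exceeds 200,000 years,
# but the first few steps are still tens of thousands of years each.
```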
Looking at it another way, you could also argue that the singularity came and passed a long time ago, and nobody noticed. All modern computer processors are themselves designed on computers, and couldn’t be designed otherwise. At some point in the past, humans built a computer so good that it could be used to build computers better than humans alone could build. That did lead to progressively better and better computers, but it did not lead to any sort of world-ending event.
I don’t think anybody is speaking of a world-ending event, just that we will pass the “knee” of the exponential curve and see faster improvements, beyond which it will be hard to know what things will be like. I haven’t read any books by these people, but that’s the impression I get from watching a few of their presentations.
I have before me a copy of FUTURAMA, a booklet to accompany GM’s exhibition at the 1939-1940 World’s Fair.
Here are some highlights from the WONDROUS WORLD OF THE DISTANT FUTURE … 1960!
Videophones are a great example. By the time 2001 came out they were old hat - I used one at the NY World's Fair in 1964, and we gave two away as prizes when I was working for Bell Labs in 1993. We were also doing a lot of video conference calls then.
The infrastructure for phones is going to come for free as everyone moves to VoIP. The question is: who wants them? We used to do video calls to our daughter in Germany over Skype, but we've switched to voice only, except when there is an actual reason to use video. Many of the conference rooms at work have video conferencing, but no one ever uses it. What people do use is something like WebEx, where the important visual information (not someone's face) gets transmitted.
Video calling is a nice technology to have for free. I think any advance is going to have to be both feasible and wanted.
Yeah, but that's almost a meaningless statement. Right now we can virtualize a mouse's brain (or parts of one on different supercomputers), not exactly in real time, but it can be done. Essentially it's just a model of neurons in a particular structure. They fire, nothing really happens. It's not a mouse. This is like getting my brain, putting it through a blender, and calling it a human brain. That's technically correct, but it's not a brain. We have the parts but not the "song."
AI is a software problem, not a hardware problem. Mass virtualization of neurons isn't going to bring about some Hollywood-style AI. If this is what the "singularity" is going to be run on, then it's more of a fantasy than I thought.
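To make the "parts but not the song" point concrete, here's a toy sketch of what a neuron-level simulation boils down to at its simplest: a basic leaky integrate-and-fire model with made-up parameters (real projects are enormously bigger and more detailed, but the principle is the same - units accumulate input and fire, and nothing else happens):

```python
import random

# Toy leaky integrate-and-fire neurons, with made-up parameters, purely
# to illustrate what "a model of neurons that fire" amounts to.
NUM_NEURONS = 1000
THRESHOLD = 1.0
LEAK = 0.95          # fraction of membrane potential retained each step
INPUT_SCALE = 0.1    # random input drive per time step

potentials = [0.0] * NUM_NEURONS

for step in range(100):
    spikes = 0
    for i in range(NUM_NEURONS):
        potentials[i] = potentials[i] * LEAK + random.random() * INPUT_SCALE
        if potentials[i] >= THRESHOLD:
            spikes += 1
            potentials[i] = 0.0  # reset after firing
    if step % 20 == 0:
        print(f"step {step}: {spikes} neurons fired")
```

Scaling that loop up to billions of units is the hardware achievement; knowing what wiring and input would make it do anything mind-like is the software problem the post above is pointing at.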
Or maybe instead we will have changed the language so that spelling is simple (consistent with pronunciation), like most languages other than English?
But that's not a technology problem; it's one of human inertia. Those are probably much harder to change. (Look at the slow process of getting the USA to use the metric system.)
I'm one of the pessimists. I believe that the rate of technological progress we've seen in the last 200 years or so was made possible by a set of favorable circumstances we're not likely to see again: low population, untapped mineral resources, cheap energy. Society has had the wealth to afford the luxuries of research and development and frivolous consumer goods. In the next 40 years, energy will get much more expensive. Food will be more expensive to produce and harder to transport while the growing population increases demand for it. The standard of living will drop. People will be struggling to get by and won't be able to afford advanced luxury goods. Governments will no longer be able to support much science and technology research, and neither will industry.
But assuming we avoid a big crash before 2040, here’s what we’ll see…
Smaller cars, more scooters and bicycles, more electric and biofuel vehicles, but no hydrogen fuel-cell vehicles. The air will be dirtier because we won't be able to afford catalytic converters or to enforce emission standards.
Diminished air travel, no longer practical for the average person. A return of rail travel, but slow - not bullet trains.
No space program except for some satellites and maybe an occasional planet probe from China.
We'll have faster computers, but they will get more expensive and fewer people will be able to afford them. Software will be somewhat better, but nothing close to AI.
We'll start building nuclear power plants again. There will be more solar and wind power generated, but it still won't be able to give us the cheap electricity we enjoy now.
Medicine will be more basic. We’ll still know how to use high-tech treatments but few will be able to afford them. People will die younger.
Bio-engineering is a big “if”. Biology research is relatively cheap so it may continue, and the potential of engineering organisms to make our lives better is great. But right now we don’t know how to “spot-weld” DNA in living organisms and we don’t understand the gene-to-trait process well enough to be able to design organisms. There are no laws-of-physics barriers to bio-engineering, but the problems may turn out to be extremely difficult to solve. So I don’t know.
Yeah, but in those one hundred thousand years humans had to start from nothing. The robot will be starting where we left off, and we just created the robot. It'd be like if God created us and then left us the blueprints and the materials he used and said, "go on, have a go at it yourselves."
Except that population growth is positively correlated with technological innovation, so the lower the population, the less technological advancement.
Mineral resources are becoming more abundant, not less.
Energy is becoming cheaper, not more expensive.
It’s quite impossible to imagine how.
Food is cheaper, more abundant and more available to more people than ever before in history.
The population will cease growing within 50 years. So all these things that are currently improving, which need to get worse for your predictions to come true, are going to have to start worsening very, very fast.
Compared to what?
As soon as technology became anywhere near as advanced as “the singularity”, some wackjob would use it to destroy mankind (or at least blast us way back down the technological ladder).
That’s a given, and I can’t see how anybody can disagree with me, human (and religious) nature being what it is.
Ergo, the singularity will never happen, and we’d better hope that it doesn’t get too close within our lifetimes.
My, aren’t you the optimist.