Microchip clock speeds, data storage and retrieval speeds and volume, and mathematical (algorithmic) sophistication.
The first two can’t be counted on to continue. But the third…wow! Some fascinating discoveries have been made in abstract knowledge. People have come up with remarkably good approximate solutions to the “traveling salesman” and “packing boxes” problems.
(There is commercial software available for event organizing – how to fit 90 people into 24 seminars on 3 different days, at x, y, and z hours – with no overlaps. This is a variant on the “packing boxes” problem, and, while no efficient exact algorithm is known and brute-forcing general cases would take millions of years of computation, these programs do a sterling job of “roughing out” a solution. This would have been impossible even 15 years ago.)
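For a taste of how those “roughing out” heuristics work, here’s a minimal sketch of first-fit decreasing, a classic approximation for packing problems. This is my own toy illustration, not anything from a commercial scheduling package; it won’t always find the optimum, but it’s fast and provably uses at most about 22% more bins than the best possible packing (plus a small constant).

```python
def first_fit_decreasing(item_sizes, bin_capacity):
    """Approximate bin packing: place each item (largest first) into the
    first bin with room left. Fast and usually close to optimal, though
    not guaranteed to be."""
    bins = []  # each bin is a list of item sizes
    for size in sorted(item_sizes, reverse=True):
        for b in bins:
            if sum(b) + size <= bin_capacity:
                b.append(size)
                break
        else:
            bins.append([size])  # nothing fits; open a new bin
    return bins

# Toy example: pack items into bins of capacity 10.
print(first_fit_decreasing([7, 5, 4, 4, 3, 2, 2], 10))
```

The real event-scheduling programs layer a lot of constraint handling on top, but the basic move – sort, greedily slot things in, then patch up conflicts – is the same spirit.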
The “singularity” isn’t a singularity at all, but a horizon, and we are constantly approaching it, and always the same distance away. The world of today is beyond the horizon of people from 30 years ago. I can’t tell you what life will be like three decades hence, but I can tell you that once we get there, it’ll all seem perfectly normal.
Perhaps the OP has in mind smart machines building smart machines which build smart machines which build smart machines designing fusion power plants leading to smart machines building fusion power plants powering smart machines building smart machines which design 3D printers which enable 3D printers building smart machines which can build an array of 3D printers providing anything that smart machines design including food, shelter, clothing and transportation which might be called cars but are no longer privately owned: you just call them on your web-enabled, smart-machine-designed-and-manufactured digital device, subsidized by ads written by smart machines and full of content written and designed to the exact pixel by smart machines.
I’ll disagree. By this reasoning, progress is impossible, as no one can ever actually reach “the horizon.” You’re defining specific technological change as “rainbow-like.” You can’t get there.
But… We can get “there.” It was decreed, with great certainty, that machines could never play chess. Then that they could never play well enough to beat a good player. Then that they couldn’t beat a master. Or a grand-master. Or the world champion.
The horizon theory could work for the Singularity. The rate of change changes, no one notices the rate of change changing as it changes because we’re like frogs being boiled, the rate of the change gradually increases, but not enough that we notice it, but in thirty years’ time, things are HAPPENING baby … but by then we’re used to that. We take the rate of change for granted, cause we’re frogs, baby. We think things may get REALLY hot in thirty years … meanwhile, the bubbles are popping furiously all around us.
As I understand it, what drives the acceleration is previous achievements. We stand on the shoulders of giants, etc. The printing press begets widespread literacy, insurance makes commerce less risky and more common, clocks allow mariners to position themselves without being in sight of land, telegraphs, telephones, TV, the Internet, virtual reality, who knows what else? Can anyone deny that the Internet is qualitatively different from other forms of communication? Shit, we’ll all be rich if we ever figure out how to make money from it without being a megacorporation.
China has enjoyed real GDP growth topping 7% for the past 25 years or so (WAG). I’m honestly not sure how things feel to a peasant-born Chinese factory engineer.
The US has been at the technological cutting edge so our smoothed potential growth rates have always been capped somewhere in the 1-4% range. It’s plausible that something like a sustained 12% growth burst due to singularity could feel very different.
Oh sure, they are going to try to do anything they can.
Right, I think we will see lots of incremental changes like that over the next 30 years.
The plasticity of online “technology” makes me nervous. I first used Facebook in 2008; now it’s a major cultural force. Not only that, but it’s technology that you have to learn to use, like a computer program. It could also be challenged and go away overnight, and we’d have to learn another system. I don’t use Pinterest; I’d have to learn the ins and outs if I wanted to. I’m absolutely certain that we are going to see wave after wave of changes online, since change can happen so easily. This is pretty much faux innovation at the technology level, but it can bring about large social and personal changes. It’s technology that can jerk us around pretty easily. Case in point: businesses put all that work into building up Facebook “Likes,” and then Facebook changed its algorithms, making all that work pretty worthless.
The trouble is that the system, granted sufficient power, would only need to think, “Kill all humans!” for a short period to, well, kill all humans.
Pace fun-but-dumb movies like The Terminator or Elysium, humans could never defeat a strong AI that was able to manufacture weapons. It wouldn’t be hulking robots that came after us; it would be fly-sized drones exploding near our temples and killing us before we even noticed a threat (scarily, this technology already basically exists). Or nanobots shutting down the brain. It could be clean and painless and instantly over.
We just don’t know how a sufficiently intelligent system would act. That would be out of our control, and attempts to make it “nice” might or might not work.
I’m inclined to believe that strong AI is not possible using digital technology. I don’t think it’s possible to write strong AI software. I once read that Microsoft Word has 1,000 man-years of code in it, and it’s just a mediocre word processor. Putting together a major video game these days costs as much as a major motion picture. I don’t mean to imply that man-years and money are the only barriers to achieving the goal, but those barriers are very big on their own. We simply don’t have a clue how to do it.
My guess is that to do it, one would have to create artificial life, and I think it would be tough to get that life as smart as a human.
I think the progress AI has made is both overrated and underrated. Combining great chess algorithms with opening libraries, etc., and fast processors is quite an achievement. But it’s not strong AI and bears no resemblance or relation to it. OTOH, a dumb pocket calculator from the 70s was already a massively better calculating entity than any human, because humans suck at crunching numbers. I was amazed when I saw things like search engines and online map software in the 90s; I didn’t think such a thing was even possible at the time. All that stuff is really amazing. It’s still underrated in my book. But computers that can have an experience of meaning and think autonomously? We really haven’t even started yet.
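To make the “it’s not strong AI” point concrete: at bottom a chess engine is exhaustive search plus a hand-tuned scoring function. Here’s a toy, game-agnostic minimax sketch of my own (illustrative only; real engines add alpha-beta pruning, opening books, endgame tables, and far better evaluation):

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Toy minimax search: try every move to a fixed depth and score the
    leaf positions with a static evaluation function. No learning and no
    "understanding", just search plus a hand-written heuristic."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state), None
    best_score = float('-inf') if maximizing else float('inf')
    best_move = None
    for move in legal:
        score, _ = minimax(apply_move(state, move), depth - 1,
                           not maximizing, moves, apply_move, evaluate)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

# Trivial usage on a made-up game: players alternately add 1 or 2 to a
# running total; the "evaluation" just likes high totals.
print(minimax(0, 3, True,
              moves=lambda s: [1, 2] if s < 10 else [],
              apply_move=lambda s, m: s + m,
              evaluate=lambda s: s))
```

Swap in a chess move generator and a material-counting evaluator and you have the skeleton of the Deep Blue approach; the “intelligence” lives in brute speed and the humans who tuned the evaluation.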
Still not seeing the need for a strong AI here. If we can come up with hardware and/or software that sufficiently augments the human mind – which the Internet, search engines, and news agents arguably are – we might create people who are, in essence, superhuman in terms of their ability to conduct research. We might even figure out a way to make software that could help humans solve problems. This could drive a Singularity or something like it … thousands of artificially enhanced Da Vinci-level scientists at work every day might just make the hockey sticks happen.
In 1968 I wrote a story for my high school science magazine that was basically about the singularity - I predicted Amazon also. But I hardly invented it. Anyone seeing the early part of any technology curve - before it flattens off into an S curve - is predicting a singularity.
It is a general scheduling problem, and I published papers on doing this for a very specific application 34 years ago. There are exact solutions, but it is an NP-hard problem, and our research showed that you can often get optimal solutions with linear-time algorithms. Not always, of course. Pretty standard stuff by now.
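The specific algorithms from those papers aside, classic interval scheduling gives the flavor of how a simple greedy pass can land on a provably optimal answer in roughly linear time (after an O(n log n) sort):

```python
def max_nonoverlapping(events):
    """Greedy interval scheduling: sort by finish time, then keep every
    event that starts after the last chosen one ends. Provably maximizes
    the number of non-overlapping events."""
    chosen, last_end = [], float('-inf')
    for start, end in sorted(events, key=lambda e: e[1]):
        if start >= last_end:
            chosen.append((start, end))
            last_end = end
    return chosen

# (start, end) times for candidate seminar slots
print(max_nonoverlapping([(9, 11), (10, 12), (11, 13), (12, 14)]))
```

Real scheduling adds rooms, attendees, and preference constraints on top, which is where it stops being this tidy.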
People haven’t drawn schematics in decades, except for the library cells that get used as the building blocks of a technology. Logic synthesis tools map a high-level design language into hardware. In my specialty I need to understand netlists, but even a decade ago I met good designers who had never been exposed to them and didn’t understand the way you traced signals in one.
Thirty years or so ago, you could print the netlist for an entire board without using too much paper. Today I doubt we have enough paper in the building to print the netlist of one of our processors.
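For anyone who hasn’t seen one: a netlist is essentially just a list of cell instances and the nets wiring their pins together. Here’s a toy representation of my own (not EDIF, not structural Verilog, just an illustration), along with the kind of signal tracing you’d once have done by hand on that printout:

```python
# Toy netlist: each cell instance maps pin names to net names.
netlist = {
    "U1": {"type": "NAND2", "pins": {"A": "in1", "B": "in2", "Y": "n1"}},
    "U2": {"type": "INV",   "pins": {"A": "n1",  "Y": "out"}},
}

def trace_net(netlist, net):
    """List every (cell, pin) attached to a net: the manual step of
    "tracing a signal" that newer designers rarely do anymore."""
    return [(cell, pin)
            for cell, data in netlist.items()
            for pin, n in data["pins"].items() if n == net]

print(trace_net(netlist, "n1"))  # [('U1', 'Y'), ('U2', 'A')]
```

Multiply that by a few billion transistors and it’s clear why nobody traces a modern processor on paper.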
I’ll believe we have hit a singularity in chip design when you could give a system the requirements and constraints for a new chip and it goes and designs a good one. We’re not even close to being there yet.
I’ve been in the semiconductor industry about 35 years, and I beg to differ. Processor design cycles haven’t changed a whole lot in the last 20 years or so. That is totally due to the explosion in processing power that lets you launch thousands of simulation jobs at once and lets you generate test cases that no human would think of or have time to generate. Getting wafers through a fab at the most advanced technology node does take longer than at older nodes, though.
Smaller chips go faster. When I worked in ASICs you expected a respin – now they usually work the first time, thanks to more simulation power. A friend of mine started work for a major telecom chip maker, and his description of their process reminds me of the way new watches get introduced by Seiko – you hit your deadline or you don’t have a product.
And new chips evolve from old chips - look at any roadmap - so I think these evolutionary steps are vital. Totally new technologies are expensive and risky - look at Itanium.
The slowing down of Moore’s Law is due more to economics than technology - new fabs are so expensive that it does not make sense to build them every year and a half or so.
Since we got Galaxy S4’s with decent processing power, my wife has been dictating texts by voice, because it is faster and more accurate than typing. And yes, it gets some really obscure stuff right.
It does appear to learn. I text my wife before I leave work, using a few standard phrases, and even when I type, it has learned what the next word is, so that all I have to do is click on two or three suggestions in a row to finish my text.
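That “learned what the next word is” behavior doesn’t require anything exotic. Even a crude bigram count over your own message history gets surprisingly far; here’s a toy sketch (real keyboards use far more sophisticated language models, but the flavor is the same):

```python
from collections import Counter, defaultdict

# Count which word tends to follow which word in past messages.
history = ["leaving work now", "leaving work in ten minutes", "leaving now"]
following = defaultdict(Counter)
for message in history:
    words = message.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

def suggest(word):
    """Suggest the most frequent follower of `word` seen so far."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(suggest("leaving"))  # 'work', the most common continuation so far
```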
Even when I took AI in the early '70s, it was realized that there had to be a lot of semantic knowledge to make voice recognition work. As far as I can tell, modern systems have this working pretty well.
I basically agree with what you’re saying. My quibble is that the technology you describe sounds like a strong AI equivalent. Or, put another way, if these scientists are making the hockey stick happen, then at that point presumably they could invent strong AI or prove that strong AI is impossible.
Not sure we really disagree. At least, I didn’t disagree with what you wrote (and I easily concede you know a lot more about this stuff than I do). I just said it was harder to design a new chip, not that it was necessarily slower or that it wasn’t getting done. My ultimate point was that strong AI would speed up the design cycle near-infinitely (per Kurzweil and other boosters), whereas now technology has not allowed us to speed up the cycle.
That really is a technological matter, however. Our technology is not, on a net basis, making it cheaper and easier to create new chips (yes, it makes it cheaper and easier than if that technology did not exist, but it does not net us a cheaper and easier design cycle, and thus no acceleration in the rate of change in that area).
I suppose it’s no surprise that Google and similar companies would be in the lead here. No one else has access to such a vast store of human knowledge and behaviors.
I don’t think you need strong AI; just a highly-evolved specialized AI, like Watson or chess programs.
A huge share of the time in chip design goes to really stupid, straightforward stuff that happens to take forever because the chips have billions of transistors. Layout is the process of arranging the chip subunits while maintaining timing and other constraints. Humans are really good at it – for small chips. But past a certain threshold, a human just can’t keep the whole thing in mind at one time, and you end up needing automation. But the automation kinda sucks, and it still takes a long time.
A specialized AI that was really good at just that task would be a huge boon. Likewise for routing and other tasks.
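Placement is a decent example of what that automation looks like under the hood. Today’s tools lean on heuristics in the simulated-annealing family; here’s a stripped-down sketch of the idea with a toy wirelength objective. It’s my own illustration, nowhere near a production placer:

```python
import math, random

def place(cells, nets, grid, steps=20000, temp=5.0, cooling=0.9995):
    """Toy simulated-annealing placer: start from a random placement, then
    repeatedly swap two cells, keeping swaps that shorten total wirelength
    (and occasionally worse ones, to escape local minima)."""
    slots = [(x, y) for x in range(grid) for y in range(grid)]
    random.shuffle(slots)
    pos = dict(zip(cells, slots))  # assumes len(cells) <= grid * grid

    def wirelength():
        # Half-perimeter bounding box per net, a standard rough estimate.
        total = 0
        for net in nets:
            xs = [pos[c][0] for c in net]
            ys = [pos[c][1] for c in net]
            total += (max(xs) - min(xs)) + (max(ys) - min(ys))
        return total

    cost = wirelength()
    for _ in range(steps):
        a, b = random.sample(cells, 2)
        pos[a], pos[b] = pos[b], pos[a]
        new_cost = wirelength()
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost                  # accept the swap
        else:
            pos[a], pos[b] = pos[b], pos[a]  # undo it
        temp *= cooling
    return pos, cost

cells = list("ABCDEF")
nets = [("A", "B"), ("B", "C", "D"), ("D", "E"), ("E", "F")]
print(place(cells, nets, grid=3))
```

A specialized AI that could do this step much better or faster than annealing-plus-heuristics would be exactly the kind of boon you’re describing.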
Sometimes I feel like we’re wasting a huge amount of potential just by working with transistors. We aren’t smart enough to build anything much more complicated than a transistor except by assembling smaller components. But why can’t we build a NAND gate, or even an entire adder, as a single “blob” of structured silicon? Well, our tools aren’t good enough right now… but maybe they could be one day.
Basically he’s saying that there will be super-intelligent AI and transhumanism, where people could upload themselves into super-fast computers. There would also be nanotechnology and machines that can produce just about anything (like in Star Trek). The AIs and transhuman intelligences would allow many discoveries to be made rapidly, and nanotechnology would allow these new discoveries to be rapidly turned into new technologies.
Relevant to the OP:
IBM is investing $1 billion in cognitive computing via Watson. Cite. Early last year they announced their first paying customer, so we are in the early stages.
Will this be a nothingburger? Will IBM recoup its investment by 2018, when one investment analyst claimed that Watson would make up 12% of its revenue? In 2044, will it be remembered like the Apple Mac is today, or will it be like the Apple Lisa or Microsoft Bob? Or will it be something bigger?