Yes and no. Not a simpler design cycle for vastly more complex designs, but a much simpler one for average designs. FPGAs let you do stuff that would once have taken ages and been very expensive about as easily as doing it in software.
But I definitely agree that a singularity would involve strong AIs doing chip design.
20 years ago we had people doing layout by hand. Now layout and routing are totally automated. (Except for cells, which kind of meet your higher-level-than-transistor requirement - and I’m not sure about them.) We don’t have enough people in our division to do layout and routing by hand. But we don’t need AIs - we’ve got plenty of much better algorithms.
Plenty of stuff in microprocessor design is totally automated - the generation of all kinds of tests is another example. That is why equally sized design teams can do designs 10x the size of the old ones.
8 years ago we were still doing some units partially by hand. These were small, highly replicated cores that ran at high speed and had stringent timing requirements. Of course there was still a lot of automation involved, but there was a reasonable amount of stuff done by hand. By chip area it was a large segment, though by any metric that takes uniqueness into account it was small.
To be honest, I’m not sure if there’s a good way to distinguish a specialized AI from a sufficiently advanced algorithm. The algorithms have gotten better, but in my limited observation they still seem to have blind spots that require some handholding. Maybe these will just be solved in due course, but it sure seems like it will be easier to escape local maxima if a more AI-like approach is taken.
Then again, it would not surprise me at all if the algorithms already use AI-style techniques. Pathfinding algorithms, for instance, are the bread and butter of video game AIs, and autorouters must have similar needs. And then there’s stuff like simulated annealing, genetic optimization, and neural nets, which sort of straddle the line between algorithm and AI.
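To give a flavor of that overlap, here’s a minimal sketch of A* pathfinding on a grid - the same basic search a game AI uses to steer a character around walls, and roughly what a maze router does to get a wire from one pin to another. Everything in it (the grid, the pin locations) is a made-up toy, not code from any real autorouter:

```python
import heapq
import itertools

def astar(grid, start, goal):
    """Toy A* on a 2D grid (0 = free, 1 = blocked) with a Manhattan-distance heuristic."""
    rows, cols = len(grid), len(grid[0])
    h = lambda rc: abs(rc[0] - goal[0]) + abs(rc[1] - goal[1])
    tie = itertools.count()        # tie-breaker so the heap never has to compare parents
    frontier = [(h(start), next(tie), 0, start, None)]
    best_g = {start: 0}            # cheapest known cost to reach each cell
    came_from = {}                 # cell -> parent, filled in when a cell is expanded
    while frontier:
        _, _, g, cell, parent = heapq.heappop(frontier)
        if cell in came_from:      # already expanded via a route at least as short
            continue
        came_from[cell] = parent
        if cell == goal:           # walk the parent chain back to recover the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), next(tie), ng, nxt, cell))
    return None                    # no route exists

# Route a "wire" from the top-left pin to the bottom-left pin, around a blockage:
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
# -> [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

The heuristic is what gives it the “AI” flavor: instead of blindly flooding outward the way the classic Lee maze router does, the search is pulled toward the goal by an estimate of the remaining distance.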
I know one branch of CAD pretty well. There have been lots of papers on GAs for this, but none of them come close to what the conventional algorithms can do. However, those algorithms all use search-space techniques that often come from AI work, so I suppose you can call them AI in a sense.
GAs and simulated annealing can find non-obvious solutions, but most of the stuff we are looking for is pretty obvious. Just as compilers have gotten better than people writing assembly language in most situations, I think tools have gotten better than hand layout in most situations too - but there will always be exceptions.
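As a toy illustration of that trade-off, here’s a simulated-annealing sketch for a made-up placement problem: shuffle six cells along a row to minimize total wire length. The netlist, temperature, and cooling schedule are all invented for the example; real placers are far more sophisticated than this:

```python
import math
import random

# Toy netlist: pairs of cells that are wired together (invented for illustration).
nets = [(0, 1), (1, 2), (2, 3), (0, 3), (3, 4), (4, 5)]

def wirelength(order):
    """Total wire length when each cell sits at the slot where it appears in `order`."""
    pos = {cell: slot for slot, cell in enumerate(order)}
    return sum(abs(pos[a] - pos[b]) for a, b in nets)

def anneal(order, temp=10.0, cooling=0.995, steps=20000):
    order = list(order)
    cur_len = wirelength(order)
    best, best_len = list(order), cur_len
    for _ in range(steps):
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]      # propose swapping two cells
        delta = wirelength(order) - cur_len
        # Always accept improvements; accept worse moves with probability
        # exp(-delta/temp), which is what lets the search escape local minima.
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            cur_len += delta
            if cur_len < best_len:
                best, best_len = list(order), cur_len
        else:
            order[i], order[j] = order[j], order[i]  # rejected: undo the swap
        temp *= cooling                              # cool down gradually
    return best, best_len

random.seed(1)
print(anneal(list(range(6))))   # (best ordering found, its total wire length)
```

The line that accepts a worse arrangement with probability exp(-delta/temp) is the whole trick: while the temperature is high the search wanders freely and can hop out of local minima, and as it cools it settles into pure refinement - the escaping-local-maxima behavior mentioned above.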
The horizon vs. singularity comparison is an interesting point.
Not unlike crossing the event horizon of a black hole, will we reach a point, technologically, that looks and feels totally normal as a matter of progression, yet is already too late to back out of as we approach some unfathomable singularity we must inevitably cross: a point where human cognition is inferior to whatever comes next, be it superhuman or AI?
I feel strong AI is possible, but I can’t come to any conviction about whether it would or could overwhelm us as a new “species”. However, if the evolutionary history of intelligence in dominant species is any indication, well, our closest recent relatives seem to have gone extinct.
The singularity and concepts like it are among my favorite things to ponder, and of course, depending on exactly what definition we go by, we can range from near certainty that we’ll achieve it (possibly even that we’re already past it) to near certainty that it is physically impossible. This sort of question assumes that we choose a definition we can either reasonably believe is possible or at least suspend disbelief about long enough to entertain. If we go by a definition akin to “a technological level beyond our understanding,” then we couldn’t even choose predictors, because we can’t comprehend what we’re trying to predict.
Either way, I think the technological progression presented by the OP is only predictive of a weak definition of the singularity: in short, that we will continue to see technological progression in predictable ways. Most people around for the advent of cell phones probably would have reasonably predicted that they’d continue to get smaller and more powerful. Given progression in terms of greater social media integration and higher connectivity, I don’t think the direction carved out by innovations like Google Glass really tells us a whole lot. It’s akin to noting that fuel efficiency is a major constraint on car design and that we’ve seen major innovations in that direction, and then predicting that we’ll continue to see more; that isn’t really all that surprising.
Instead, I’m interested in the sorts of innovations that we just plain don’t see coming. Look at the world we have today, with smartphones and the internet. Our way of life would be nearly unimaginable if it were described to a person from 50 years ago. Hell, in several important ways our technology is more advanced than even more recent science fiction set farther in the future, and in other ways we’re much farther behind than we might have hoped. My example for this is sort of a retrospective: trying to imagine and remember our lives the way they were before these technologies became commonplace.
For instance, I watched a video just over the weekend where a bunch of kids ranging from about 4 to 13 were given a Walkman and asked what it was; most couldn’t figure it out. After being told what it was and given more and more help, some of them figured out how to insert a cassette and get it to play. Many expressed contempt for how horrible it must have been to live in the 90s without digital music in its various forms. And all of this is with fairly mundane and straightforward progressions in technology. I wonder how kids today, used to being able to reach anyone instantly, could comprehend trying to do something as simple as meeting up with friends without texting, cell phones, social media, or whatever.
Look back even farther at the barriers to communication decades or centuries ago, when news could take weeks to travel. Sure, we can reasonably predict that there will be advances in communications technology, as there have been, but I think the whole concept of the internet and social media is as far beyond the comprehension of previous generations as whatever the next big innovation will be for the next one. Sure, maybe we can imagine a concept like collaborative/distributed thinking, with our brains directly connected to technology and communicating at the speed of thought rather than what we have now, but there simply isn’t a way to predict how that might affect our society. Could it cause greater fragmentation, or would it lead to greater commonality? How would trends and memes work? Maybe those very concepts would essentially disappear because they would become so fleeting.
I also disagree with the assertion upthread that the computer user experience has somehow changed less significantly over the last 10 years than in any of the previous few decades. It may not seem as significant if one isn’t fully participating in all of the innovations, but they’re definitely there. Social media has utterly changed life for people today in ways that were virtually unimaginable a decade ago. Yes, the internet helped connect people around the world in the late 90s, but is the internet of that day meaningfully comparable to the internet of today? Facebook, Twitter, YouTube, smartphones. I don’t participate in a lot of the social media stuff, but how is this not yet another massive change in human-computer interaction? Hell, if nothing else, even on the early internet news spread fast, but it was still compiled and distributed by others. These days, so much more of it is done by ordinary people using technology that just didn’t exist then.
If we’ve done our jobs right – created the AI with actual wisdom as well as intelligence – then it will, at worst, keep us in zoos or as pets.
We work hard to prevent the extinction of most animals (smallpox virus notwithstanding). I figure the AI would have the same motivation.