Have we finally hit a wall as a species?

“Hitting the wall as a species”… really? The examples are more about the economy/market dictating improvement behind the scenes, or reverting to incremental change at the consumer-facing level, for certain aspects that quite simply are that difficult and/or where the ROI on radical leaps is too marginal in the short and mid term. If anything, the speed of *consumer-facing* and/or high-media-visibility tech development in the period around the World Wars and Cold War was the anomaly, and the expectation that it should continue at a wild pace was never justified.

If they can't get a cancer cure, improve the chemo/radiation crap, no way to live a life!

Some people.

And as for the thread itself and participants, you’re all too far over my head for me to be sarcastic about any of it! :smack:

How do you feel about getting beat up by a 7-year-old?

Isn’t most of that along the lines of finally being able to take advantage of AI concepts pioneered decades ago, though? I mean, I’ve always thought that a lot of this deep learning/machine learning stuff was worked out a LONG time ago, but just never had the computing power, networking infrastructure, or quantity of data available to make it a viable reality. That’s not to say it won’t be revolutionary for some things, though.

Nobody’s cracked the general AI nut yet though; that would be the BIG advancement in AI.

I suspect that whatever the next major change is, it’s going to come from some direction we’re not really thinking about now.

Incremental changes in existing tech are easy to see – which of them will work is less so, of course, but ‘AI will get better at conversations with humans’ and so on is just a bit of extrapolation from where we are now.

Significant changes in how people’s lives work are another matter. People in the early 1900s were unlikely to predict the changes in daily life brought about by the common prevalence of cars, although the cars were already there. People in the mid 1900s were unlikely to predict the changes in daily life brought about by the common prevalence of televisions and air conditioning, although televisions and air conditioning already existed. People in the late 1900s were unlikely to predict the changes in daily life brought about by the common prevalence of internet and cell phone use, even though some form of the technologies already existed.

The computing power to support accessible user interfaces has also made new tech usable by nearly everyone. My parents, in the 80s and 90s, quite literally could not program a VCR or a clock radio, or use a computer or email, because it was ‘too complicated’. They are much older now and sure haven’t got any smarter :wink: but they are on social media from their own computer, own smartphones, book Uber rides, book vacations on Travelocity, do taxes online, etc., all due to the user-interface revolution.

There was a big breakthrough around 2006 by Hinton and his team that triggered the recent explosion in neural network effectiveness: they developed an algorithm (greedy, layer-by-layer pretraining of deep belief networks) that could actually train deep networks.
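For the curious, here’s a minimal sketch of that layer-by-layer idea in Python. Hinton’s team actually stacked restricted Boltzmann machines; this toy version swaps in simple tied-weight autoencoders just to keep the code short, and every size, epoch count, and learning rate here is a made-up illustrative value:

```python
# Toy sketch of greedy layer-wise pretraining, the idea behind the 2006
# breakthrough. The real work stacked restricted Boltzmann machines; this
# sketch uses simple tied-weight autoencoders instead, purely to keep it
# short. All sizes and hyperparameters are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_layer(data, n_hidden, lr=0.1, epochs=50):
    """Train one layer as a tied-weight autoencoder: encode, decode with
    the same weights, and nudge W to reduce reconstruction error."""
    n_visible = data.shape[1]
    W = rng.normal(0, 0.1, size=(n_visible, n_hidden))
    for _ in range(epochs):
        h = sigmoid(data @ W)          # encode
        recon = sigmoid(h @ W.T)       # decode with the tied weights
        err = recon - data             # reconstruction error
        # Crude gradient step (ignoring some sigmoid-derivative terms
        # for brevity -- this is a sketch, not a faithful derivation)
        W -= lr * (data.T @ (err @ W) + (err.T @ h)) / len(data)
    return W

# Pretrain a 3-layer stack, one layer at a time, each layer learning to
# reconstruct the previous layer's output. The resulting weights would
# then initialize a deep net for ordinary backprop fine-tuning.
X = rng.random((200, 64))              # fake data: 200 samples, 64 features
weights, inp = [], X
for n_hidden in [32, 16, 8]:
    W = pretrain_layer(inp, n_hidden)
    weights.append(W)
    inp = sigmoid(inp @ W)             # feed encodings to the next layer
print([w.shape for w in weights])      # [(64, 32), (32, 16), (16, 8)]
```

The trick is that each layer gets a sensible local training signal on its own, so by the time you run backprop over the whole stack, the weights are already in a good region instead of starting from random.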

Prior to that, deep networks (many layers) couldn’t be trained with backpropagation because the error signal gets repeatedly rescaled as it travels back through the layers, so it either shrinks toward zero or blows up (the vanishing/exploding gradient problem). It basically made these networks seem impossible to train (the problem had been worked on since the early ’80s). While there were impressive results with evolved deep networks on the image recognition task (the top spot was a six-layer evolved network), without a training algorithm it wasn’t considered practical or efficient.
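To make that concrete, here’s a quick toy calculation of why the error signal dies out. It just multiplies a gradient vector by a small random weight matrix and a sigmoid-slope factor at each of 20 layers; the depth, width, and weight scale are all arbitrary numbers picked for illustration:

```python
# Toy illustration of the vanishing-gradient problem that stalled deep
# networks before 2006. In backprop, the error signal is multiplied by a
# weight matrix (and an activation derivative) at every layer on the way
# back; if those factors are typically < 1 the signal shrinks
# geometrically, and if > 1 it blows up. All numbers here are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
depth, width = 20, 50
grad = rng.normal(size=width)                     # error signal at the output

for layer in range(depth):
    W = rng.normal(0, 0.05, size=(width, width))  # small random weights
    sigmoid_slope = 0.25                          # max derivative of sigmoid
    grad = sigmoid_slope * (W @ grad)             # one backprop step
    if layer % 5 == 4:
        print(f"after {layer + 1} layers: |grad| = {np.linalg.norm(grad):.3e}")
# The norm collapses toward zero, so the early layers get essentially no
# learning signal -- which is why adding layers made nets untrainable.
```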