The people (Kurzweil, Kaku, etc.) saying that AGI is gonna happen this year or in 2029: where are they getting their optimism from? My yardstick is that we have AGI when the average entry-level worker (me) doesn’t need to do their job anymore.
I just find the predictions of life being completely altered in such a short time too fantastic to believe (and I want to believe it).
Are the people making these bold predictions wrong because they don’t have any real world experience in shitty dead end jobs, or am I missing something here?
I only watched around 30 seconds of the video because I found his listed criteria for AGI to be a pile of crap, and I could tell he was going to be a blowhard. A more standard set of criteria comes from the Wikipedia entry:
reason, use strategy, solve puzzles, and make judgments under uncertainty
AGI will probably come along, but not necessarily especially soon. The hardware just isn’t there yet. The most advanced narrow AIs, today’s big LLMs and image generators, need several hundred gigabytes of fast video RAM to run, which currently means eight or more $40,000+ GPUs working together to provide that. A competent AGI would probably need a multiple of that: terabytes of VRAM.
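To get a feel for where those numbers come from, here’s a back-of-envelope sketch in Python. The parameter counts and the overhead fraction are illustrative assumptions, not measurements of any particular model:

```python
# Rough VRAM estimate for serving a transformer LLM.
# Rule of thumb: the weights alone need (parameters x bytes per parameter);
# inference overhead (KV cache, activations) adds very roughly 30% more.

def vram_gb(params_billions: float, bytes_per_param: int = 2,
            overhead: float = 0.3) -> float:
    """Estimate VRAM in GB for a model of the given size.

    bytes_per_param: 2 for fp16/bf16 weights, 1 for 8-bit quantized.
    overhead: assumed fraction added for KV cache and activations.
    """
    weights_gb = params_billions * 1e9 * bytes_per_param / 1e9
    return weights_gb * (1 + overhead)

for size in (7, 70, 175, 1000):  # hypothetical model sizes, in billions
    print(f"{size}B params -> ~{vram_gb(size):.0f} GB VRAM")
# 175B at fp16 comes out to ~455 GB, i.e. six 80 GB GPUs,
# and a 1-trillion-parameter model lands in the terabytes.
```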
Whether AI is overhyped or not depends on how it’s being presented and what expectations are set.
Reasons it might be considered overhyped:
Exaggerated Expectations: Some media and companies promise AI breakthroughs that are far beyond what current technology can deliver. This leads to unrealistic expectations about what AI can do now versus what it might achieve in the future.
Misunderstanding of AI Capabilities: People often conflate narrow AI (which performs specific tasks) with general AI (which could perform any intellectual task a human can). This confusion can lead to overestimation of AI’s current abilities.
Underestimation of Challenges: AI development faces significant hurdles, like data bias, ethical concerns, and the need for massive computational resources. These challenges are sometimes downplayed in the excitement about AI’s potential.
Reasons it might not be overhyped:
Transformative Potential: AI has already shown it can revolutionize industries like healthcare, finance, and logistics. Its potential to automate routine tasks, analyze vast amounts of data, and assist in decision-making is significant.
Rapid Progress: AI research and development have accelerated, leading to advancements that were once thought to be decades away. The continuous improvement in AI models, like large language models and AI-driven tools, suggests that the hype is grounded in real progress.
Broader Impact: AI is not just a technological trend; it has implications for society, economy, and ethics. The discussions around AI’s impact, whether positive or negative, reflect its importance in shaping the future.
In summary, while AI might be overhyped in some areas, its potential and the progress being made are substantial, justifying much of the attention it receives. The key is balancing optimism with realistic expectations.
Gartner tracks the hype around technology (as opposed to technology maturity). Here’s their 2023 AI chart.
For those unfamiliar with the Hype Cycle, here are the definitions of the phases (from Gartner):
The innovation trigger starts when an event, like a technological breakthrough or a product launch, gets people talking.
The peak of inflated expectations is when product usage increases, but there’s still more hype than proof that the innovation can deliver what you need.
The trough of disillusionment happens when the original excitement wears off and early adopters report performance issues and low ROI.
The slope of enlightenment occurs when early adopters see initial benefits and others start to understand how to adapt the innovation to their organizations.
The plateau of productivity marks the point at which more users see real-world benefits and the innovation goes mainstream.
Not quite. It just means an AI that can do general things. Instead of having an AI that recognizes faces, an AI that writes music, and an AI that plays chess, you have one AI that can do all three. But there is nothing in the definition that requires it to be as good as or better than humans at it.
To me, that chart uncannily resembles the ‘Uncanny Valley’ chart, wherein, as robots or other things become more anthropomorphic, we have an increasingly positive response to them, until they are almost, but not quite, human-like, at which point they become deeply creepy. The ‘valley’ of the ‘Uncanny Valley’ is similar to the ‘Trough of Disillusionment’ on the AI hype cycle chart.
I’ve seen humans that write similar meaningless BS, lots of words saying nothing, usually in HS papers, so I guess ChatGPT is “as good as or better” than actual humans. But not any more useful.
I’m impressed with AI-enhanced photo rescaling and correction. I’ve used Tensorpix and Artguru. There are several others that are similar.
Artguru lets you try photo enhancement with no sign-up. It’s very effective at cleaning up blur and grain. A lot of people took photos 25 years ago with early digital cameras, and those shots are low-resolution compared to today’s digital cameras. AI allows upscaling: it fills in the missing pixels.
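For contrast, classical (non-AI) upscaling just interpolates between the pixels that are already there. Here’s a minimal sketch with Pillow (the filenames and 4x factor are placeholders); an AI upscaler goes further by predicting plausible new detail learned from huge photo collections:

```python
# Classical (non-AI) upscaling: interpolation can only smear existing
# pixels across a larger grid; it cannot invent new detail.
from PIL import Image

img = Image.open("old_photo.jpg")  # e.g. a 640x480 early-digicam shot
w, h = img.size
upscaled = img.resize((w * 4, h * 4), Image.LANCZOS)  # 4x, Lanczos filter
upscaled.save("old_photo_4x.jpg")
# An AI super-resolution model (e.g. an ESRGAN-style network) instead
# predicts the missing high-frequency detail from patterns it learned
# on millions of photos, which is why it can sharpen rather than blur.
```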
The main downsides are cost and privacy. Photos have to be uploaded, and the AI learns by processing the variety of images it receives. You have to buy credits to process a collection of images, which can get expensive.
It does have limitations. Nothing can completely fix a crappy image. But I’ve been surprised how much better AI is at filtering and sharpening compared to my clumsy efforts in Photoshop.
I’m excited to see the improvements as the AI learns from processing thousands of people’s photos.
The thing is that we constantly move the goalposts of what we’d expect an AI to do for it to impress us. Decades ago, games such as chess and Go were considered the pinnacle of human genius. Then the machines started to beat us at those games. So we switched to creativity, such as the ability to write sonnets, as the prime example of what we thought would forever be outside the abilities of machines. Now the machines can do this too, so we’re moving the goalposts again. The fact is, current large language models and generative AI have abilities far beyond what people would have thought within our reach a mere, say, five years ago. There’s a lot of hype about the progress in AI, but it’s not unjustified hype.
AI is overhyped IMHO the same way the internet was overhyped in 1995. That is to say it is useful and transformative technology, but it’s not going to transform the world in the way people think it will as fast as people think it can.
Business tends to chase the next shiny thing and right now people think it’s AI. Back then it was web. Every company had some vague idea that they needed email, an online web presence, and perhaps some sort of online storefront. There was all this hype around how everything “.com” would “disrupt” traditional businesses and a lot of money was thrown at it. They were right, but a decade or so too early and it all crashed in 2000. It really took another decade or so before people understood the true capabilities (and problems) of having everything everywhere digital and were able to make viable businesses out of it.
Like, what are the actual AI products a company will buy and use? What does an AI project actually look like? When a product is labeled “AI enabled,” what does that actually mean? (Like, who looks at a toothbrush and says “that needs to be more intelligent”?)
I worked through similar cycles around mobile, social media, predictive analytics, digital, and big data. Yet currently I am consulting for a $100 billion bank whose entire global data compliance program is run off of a giant Excel spreadsheet and PowerPoint management reports stored on a SharePoint site. If we were to implement some sort of AI, the first thing they would probably use it for is cleaning up PowerPoint decks.
I’ve only played around with free AI like ChatGPT, but to me it seems like AI is essentially a fancy search engine. I never felt I was dealing with intelligence, even for a second.
Depending on how you play, you may get different perceptions. There have been some interesting experiments with LLMs, getting them to show their workings (‘think out loud’, as it were); those definitely displayed attributes that could be described as ‘intelligence’. The problem is that a lot of people are conflating various terms such as intelligence, sentience, awareness, sapience, etc., and because they don’t see one, they dismiss all.
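If you want to try that kind of ‘show your working’ prompting yourself, here’s a minimal sketch, assuming the OpenAI Python client and an illustrative model name:

```python
# Minimal "think out loud" (chain-of-thought) prompting sketch.
# Assumes: pip install openai, OPENAI_API_KEY set in the environment,
# and that the named model is available to your account.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = ("A bat and a ball cost $1.10 total. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        # Asking the model to show its reasoning step by step tends to
        # produce more reliable answers than demanding a bare answer.
        {"role": "user",
         "content": question + " Think step by step before answering."},
    ],
)
print(response.choices[0].message.content)
```

Compare the output with and without the “think step by step” suffix; the difference in how the model handles the classic bat-and-ball trap is a good small-scale version of the experiments described above.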