Artificial intelligence (singularity) within a few months?

I have read on other social media forums that people are assuming AI will be a thing in the next few months.

Personally, I think the world is going mad over ChatGPT - it is a wondrous thing - but it is a long way from AI.

To summarise the debates I have seen: half are VERY afraid because they will lose their jobs by the end of the year, and the other half say don’t be afraid, AI will happen in a few months but it will be good.

Either way, they all assume that it will happen very soon.

And to be clear, they mean the singularity (AI that is as intelligent as a human).

The few who say it won’t happen until 2027 are being mocked for being dinosaurs.

So, am I being worse than a dinosaur for thinking it is still decades away?

The people saying true human-equivalent AI is imminent are falling for what amounts to a magic trick, one that appears real to credulous, uninitiated observers.

Progress is being made. As a society we will have to contend with increasingly useful artificial something-akin-to-intelligence. Which might eventually become so useful to capitalists that labor becomes much less economically valuable than it was. But not any time soon.

Last of all, in futurist AI circles, “the singularity” does not refer to the first creation of a human-equivalent intelligence. It refers to the creation of a greatly super-human intelligence that then rapidly and exponentially improves itself to levels far beyond human comprehension or control. Making an actual human-equivalent intelligence is a necessary step along that road, and an obvious signpost once we get there. But that’s a long way short of “the singularity”.

There is zero chance of that happening via ChatGPT. None. Zilch. Nada. ChatGPT isn’t even on track to be a general AI, let alone a strong general AI. It isn’t going to happen by 2027 either, unless there is some completely unexpected breakthrough. Currently, we do not even know if general AI is possible, and there isn’t even a highly probable hypothetical path to general AI (I’m betting on my approach, but I’m obviously biased). Let alone strong AI.

Just to clarify, AI already exists. I believe you mean that self-aware, conscious AI may happen in a few months.

The Singularity is about 30 years away, just like it’s always been for all of recorded human history. If you’re older than that, you’ve already lived through it once. If you’re old enough, you might even have lived through it twice or even three times.

This is why old people so often feel out of touch with the world: Because they’re on the other side of one or more Singularities.

To be fair, ChatGPT 3.5 was itself a completely unexpected breakthrough.

And I, for one, welcome our new AI overlords.

Not really, it was the result of steady incremental progress. People were going pretty gaga over GPT-3 too. LLMs have been around for almost two decades, with research steadily and incrementally improving them, and language models based on neural networks predate that work. If you mean that nobody expected it to work as well as it has, sure, but that’s not the kind of breakthrough I’m referring to. Somebody would have to crack a method for actual computational intelligence, which nobody (except futurists) expects is just around the corner. It would be very surprising to the AI community for a technology to suddenly appear that could do this.

The singularity is when people can’t adapt to the change as quickly as it happens. Which means that everyone has their own personal singularity. Some in their 90s haven’t reached it yet, and some in their 20s have already been surpassed by it.

As for civilization, I think that the singularity happens not when AI becomes super intelligent, or even human intelligence, but when it can learn faster than a human, which doesn’t take sentience or great intelligence, just fast computers and efficient training models.

Some like to say that because we have historically grown the number of jobs available, we will always do so, and that any jobs displaced will be replaced by new jobs that didn’t exist before. However, if an AI learns as fast as or faster than a human, that doesn’t work. Even if it doesn’t learn quite as fast as a human, it’s still competitive, since you only need to train an AI once, while you have to train each individual human.

You find your job obsolete, so you go to train for a new job, but before you are done training, the AI has taken that one, too. Or, best case, you spend a few months or maybe even years before you are replaced with metal and silicon. Rinse and repeat.

As for actual self awareness, I don’t think that we are there yet, but I think we are closer than some think, and the capabilities are increasing exponentially, which means even distant goals are sooner than you’d think.

I do think we are going to see some severe disruptions in the economy as most white-collar jobs are replaced, leaving only professionals who can rely on legislation to protect their jobs, and trades that require human-form hardware, which is still some way off; it will be a while before a robot can come into your house and fix your plumbing.

We will soon need to start asking ourselves what the point of work is. Do we need a job to live? If so, then we are going to have some serious problems in the near future. If we decouple having a job from having a right to exist, then we may enter an actual post-scarcity age, where the purpose of humans is no longer to produce goods and services but to request them, and human creativity and imagination are harnessed to develop new goods and services that we never would have thought of otherwise.

Or they kill us all, eliminating the inefficient, wasteful, irrational meat bags that we are.

The singularity is that those increments are bigger steps in capability and they are coming faster.

So, in 6 months, GPT5 comes out. Then a month later GPT6, followed a week later by 7, with 8 rolling out the next day. Around noon ChatGPT9 debuts, and is replaced at 1:15 by ChatGPT10. At 1:23 P.M., all the phones in the world ring at once.

As a note, I do not actually subscribe to this timeline of events, just describing the timeline of events that would be a singularity.

The speed of the improvements doesn’t matter. ChatGPT is a narrow AI, and there is almost no chance (by which I mean, in a practical sense, zero) that it will become a general AI. It would be a great surprise to all involved if that were to happen. There’s no particularly good reason why it should.

That would be comforting to this wasteful formation of meat if ChatGPT were the only AI system in development.

Well, true. I was speaking statistically, as when most people can’t keep up with the changes. Take someone from, say, five years ago, and Rip Van Winkle them to today, and they’ll have a bit of adjustment to make, but they’ll probably handle it. Take someone from 30 years ago, and do the same, and they’ll find themselves in an absolutely incomprehensible world. There will be exceptions, people who were ahead or behind the curve to begin with, or who are naturally more or less adaptable, but overall, for most people, that’s how it’d go. And that’s true of any 30-year-span: Someone from 1980 would have felt the same in 2010, as would someone from 1066 in 1096.

Obviously I don’t know about every AI being developed everywhere in the world, but I keep up with the AI literature quite a bit, and I’m not aware of any AI system that would have any particularly good reason to become a general AI let alone a strong AI.

Well, other than my approach, which will obviously produce killer robots any minute now. :slight_smile:


I disagree with that assertion.

The speed of progress is accelerating, and 5 years progress in the future may be significantly more than the last 30 years.

I also don’t think that the progress of technology a thousand years ago was nearly as rapid. I’d say you could take someone from 500 BC and plop them into 1500 AD, and, other than language, they’d adapt fairly rapidly. Sure, there’s some new stuff, but not all that much, and even less that impacts the daily life of the average individual.

But the point is you don’t need a general or strong AI in order to more or less make all of us meatbags irrelevant. They don’t need to be sentient to take our jobs, and they don’t need to be sentient to follow the orders of those who built them to wipe out all those useless humans who are complaining about starving to death.

I think that, by its very nature, it’s impossible to actually predict when a singularity will occur. The whole point is that, at some point, development suddenly takes a huge upward swing, with new breakthroughs coming faster and faster. You might be able to see it as it happens, but you likely won’t see it coming. That’s kind of the nature of exponential growth. It looks mostly linear and flat until it doesn’t.

Yes, technology is advancing at an unprecedented rate nowadays. But that also has always been true. That’s the nature of exponential growth: It always looks the same.
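That “it always looks the same” point can be made precise: exponential growth is self-similar, so at any moment roughly half of all the growth to date happened in just the most recent doubling period. A minimal sketch (the doubling time and time points here are illustrative assumptions, not claims about any real technology):

```python
# Self-similarity of exponential growth: at every point on the curve,
# the recent past looks dramatic and everything before it looks flat.

def growth(t, doubling_time=2.0):
    """Capability at time t, doubling every `doubling_time` units (illustrative)."""
    return 2 ** (t / doubling_time)

# Fraction of all growth so far that happened in just the last doubling period:
for now in (10, 30, 60):
    recent = growth(now) - growth(now - 2.0)
    total = growth(now) - growth(0)
    print(f"t={now}: {recent / total:.3f} of the growth came in the last period")
```

No matter where you stand on the curve, that ratio is about the same (it converges to 0.5), which is exactly why the present always feels uniquely fast.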

What differences would an average individual encounter over a 30 year span a thousand years ago? There’s a reason why there were traditional ways of doing things that were passed down through the generations. Farming for one generation was the same as farming in the next. Hundreds of years would go by without a single change to how people planted and harvested crops. When there was a change, it was a pretty big deal, often delineated by archeologists and historians as an era or age.

Now in months we see far more change than someone a thousand years ago saw over their lives.

The nature of exponential growth is that it is always accelerating, and only looks the same while in the early stages. The later stages are distinctly different.

If you approach a black hole, it will continue to look pretty much the same; you will never really notice a difference as you cross the event horizon. But all does not continue to be the same: at some point, you get ripped apart by tidal effects.

While it is true that the slope of an exponential curve seems the same to any point on it, that doesn’t translate into lives being the same as they follow it.

By AI I was thinking about the Turing test - or at least a machine that uses intelligence as opposed to following instructions.

Making a cup of tea requires intelligence (and as someone who has studied educational development in children, particularly children with SEND, I can say it requires a lot more intelligence than most people think). But programming instructions into a robot to do it really would not be intelligence.

I don’t see any current AI that is as intelligent as a human.

Also, to be clear, the discussions on other social media ARE talking about the singularity within months.

You are right about 1000 years ago, but growth in computer technology has been exponential for the last 60 years at least.
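The back-of-envelope arithmetic behind that 60-year claim, assuming the classic Moore’s-law figure of transistor counts doubling roughly every two years:

```python
# 60 years of doubling every ~2 years (the classic Moore's law figure).
years = 60
doubling_period = 2                   # years per doubling (an assumption)
doublings = years // doubling_period  # 30 doublings in 60 years
factor = 2 ** doublings
print(doublings, factor)              # 30 1073741824 -> about a billion-fold
```

A roughly billion-fold improvement in six decades is why a 30-year jump today is so much more jarring than a 30-year jump a thousand years ago.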