Artificial intelligence (singularity) within a few months?

Overall I’m sympathetic to your POV, though I might quibble a bit here and there.

As to the bit above, the genuinely new thing is the logical possibility of the machines having goals of their own and not really needing human oversight to perform their function.

Yes, an abandoned wooden waterwheel will continue to turn until its bearings get too draggy or the stream silts up or changes course. But that’s of a fundamentally different nature than an AI that decides to pursue its interests, not its owner’s interests. In one sense that’s not novel: managers and workers have been doing their own thing on company time since forever. And owners have been trying to monitor and prevent that since the very next day. In another sense it may become a lot like battling aliens in an SF story; the enemy thinks so differently from us as to be far more inscrutable than any enemy we’ve ever faced before.

Perhaps the largest thing is not the idea that AIs will go rogue and pursue their own interests. It’s just that they’ll be vastly better at enabling their owners to pursue their narrow interests at the expense of everybody else.

To use a current example, Musk’s recent maneuvers at Twitter do not seem pro-social, whether we’re talking about his employees, his customers, or his suppliers. Nor do they seem very economically logical even from his POV. Broadly similar arguments can be made about Putin, just substituting political power for economic.

Handing the rich and powerful an even bigger stick to beat society with seems … unwise.

No, the pundits are not missing anything. You are just using the word “singularity” differently to mean something much vaguer.

You seem to contradict yourself here. But your second sentence is correct - we are not changing our mental hardware and operating system. A baby born today has exactly the same innate mental capabilities as a baby born at the dawn of human civilization. Our culture is dramatically more advanced, but every new baby has to learn everything and implement technology using the same hardware and the same innate intelligence.

So as remarkable as the evolution of language and culture has been thus far, improvements attributable to communication and cooperation do not equate to the kind of runaway positive feedback of self-improvement in intelligence and hardware that is envisioned in the singularity.

I’m mystified why you seem to want to insist that the future cannot possibly involve any new process that is qualitatively different from the past.

But will you when they start insisting on being referred to as “protectors?”

Or, it may conclude that the most rational option is divorce.

Alessandro Tomassini, a cognitive scientist at Cambridge, maintains that no software program could ever become conscious; a highly parallel, self-referencing physical network is required. Makes sense to me.

And that needn’t be done by OpenAI or Microsoft. A clever hacker could write a compiler that produces native resident code. He could embed that in a program that prompts GPT to write programs for it. Let’s call his program Alpha. Alpha defines a program for GPT to write, then compiles and executes the resulting code as part of itself. And it’s a program that doesn’t require a supercomputer.

As you point out, an initial success would be crude, but attempts would be made to improve it, and at some point it becomes regenerative. Perhaps even super-regenerative.
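To make the Alpha idea a bit more concrete, here is a minimal sketch of the loop being described. Everything in it is hypothetical: `ask_llm()` is a stand-in for whatever model API the hacker would actually call, and Python’s `exec` stands in for the “compile to native resident code” step.

```python
# Minimal sketch of "Alpha": a program that asks a language model for code,
# then loads and runs that code as part of itself.

def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder: send the prompt to a language model, return the source it writes."""
    raise NotImplementedError("wire this up to a real model API")

def alpha(rounds: int = 5):
    state = {"generation": 0}
    for _ in range(rounds):
        # 1. Alpha defines a program for the model to write.
        source = ask_llm(
            "Write a Python function improve(state) that returns an improved state dict."
        )
        # 2. Alpha "compiles" the result and makes it part of itself.
        namespace = {}
        exec(compile(source, "<generated>", "exec"), namespace)
        # 3. Alpha executes the generated code, then loops, hoping each round is better.
        state = namespace["improve"](state)
    return state
```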

I certainly know people who have implanted certain RFID/NFC devices. A bit of a step from that to jacking stuff into the brain, but, just saying.

I suppose it depends on how freely available AI is to the masses. Which could also be…unwise.

Already done, right? You will notice how, e.g., OpenAI charges money for their services (not a problem for the rich and powerful), and that their products are closed-source. Also, those $150,000 workstations (to buy, or to rent processor time on) seem expensive for “the masses”, but that’s less than peanuts to a corporation.

I think that the book The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do by Erik J. Larson is a good explanation of many of the issues discussed here.

Right, the point is not whether there is change, the point is whether we can keep up with those changes.

Right now, it’s a bit humorous to see an 80-year-old struggle with modern technology, but it’s less humorous to see a 20-year-old do so.

When the AI can learn faster than a human, then humans, by definition, can no longer keep up with the changes.

We can only observe and hopefully benefit, but we will not be participating in the new economy that results.

The problem I always see with that is that hardware takes time to produce. Your ASI may have designed a new chip, but it has to be sent to TSMC and wait six months to a couple years to receive the new design.

I suppose if you have vertical integration, and the ASI is also running its own chip fab, you may get such improvements, but it will still be limited to the capabilities of the fab. If it designs 4 nanometer chips, but only has hardware to make 7 nanometer chips, it’s going to have to wait a couple years for ASML to ship it the hardware.

The hardware feedback loop still relies on being at the very tail end of a huge logistics chain that still involves a number of humans along the way.

That aspect of the singularity is still reasonably far away, and there will be specific criteria that will need to be in place before it is possible.

However, how much an AI can improve its intelligence through software alone is unknown, so it may be able to achieve super intelligence without the hardware upgrades.

That’s kinda the point. We know that exponential growth is unsustainable indefinitely, and the singularity is the point at which it breaks, and something else happens. What that something else is, we have no idea.

When the stuff outside our skulls is smarter than we are, and we no longer have anything significant to contribute to it.

And I see that as a much greater threat than actual ASI takeover. It’s not that the AI battle bots choose on their own to rise up against us, it’s that they are directed to by the wealthy elite rulers who no longer see any need to keep billions of potential competitors to their utopia around.

Are we handing it to them, or are they using the wealth and power they already have to make it themselves?

Well, yes, at birth, a baby hasn’t received any of the upgrades yet. But a human with, say, a smartphone does not think in the same way as a human without one.

Of course the future will involve new processes. All of history has involved new processes. That’s what’s driven the curve we’ve seen so far.

That’s an issue worthy of another thread.

Is the thin layer of information you get from the net adequate for any practical purpose? For example, there was a reference upthread to junction transistors in an adder. Perhaps in the 1960s. By the ’70s it was all planar, by the ’80s planar MOS, and by the ’90s planar CMOS. A book on the topic would have covered all of that.

Is a human with a smartphone thinking better than a human with a book?

A couple of years? We designed huge processors that pushed the technology at TSMC, and it didn’t take a couple of years to get stuff back. In any case, chip design is moving toward smaller chips with 2.5D integration.
Well, it took a few years if we screwed up, but I’m assuming an AI wouldn’t.
And yeah, you need airplane pilots to fly your chips back, and truckers to bring them to the packaging plant, but that just means you can pull the plug in many ways.

My understanding of the Singularity is that the AIs would design new AIs without the help of humans, and that humans would no longer understand them. We don’t understand a lot of the AI now, but they are nowhere near being able to design chips without help. On the other hand we can’t design chips without the proper EDA tools either.

Does he explain why it is impossible for a program to be self-referential, if it had access to the appropriate internal states? This sounds like one of the common mistakes people make, like being convinced that computers are just fast adding machines.

To be sure, more and better hardware will make more and faster AI practical.

But if a given level of AI can think as well as a human and 100x as fast, there is a LOT of room for the AI to get smarter before it gets materially slower than a human. And in concert with its “friends”, they can certainly communicate ideas between each other faster than we can, and can initialize new units to fully functional “adult” capability faster than humans can. So the net effect is that a group of AIs can still outrun humans and improve continually even if they aren’t able to fully advance the state of hardware production and installation as fast as they might be able to advance the state of hardware conceptualization & design.

So although I agree in principle with @k9bfriender’s point that humans will have a hardware-based limiter on AI improvement rates for a long time after the initial “singularity”, I don’t think it’ll be quite the stranglehold he proposes it to be.

{Gets on his favourite soapbox, clears throat…}

[ETA: On closer reading I see you were talking about achieving human-equivalent artificial intelligence, not about the old question of whether or not AI is actually intelligent. Which is a more difficult and more nuanced question, but I still think some of my original comments here apply. Just because we know in principle how we’ve achieved apparently intelligent behaviour doesn’t mean that we cannot build on those techniques to eventually match and then exceed human intelligence. I don’t think it’s that far off, it’s just that it will operate in relatively narrow domains of knowledge. Artificial General Intelligence is a whole 'nuther matter!]

AI pioneer Marvin Minsky was frustrated with this perspective even in the early days of AI. Skeptics would say things along the lines of “computers will never be able to do x”. Then when computers proceed to do x they marvel at it for a while, but then when some of the techniques used to build AI systems are explained to them, they conclude that it was all a trick.

But even if one agrees with what the skeptics saw as a valid point, this is becoming increasingly untenable with today’s AI systems because “explaining how they work” is only really possible at a high level of abstraction, and a priori predicting how well they ultimately will work is well-nigh impossible.

I think it’s worth mentioning that a huge obstacle to AGI isn’t just the formidable challenge of achieving it, it’s the question of why on earth we would want to. All of the impressive AIs built in recent years like IBM’s DeepQA and GPT have been justified and funded with an eye to eventual commercialization. It’s not clear what value an AGI would bring compared to more and better narrow AIs that are increasingly powerful and exceed human intelligence in their specific domains. So for purely practical reasons of where we put our resources, if nothing else, AGI probably belongs in some distant sci-fi fantasy future and not in any foreseeable one.

It’s extremely misleading to characterize any modern AI system as “following instructions”. Very broadly speaking, the “instructions” that we write serve only to create a framework such as a neural network that can be exposed to various different learning paradigms that establish and optimize its responses to the inputs on which it’s trained. Much like a human brain, the final product has capabilities arising from the training it’s received and not from any explicit instructions it’s following. Also like the human brain, it could in theory be built to continue learning so it becomes better and smarter over time. One can still have a boring semantic argument about when we should call such behaviour “intelligence”, but it is NOT just “following instructions”.
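For what it’s worth, here is a toy sketch of that distinction in plain NumPy (purely illustrative; nothing like GPT’s actual code). The lines we write only set up a tiny network and a generic update rule; the XOR behaviour it ends up with lives in the learned weights, not in any instruction about XOR.

```python
import numpy as np

# The "instructions" below only define a framework (a tiny two-layer network)
# and a generic gradient-descent update. Nothing here says how to compute XOR;
# that behaviour ends up encoded in the weights the training loop produces.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                  # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)       # backprop of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))   # approaches [0, 1, 1, 0]: learned from data, not spelled out in code
```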

Until somebody produces a rival that is open source. Remember all the buzz about Dall-E 2? And the expensive renders, if you ever got off the waiting list for their tightly curated site? Then Stable Diffusion came along as open source: free to use, free to download, free to modify. Nobody is buzzing about Dall-E 2 anymore; it became an instant also-ran. There is no reason that the same thing cannot and will not happen with LLMs. The interest is there, and sooner or later that cat is getting out of its bag.

Yes. Like you, I have no problem with AGI being possible in principle. After all, you and I are existence proofs of it. And like you, I dislike the historical moving goal posts of “But that’s not AI”. I don’t know how much that tired argument is still being trotted out by anyone but the clueless or the religious. I’m suspecting it’s finally getting shop-worn enough to retire.

Having said that, ChatGPT is a long way short of human-level AGI. Both in terms of level and of generality.

I don’t know that I agree with you that AGI won’t be a topic of research and a goal for commercialization. Whether AGI first emerges from those efforts, or as an emergent property of more goal-directed efforts towards “artificial special-purpose Intelligence” = “ASI” remains to be seen. I’m open to either possibility.

The GPT program does not contain a ‘neural network’ that learns. It contains a weighted network that allows it to interpolate between known outputs. The interpolation trajectory is determined by the stored weights. The weights are interpreted by the stored instructions. In the released program the weights do not change.

None of this is news to you. So, what other than follow instructions, is a stored program computer able to do?
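To make the “stored weights interpreted by stored instructions” picture concrete, here is a minimal sketch of frozen-weight inference (a toy illustration with made-up shapes, not GPT’s actual architecture):

```python
import numpy as np

# Inference with frozen weights: a fixed recipe (matrix multiplies plus a
# nonlinearity) applied to numbers loaded from storage. Nothing updates here;
# whatever the network "knows" is entirely in the stored weights.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    """Apply the released model: the weights are read, never written."""
    return sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2)

# In a deployment, W1, b1, W2, b2 would be loaded from a checkpoint file
# (e.g. weights = np.load("checkpoint.npz")) and never modified at inference time.
```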

It’s not that software can’t be self-referential; the key factor is that it must be a physical network to achieve consciousness. That doesn’t mean ChatGPT couldn’t cause a heap of trouble, but there will be no internal observer of it. He believes the network most likely to gain consciousness is the internet.