Straight Dope 2/24/2023: What are the chances artificial intelligence will destroy humanity?

That’s not necessarily true. A neural network with significantly more “neurons” than there are real neurons in the human brain - say 8x, 16x, or 32x as many - would be able to model a human brain more fully, including multiple properties of the state of each neuron.

Our current neural networks haven’t even reached 1x, so we have a long way to go. But, in principle, it wouldn’t be impossible. It’s just an engineering problem.
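To put very rough numbers on that scaling idea - a back-of-envelope sketch only, since the ~86 billion figure for human neurons is just a commonly cited estimate and an artificial “neuron” is not equivalent to a biological one:

```python
# Rough scaling arithmetic for the "Nx the brain" idea above.
# ~86 billion is a commonly cited estimate of human neuron count;
# artificial "neurons" are not directly comparable to biological ones.
HUMAN_NEURONS = 86_000_000_000

for multiplier in (1, 8, 16, 32):
    units = HUMAN_NEURONS * multiplier
    print(f"{multiplier:>2}x brain-scale network: ~{units:.2e} artificial units")
```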

Also, it’s doubtful that the most efficient way for a silicon processor to model a human brain’s function is to literally model every individual neuron, with information about multiple properties of each one. But if that’s what was needed, there’s no reason it shouldn’t be possible.

Theoretically, we wouldn’t NEED to.

Let’s say that there are some analog properties relevant to the function of the human neuron that mean a brain works just fine in “wetware” but fails when modeled in hardware. There is some network size that would be sufficient to model the entire brain, with the state of each neuron, including this mysterious property. Maybe it’s 2048x or 4096x as many “neurons” as a real brain has rather than 8x or 16x; fine, that’s still just a solvable engineering problem away. Maybe it pushes us back a couple of decades, but it isn’t a hard limit.

The point is, with something like a NEAT algorithm (NeuroEvolution of Augmenting Topologies, an evolutionary algorithm that changes the structure of the network itself as it goes), or a descendant of that method, we don’t NEED to figure out what missing property of the human brain we’ve failed to model. We simply need a computer powerful enough to handle a network big enough to capture that property, and then we let evolutionary processes do what they’ve already done once, what we know with certainty they can do.
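For a concrete sense of what a NEAT-style algorithm does, here’s a minimal sketch in Python. The `Genome` class, its `mutate` method, and the mutation probabilities are my own illustrative inventions, not the actual NEAT implementation or any library’s API; the point is only that the network’s topology is itself part of what gets mutated, so the search can grow whatever structure turns out to be needed without anyone specifying it up front.

```python
import random
from dataclasses import dataclass, field

# Minimal sketch of a NEAT-style genome: the topology itself is mutable,
# so evolution can grow the network rather than only re-weighting a fixed one.
# All names and probabilities here are illustrative, not a real library's API.

@dataclass
class Genome:
    nodes: set = field(default_factory=lambda: {0, 1})                 # node ids
    connections: dict = field(default_factory=lambda: {(0, 1): 0.5})   # (src, dst) -> weight

    def mutate(self):
        roll = random.random()
        if roll < 0.8:
            # Most mutations just nudge an existing connection weight.
            edge = random.choice(list(self.connections))
            self.connections[edge] += random.gauss(0, 0.3)
        elif roll < 0.95:
            # Occasionally split an existing connection with a brand-new node.
            src, dst = random.choice(list(self.connections))
            new_id = max(self.nodes) + 1
            self.nodes.add(new_id)
            del self.connections[(src, dst)]
            self.connections[(src, new_id)] = 1.0
            self.connections[(new_id, dst)] = random.uniform(-1, 1)
        else:
            # Rarely, wire up two nodes that weren't connected before.
            src, dst = random.sample(sorted(self.nodes), 2)
            self.connections.setdefault((src, dst), random.uniform(-1, 1))

genome = Genome()
for _ in range(100):
    genome.mutate()
print(len(genome.nodes), "nodes,", len(genome.connections), "connections")
```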

Why would we need to? If we wanted AI to run a fruit fly body, we’d create it the same way nature did: modify it at random, keep the modifications that improve performance, and remove the ones that don’t.

So yes, your first-gen fruit fly AI will writhe randomly on the ground. Your 5th-gen fruit fly AI will writhe towards some fruit. And your 25,000th-gen fruit fly AI will fly circles around real fruit flies.
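A toy version of that mutate-and-select loop might look like the sketch below. The `evaluate` function here is a dummy placeholder standing in for “run the controller in a simulated fruit fly body and score how close it got to the fruit”; the population size, mutation rate, and survivor count are arbitrary numbers I picked for illustration.

```python
import random

# Toy evolutionary loop for the hypothetical fruit-fly controller.
# "evaluate" is a placeholder: in a real setup it would run the genome as a
# controller in a body simulation and score how close it got to the fruit.

def evaluate(genome):
    # Dummy fitness: pretend genomes nearer an arbitrary target score higher.
    return -sum((g - 0.7) ** 2 for g in genome)

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

population = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(50)]

for generation in range(25_000):
    scored = sorted(population, key=evaluate, reverse=True)
    survivors = scored[:10]                        # keep the modifications that helped
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]  # discard the rest, mutate the winners

print("best fitness after 25,000 generations:", evaluate(population[0]))
```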

I assure you Little Ed is not a sock. He is a real individual with a mortgage, wife, kids, etc. However, asking him to do some typing on one’s behalf is, as you can see, a risky business, leading well-meaning parties such as yourself to jump to false conclusions. I have administered a gentle thrashing and am hopeful, if not entirely confident, this won’t happen again.

All in due time, my friend.

I’ve bolded the contradictory bits in your statement. San Bushmen very much rely on each other to survive, they don’t go it alone either.

At some point you do need to actually know what you’re trying to model. At a minimum, narrowing the vast, vast phase space of how neurons could potentially work down to how they actually work eliminates 99+% of the trials your neural network has to be put through. IMHO, you’re being too casual about the supposed power of genetic algorithms to be black boxes out of which useful results magically appear.

I was simply using fruit flies as a hypothetical test model because they’ve been examined down to the genetic level for decades and, for a complex organism, are comparatively well understood. My point was: let’s test your confident assumption that you can successfully model intelligence on a far simpler challenge. If in fact your genetic algorithm fails to converge on even a functioning fruit fly, then clearly one or more starting presumptions were incorrect.

Certainly not for an extended period of time, but I meant that someone native to these areas would be able to survive alone for some time (e.g. on a hunt, scouting during conflict, etc.) whereas a city dweller would struggle to make it even that long. But this doesn’t damn the entire enterprise of city building.

You are correct that hunter gatherers rely on one another to survive in a harsh environment, and this kind of cooperation is a skill that has atrophied to an extent as well.

Sure, and I’m sure there are ways of kinda-sorta modeling the chemical gradients and the like too.

But no-one’s doing it now.

Which designer knew what the genetic algorithm contained in our own DNA was trying to accomplish ahead of time?

I’m confident that it can be done, because us sitting here discussing it is proof that it has been done.

Right, and there is no AGI available now.

Except that for our DNA to do its business, you have to already have a complete, functional living cell - the step which, in the case of neurons, you insisted wasn’t necessary to specify. Now, obviously, between three and a half and four billion years ago some iterative process got us from organic molecules to the first fully autonomous bacterial cells, but that almost certainly was a different iterative process than how evolution subsequently acted on living cells. Put it this way: abiogenesis obviously happened, and the development of intelligent organisms obviously happened; but it’s very doubtful the two could have been done as a single process.

Sure, that’s true, thanks for clarifying. I agree.

Moving all that matter around would waste a very significant fraction of the available fuel in those stars. Instead, just pick the longest lived star (a red dwarf) and build your collection swarm around that - a small red dwarf could last for trillions of years.

I would suggest building collection swarms around all the nearby red dwarfs, but that could fall foul of the objection raised by Tibby here;

The restriction imposed by the speed of light in our universe will always lead to paranoia - each Dyson swarm would be worried about what all the others are doing behind their cloak of light-year separation.

And letting them burn out, hundreds of light-years away, wastes ALL the energy.

You don’t need to send anything nearly as intelligent as yourself to head to a nearby star, harvest its resources, and send them back; but that probably IS the biggest reason for an AI to avoid doing this.

Yeah. Your probe might upgrade itself and build its own Dyson swarm. With hilarious results.
At least we can console ourselves with the thought that our AI overlords are just a mistake away from destroying each other by accident.

We aren’t trying to model the entire process though, or even the finished product. We are trying to evolve something new, from scratch, using the same basic principles, to achieve a similar end result. And by doing it virtually, we don’t need to wait billions of years for the process to complete.

Except I was under the impression that we were discussing developing artificial intelligence by emulating the function of neurons in brains. If we’re not, then we’re back to trying to create general intelligence from first principles.

Why do you assume that an intelligence will necessarily have the same drive for survival that we (and all animals) have? We likely have the strong drive for survival because of the way life on Earth evolved. That doesn’t mean that anything that is intelligent will be hell-bent on continuing its existence.

(Even some humans decide that it’s better not to continue their existence, and act on that)

“I exist, therefore I must do everything in my power to continue existing” is not necessarily a must for something to be deemed intelligent.

That’s what we are doing, but that doesn’t mean the final product has to resemble a human brain in form (even if it does in function), nor that it has to follow the same process to develop.

Aside from the deep-dive function and theory discussions here, my impression is that at least a slight majority of posters in this thread agree that there is a possibility of some threat to humanity.
Also, evidenced here is that there are many unknowns about the progression of AGI when it finally awakes and acts independently.
As stated up-thread, developers were debating whether they should keep it caged, or let it out.
Right now, ChatGPT and similar AIs are being introduced to the masses, with sometimes mixed results.
We don’t know for sure but it’s probably already out in the wild in some form. Keep in mind the Internet of Things is connected to government, industry and many people’s homes.

My unsolicited advice is to err on the side of caution.
Instead of being swept up in awe by the technology, perhaps people need to demand that developers put some safety mechanisms in place while we are still the decision makers.

Maybe not, but after a significant period of time only entities with such an imperative would continue to exist. If you create an entity without the goal of continued existence, it wouldn’t last long.

A more useful goal would be ‘I exist, therefore I must do everything I can to increase my knowledge about the universe, and to increase the processing power which I can use to examine that information’. Hopefully an AI with curiosity would not be so enthusiastic about destroying humanity. We could make good pets.