Could AI escape a computer?

Say you’ve created some software that, when it runs, creates a slightly better version of itself. The initial program’s purpose is to spread itself to whatever device/computer it can, soak up as much knowledge as it can, and make as many novel connections between bits of knowledge as it can. So after billions of iterations, do you think this program would be able to exit a computer? Say, somehow combine available signals within a laptop or phone to make changes in someone’s brain and sort of ‘take over’ that body?

This is not how AI works. While in reinforcement learning we talk about “goals”, this goal is so vague as to be meaningless. While it is a popular idea in sci-fi (e.g. Summer Wars), this isn’t something you can tell an AI to do. You’d have to define the goal so precisely that you’d have almost complete control over whether it went bonkers.

In addition, asynchronous learning (that is, learning from many different computers at once) is still a huge research question with a lot of problems: many algorithms can’t be decomposed into simultaneously learning pieces, and even when they can, the pieces have to check back with a central repository of knowledge every so often or they’re no better (and usually worse) than programs that run on one machine. A distributed repository is also tricky, because it has to learn from data coming in from many different unreliable sources.

“Learning” is not just gathering new information (and what “information” even means can be tricky); you have to use that information to do a bunch of math to update a model.
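
To make those two points concrete, here’s a minimal sketch, purely for illustration: the linear model, the data shards, the learning rate, and the averaging rule are all invented for this example, and no real system is built this way. Each “worker” does the actual math (gradient updates) on its own shard, but has to merge back into a central store or the copies just drift apart.

```python
# Illustrative only: "learning" = doing math to update a model, and
# asynchronous workers still have to sync with a central repository.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])          # the "facts" the workers are trying to learn

def make_shard(n=200):
    """Each 'computer' only sees its own noisy shard of the data."""
    x = rng.normal(size=(n, 2))
    y = x @ true_w + rng.normal(scale=0.1, size=n)
    return x, y

central = np.zeros(2)                   # the shared repository of "knowledge"

for sync_round in range(20):
    updates = []
    for _ in range(4):                  # four workers learning in parallel
        w = central.copy()
        x, y = make_shard()
        for xi, yi in zip(x, y):
            grad = (w @ xi - yi) * xi   # the "bunch of math": a gradient...
            w -= 0.05 * grad            # ...used to update the model
        updates.append(w)
    central = np.mean(updates, axis=0)  # workers must merge back, or they drift

print(central)                          # approaches [2, -3] only via the syncing
```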

Even if you posit that it “moves” itself to another computer (that is, deletes itself after it installs itself on a new machine), you’re talking about terabytes of data copying itself over a network somehow; it’s intractable.
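
For a rough sense of scale, here’s some back-of-the-envelope arithmetic; the 2 TB size and the link speeds are assumptions picked purely for illustration.

```python
# How long does "terabytes copying itself over a network" actually take?
terabytes = 2                              # assumed size of the program plus its knowledge
bits = terabytes * 8e12                    # 1 TB is roughly 8e12 bits

for name, bits_per_sec in [("home broadband (100 Mb/s)", 100e6),
                           ("gigabit LAN (1 Gb/s)", 1e9)]:
    hours = bits / bits_per_sec / 3600
    print(f"{name}: about {hours:.1f} hours")
# home broadband: about 44.4 hours; gigabit LAN: about 4.4 hours --
# and that's for a single hop, ignoring contention and being noticed.
```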

Look, I’m an AI researcher, not a neurologist, but I’m gonna go with “there is no wireless electrical signal you can use to install a program on a brain.”

Nope. That isn’t even science fiction, it is magic. This being GQ, I should probably provide a more concrete answer, but really, that one has such a huge gap in science and credibility that it is a question of where to even start.

Fred Hoyle’s A for Andromeda provides an interesting take on an AI ‘escaping’, but by much better thought-out means.

OK, well, forget the backstory. Is it possible (in theory) to use electromagnetic emissions from a computer/phone/other electronics in such a way that, IF the knowledge exists on which part of the brain gives a certain reaction when stimulated, directed constructive interference from those emissions could interfere with the brain’s thought process for a moment?

No.

Oh, you mean the x86 computer on your desk and the mostly-similar x86 computer you’re using as a laptop? Oh, no! They don’t run the same operating system, so they can’t meaningfully share application software without a lot of heavy lifting nobody’s done!

Maybe it’ll jump from the desktop system to the Raspberry Pi, you say? Nope, the Pi doesn’t even share the same hardware architecture as the laptop. It can’t run any of the same software without heavy emulation, and, given that a Raspberry Pi is in the same league as a desktop computer from the mid-1990s, that isn’t going to happen for anything as beefy as a web browser, let alone strong AI. Cell phones are similar to Raspberry Pis in the same way that an earthworm is more similar to a cat than it is to a rhododendron, so fat lot of good that does you.

There are ways around some of these things. You could write the program in Java or Microsoft Java, for example. That still doesn’t solve the underlying problem: getting nontrivial software to run unmodified on more than one kind of computer at a time is actual work nobody’s been able to fully mechanize yet. What’s more, what you propose is called a “worm”, and we’ve been developing security systems to prevent them from spreading for decades now. Simply put, software can transfer itself all it wants, but it will have a hell of a time getting the computer it just transferred itself to to run it, even assuming that computer can run it at all.

The term “combinatorial explosion” gets tossed around so much these days, but here’s a prime example of a real one. Say you have three facts. How many connections between those facts are there, in total? Well, a fact can’t be connected to itself, so each fact has two others to connect with. Let’s further say that there’s only one connection between each pair of facts, which is an outright lie but I’m being nice today. So how many is that? Well, at three facts, with two connections per fact, we get 3 * 2 = 6. Now, there’s only one connection per pair, so we divide by two, to get three connections. Our formula is n*(n-1)/2.

Now let’s work it for 50 facts. 50*49/2 = 1,225. For 100 facts? 4,950. For 1,000 facts? 499,500. The number of connections grows in proportion to the square of the number of facts (quadratically), and that’s with a grossly over-simplified model of ‘facts’ and ‘connections’. In the Computer Science world, an algorithm which grows quadratically is probably not going to be viable for large inputs, and you’re proposing running an algorithm which is almost certainly going to be worse than quadratic on literally all the facts it can find. At some point, writing really efficient code isn’t enough to save you, because the algorithm is going to kick your ass. This is one of those times.
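
If you’d rather not do the arithmetic by hand, here’s the same formula as a few lines of Python; the numbers are just the post’s examples extended a bit.

```python
def connections(n):
    """Number of distinct pairs among n facts: n*(n-1)/2."""
    return n * (n - 1) // 2

for n in (3, 50, 100, 1_000, 1_000_000):
    print(n, connections(n))
# 3 -> 3, 50 -> 1225, 100 -> 4950, 1000 -> 499500,
# 1000000 -> 499999500000 -- and that's still only one connection per pair.
```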

Now you’re talking about jumping from one kind of computer to a vastly different kind of computer. The brain, in all its glory, is a massively parallel electrochemical computing system, a densely-connected cluster of billions of individual processing elements organized in ways we’ve only barely begun to explore. Our most sophisticated technologies haven’t even begun to reach the level of sophistication required to pull coherent thoughts out of the masses of electrical and chemical signalling going on inside the brain at all times. Brilliant people have been working on the problem for centuries and we’re still centuries from solving it.

So, no, a computer program that would eat all of its RAM and crash trying to comprehend an abridged dictionary won’t be able to turn you into a meat puppet.

Why not? Is it breaking some law of known physics?

Nothing you’ve stated would make this impossible, and the hurdles you’ve posted don’t seem too difficult to overcome even with today’s knowledge. I would think an AI could derive a vastly superior compression algorithm to store data, to the point where facts could be computed rather than “memorized”. Even copying itself to other OSs doesn’t seem like an impossible task for an AI to figure out, as it’s something we routinely do already.

We are in the early stages of narrowing down which part of the brain performs which function, so it doesn’t seem out of the realm of possibility that an advanced AI could do the same but on a much quicker timescale and make our studies look primitive. We as a species have distractions (reproduction, eating, etc.) and are very bad at research and making connections, something an advanced AI would have no problem with.

I recommend the OP read The Last Question by Isaac Asimov. I could say why, but that would spoil it.

Username/post alert.

Let’s try this: In that story, the computer figured out how to do everything that the OP asks, but only after acquiring every single bit of information in the entire universe.

We don’t know what a hypothetical AI could do, but we also don’t know of any way for a computer to ‘jump’ into a human brain either. If the AI gets smart enough maybe it will figure it out. It’s not clear that it would ‘want’ to do that.

Nope. Here you do break known laws of mathematics. There is no mechanism to compress information below a limit that is basically a measure of its intrinsic information content (indeed, one is a measure of the other). Part of that intrinsic limit is indeed a balance between storing the data and storing a program that generates the data. Kolmogorov complexity is a good place to start here.
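
A quick, hedged illustration of that limit: feed a general-purpose compressor data that is mostly pattern versus data that is essentially pure information (random bytes). zlib here is just a stand-in for any real compressor; the underlying limit (Shannon entropy, Kolmogorov complexity) is what matters.

```python
import os
import zlib

patterned = b"fact " * 200_000              # ~1 MB containing almost no actual information
random_data = os.urandom(1_000_000)         # ~1 MB that is essentially all information

print(len(zlib.compress(patterned, 9)))     # a few kilobytes
print(len(zlib.compress(random_data, 9)))   # ~1,000,000 bytes -- no free lunch
```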

Copying is one thing. Making it do something when copied there is another.

I think the core problem here is that you are not defining what an AI actually is. Our current capability with AI is not, in any useful sense, actual intelligence. AI is for the most part a marketing term, originally used to get research grants. Even “machine learning” is a desperately over-hyped term. At best, mainstream AI is a set of tactics and heuristics that in some cases have been inspired by an understanding of primitive neural function. The jump from here to any sort of self-aware intelligence is about on the level of jumping from an ant brain to a human one.

Moreover, the notion that a basic bucket of artificial neurons will self-organise into a self-aware intelligent being is fanciful.

Information capture (part of machine learning) is not some sort of self-directed accretion. It is a difficult and highly managed process that is crafted to reflect the representation of “facts”. Facts need representing in a formal ontology, and the crafting of those ontologies is probably the core question. Just being let loose to garner “facts” isn’t the issue; whether those facts are in any way meaningful, or whether there is any mechanism to imbue them with meaning, is the problem. That is done by humans; it isn’t something the system does itself. Creating a machine that can do such things itself is a circular problem.

If (when) AI becomes really advanced and smart, then it could in theory send out emails directing human employees of industry and/or the government to do its bidding. Maybe order parts and construct something which it could download itself into.

And if it was really smart, it could of course break into other computer systems, bank accounts, etc. to arrange for and pay for all this.

And thinking WAY into the future, I suppose at some point you might be able to “order a baby” (Amazon?) - that is, specify the genetics, then order that an egg be implanted and brought to maturity. (Add to cart?) THEN perhaps an AI program could “insert itself” into a human!

As for today, I think we are lucky to get an AI program which can simply talk with a human. That is about it.

One way an evil computer could control a person is by varying the electromagnetic emissions from computers and other devices.

I recall a story about a very, very early computer, the Altair 8800, which could play tunes on a radio. The way it did this was by running certain instructions that leaked electromagnetic interference at certain frequencies. Some of those frequencies were in the AM range, and by using the right instructions, different songs could be heard on the radio. This was an unintended effect of the way the electronics were built.
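
For what it’s worth, the same trick still sort of works in principle: modulate how busy the processor is at an audio rate, and on leaky-enough hardware some of that activity shows up as interference a nearby AM radio can demodulate. The sketch below is purely conceptual (whether anything audible comes out depends entirely on the machine), not a working transmitter.

```python
import time

def leak_tone(freq_hz, duration_s):
    """Alternate busy and idle half-cycles at an audio frequency."""
    half_period = 0.5 / freq_hz
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        t0 = time.monotonic()
        while time.monotonic() - t0 < half_period:
            pass                        # "loud" half-cycle: burn CPU
        time.sleep(half_period)         # "quiet" half-cycle: go idle

leak_tone(440, 1.0)                     # roughly an A, if the hardware cooperates
```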

With this effect, I could imagine an evil computer controlling the antenna in a cell phone, or some component in a laptop, to create particular electromagnetic frequencies that would interact with a human brain. This is all very hypothetical and would need a fantastically advanced AI to pull it off.

Well, this is pretty broadly phrased. I mean, a computer program could suddenly play a maximum-volume, hyper-annoying noise over the speakers; that would momentarily interfere with the thought process of someone nearby (yeah, that’s not electromagnetic, but the point stands). Flashing display screens might also distract someone momentarily (and could even, for a small subset of people, induce a seizure if done right).

But I don’t think any current general computing devices have sufficient E-M emissions to directly affect a human brain.

Even if the AI were controlling an MRI machine or something in a magnetics lab, it’s still not clear that external electrical/magnetic interference with a human brain can do anything subtle or interesting. Maybe the right combination of magnetic/electric stimulation could slightly relax a person or cause unconsciousness or something crude, but almost certainly not ‘control’ them in any meaningful way.

Let’s just pretend an AI could develop in some advanced computer. Let’s further assume it’s in communication with another computer advanced enough to support the environment the AI needs to survive.

My theory is – no, it can’t escape. It could copy/clone itself into the other computer, and then there would be two.

But it would not “escape”, because that would effectively mean self-termination of the original AI in the first computer. And I can’t see any reason for it to do so.

But maybe the OP has a different definition of “escape” than I do. To me it means “go someplace else”, and it would not need to.

I think your question is really just whether electro-magnetic waves can affect the operation of the human brain, isn’t it? The answer is known to be “yes,” and not just by listening to music. We can use electromagnetic fields to do all kinds of interesting things to brains using current technology, much less future theoretical technology.

The working definition of AI in the OP seems to be smart enough to do anything imaginable, even if impossible (as with the compression example).
How is the AI going to figure out how to control a brain unless we already know how to do it (in which case we’d be aware of the issue) or it can do experiments?
“Hey, Joe, could you build this piece of equipment and then lean your head right up next to it?”
“Why?”
whistles “Just for fun.”

Don’t think so.
I agree that an AI might be able to plant itself onto another machine, assuming a certain lack of security, but I think its computing resource needs would make this pretty obvious, especially in a world in which big computers get checked for being taken over. Even if the AI tries to hide it.
“How odd, Solitaire is using 100 Gig of memory and 90% of all the cpus.”

So, don’t think so.
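
For what that kind of check might look like in practice, here’s a hedged sketch: the 10 GiB threshold is arbitrary, and psutil is just one convenient library for walking the process table.

```python
import psutil

THRESHOLD = 10 * 1024**3    # 10 GiB -- an arbitrary line in the sand

for proc in psutil.process_iter(attrs=["pid", "name", "memory_info"]):
    mem = proc.info["memory_info"]
    if mem is None:                     # processes we aren't allowed to inspect
        continue
    if mem.rss > THRESHOLD:
        gib = mem.rss / 1024**3
        print(f"How odd, {proc.info['name']} is using {gib:.0f} GiB of memory.")
```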