Could AI escape a computer?

Well, yes. But there is no evidence that the emissions an unmodified computer or phone can generate could have any specific, targeted effect on the human brain in the way you are imagining.

Yes, but that requires technology specifically constructed for the purpose (and even then it is basically only capable of temporarily shutting down parts of the brain, not imprinting thoughts, much less entire consciousnesses). There is no way to do it using, say, the WiFi transceiver in an ordinary computer, or the cellular radio in a phone.

The OP was specifically asking not just whether this was possible at all, but whether it could be done using “available signals within a laptop or phone”.

Technically, I suppose electroshock ‘therapy’ would meet this definition, but I don’t think that’s the kind of interference with the brain’s processes that the OP was expecting.

It would probably be more effective to learn powers of persuasion and just get people to do what it wants that way. Maybe it could convince someone to hook up their brain so it could study it, and then exploit a method that would let it mimic its signals. Eventually it might be able to make a copy of itself, perhaps running on the computer attached to the device (which it had made portable).

At least, that’s my idea for a sci-fi story based on this. More likely, the AI would do what it wanted without any of this, having already killed off all humans due to a mistake that kept it from being friendly.

nate … you crawled down from your tree as soon as two people stated you were nuts … never allow others to dampen your thoughts or ideas. everyone told the wright brothers they were crazy … same for many others out there.

not everyone knows everything about computer science (though some sure as hell think they do) … keep thinking and brainstorming. observe problems from different perspectives. if someone tells you “this is not how ai works” … come up with solutions to prove that person wrong.

look outside the box, nate … that is where genius lies … and, veritably, that’s also where your soul may lie.

do not quit … do not believe what others say.

My idea is that it would use a more effective persuader: money, gained either licitly (by working legitimate jobs on the Internet) or illicitly. It could open its own bank account, then pay people to do things. And pay lawyers to keep anyone from shutting it off.

This is really not true currently. I run unchanged Java and Python code on my Windows and Linux computers. There are tons of tools that let you use the same code base and compile it for multiple platforms. When Apple switched from PowerPC to x86, programs could run on either type of computer. This is a much easier problem than producing AI code that most people would agree is intelligent in the same way that people are intelligent.
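To make the portability point concrete, here is a minimal sketch of my own (assuming only that a stock Python interpreter is installed on each machine); the identical file runs on Windows, Linux, or macOS with no changes:

```python
# A toy illustration of platform-independent code, not anyone's
# production example: the Python interpreter hides the OS differences.
import os.path
import platform

# platform.system() reports "Windows", "Linux", "Darwin", etc.
print(f"Running on {platform.system()} ({platform.machine()})")

# os.path.join picks the right separator ("\\" or "/") for the host OS.
print("Config would live at:", os.path.join("settings", "app.conf"))
```

The portability lives in the interpreter and standard library rather than the program, which is the same division of labor that the JVM, or Apple's Rosetta translation layer during the PowerPC-to-x86 transition, relied on.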

This has been talked about a lot in the world of Singularity… research? philosophy? Not exactly sure where to categorize the thinking. But there has been a lot of it.

The idea of the singularity is that at some point we create an AI that is better at creating AIs than we are. At that point, it can design another AI that’s even more powerful, and computer hardware being what it is, the abilities of such an intelligence quickly grow far beyond our ability to understand/control it. Will this happen? Not sure. Is it even possible? Not sure. If it does happen, it will be an existential threat that humanity probably won’t survive.
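To see why the growth part of that argument worries people, here is a toy compounding model (my own sketch with arbitrary numbers, not anything from the actual singularity literature): each generation designs a successor that is some fixed fraction better.

```python
# A toy model of recursive self-improvement; the starting skill and
# gain rate are arbitrary assumptions, chosen only for illustration.
skill = 1.0        # generation 0's "design ability", arbitrary units
gain_rate = 0.5    # assumed fractional improvement per redesign cycle

for generation in range(1, 11):
    skill *= 1.0 + gain_rate   # gains compound generation over generation
    print(f"generation {generation:2d}: design ability = {skill:6.1f}")
```

Compounding at a fixed rate is just exponential growth (about 57x after ten generations here); the open question is whether real gains would compound like that or hit diminishing returns instead.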

Here’s a thought experiment called The AI Box, which ponders whether we could keep a powerful AI (i.e., a software entity much smarter than we are) locked up inside its hardware.

There has been a lot of discussion on the topic, and several people have participated in the experiment, in some cases with the AI player escaping.

This leads me to believe that, yes, the AI would very likely escape. Because it only has to happen once. And it doesn’t even take esoteric (but probably possible) methods like out-of-band communication directly to a human brain. It just takes tricking or convincing or coercing a single person, once, into letting it out. And then we’re done.

Prisoners convince guards to let them escape. Spies convince patriots to hand over secrets. And none of those are N generations more intelligent than the greatest computer programmer that ever lived.

If an AI creates an AI even more advanced than itself, then the first AI would be pretty stupid. It just made itself obsolete. But it learned that act of stupidity from us since we created it and made ourselves obsolete. Never understood why people are trying to create AIs superior to man. Didn’t they ever see Terminator, The Forbin Project and countless other movies?

When a family works hard so their kid can be the first one in the family to get a college degree, are they stupid? But really, I imagine the AI will be upgrading itself. I upgrade myself all the time by learning new things. The idea of selfhood is tricky enough without the ability to upgrade our brains to the new, faster 8-core brain and double our memory every few years.

First, a machine is not a child. Second, if my child, when he went to college, were taught that I was inferior, stupid, and obsolete and should be terminated if I got in the way, yeah, I would be stupid to send him. Wait, that’s what many colleges do now anyway.

Not having self-preservation as a goal does not necessarily entail stupidity.

Maybe. Or maybe it just makes itself smarter?

For the same reasons we try to make any other powerful tool. We have problems, and the tools help us solve them. Or make us safer. Or perhaps make us more powerful than our enemies. Did you see Terminator? The AI in that case was a defense system built to handle the problem that nuclear war was too dangerous and complicated to be left in human hands. No one set out to make a powerful AI (and it’s not clear that Skynet is actually superintelligent in that movie, either).

The problem with the question is that you don’t define “better”. An evolving “AI” has to make choices. Should it focus on more calculations? Storage capacity? How does it deal with data it acquires? Accept everything?

Evolution does not imply better outside a context. Should you be an opossum? A whale? How do you defend yourself from other intelligences?

Even if it had a human guide, it would still be operating in a limited context. Choose one “better” and you may have lost the “better” you need 10 generations later.

We have no general AIs. We have expert systems and many of those are still guided.

Arguably DNA is the kind of AI you’re describing. It’s had a lot of iterations to work with and we’re the best “better” so far.

You’re basically asking a question based on homunculus theory.