I disagree. To me, the only benefit of specialized hardware is speed. Why wire up a neural net when you can simulate it on a computer? You can either simulate the flow of electricity through the wires, or simulate the activity at a higher level, or abstract it completely and use data structures that just mimic the layout.
The number of states is irrelevant: a binary computer is fundamentally equivalent to a 16-state Turing machine, or a ternary computer, or any other deterministic computing device you can imagine. Any problem you can solve on one can be solved on another.
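The equivalence claim can be made concrete with a sketch. Below is a minimal, hypothetical Turing machine simulator; the same loop runs machines with 2 symbols, 16, or any other number, so the size of the alphabet changes only the encoding, never what can be computed.

```python
# Minimal Turing machine simulator (illustrative sketch; names and the
# example machine are made up for this post, not from any real system).
def run_tm(rules, tape, state, blank, halt, max_steps=1000):
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, blank)
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
    return [tape[i] for i in sorted(tape)]

# A 2-symbol machine that flips bits until it reaches a blank cell:
rules = {
    ("flip", 0): ("flip", 1, +1),
    ("flip", 1): ("flip", 0, +1),
    ("flip", "_"): ("done", "_", 0),
}
print(run_tm(rules, [1, 0, 1, "_"], "flip", "_", "done"))  # [0, 1, 0, '_']
```

Swap in a rule table over a 16-symbol alphabet and the simulator is unchanged, which is the whole point.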
As an example of hardware being irrelevant, my old Apple 2+, if it were connected to a large enough hard drive, could run Quake3, the latest graphics monster, which can bring a current $2500 PC to its knees. It would take a LONG time, and it would have to render the graphics into files on the disk because it couldn’t display them, but it could be done. This is despite the fact that the Apple 2 had ~60k of usable RAM and the smallest textures in Quake3 are larger than that. Quake3 also requires a floating-point co-processor and a high-end 3D card. No matter. Everything could be emulated, including the 32-bit Intel x86 instruction set, on that 8-bit Apple 2.
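The heart of any such emulation is a fetch-decode-execute loop. Here is a sketch for a made-up three-instruction machine (the instruction set is invented for illustration, nothing like real x86): one CPU's instructions become data that another CPU interprets, slower but functionally identical.

```python
# Toy fetch-decode-execute loop for a hypothetical 3-instruction CPU.
def emulate(program, regs=None):
    regs = regs or {"A": 0, "B": 0}
    pc = 0                                # program counter
    while pc < len(program):
        op, *args = program[pc]           # fetch + decode
        if op == "LOAD":                  # LOAD reg, constant
            regs[args[0]] = args[1]
        elif op == "ADD":                 # ADD dst, src
            regs[args[0]] += regs[args[1]]
        elif op == "HALT":
            break
        pc += 1                           # advance to the next instruction
    return regs

prog = [("LOAD", "A", 2), ("LOAD", "B", 3), ("ADD", "A", "B"), ("HALT",)]
print(emulate(prog))  # {'A': 5, 'B': 3}
```

A real x86 emulator is this same loop with a few hundred opcodes instead of three.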
Similarly, an AI built for high-end specialized future hardware could be simulated on our mainframes, but more slowly, and could be even more slowly simulated on our PCs of today, or on our PCs of yesterday, just getting slower by a few orders of magnitude each time.
The only thing which isn’t equivalent is storage; everything else can be emulated. Even RAM isn’t needed, past a certain critical amount needed to run the emulator, because information can be read in from the drive.
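The "RAM on disk" trick is simple enough to sketch: back the emulated memory with a file, and the emulator only needs enough real RAM for itself. (The class and method names here are illustrative, not from any real emulator.)

```python
# Sketch of disk-backed memory: every read/write goes through a file,
# so real RAM is only needed for the emulator's own working state.
import tempfile

class DiskMemory:
    def __init__(self, size):
        self.f = tempfile.TemporaryFile()
        self.f.write(b"\x00" * size)      # zero-filled backing store

    def read(self, addr):
        self.f.seek(addr)
        return self.f.read(1)[0]

    def write(self, addr, value):
        self.f.seek(addr)
        self.f.write(bytes([value]))

mem = DiskMemory(1024)
mem.write(100, 42)
print(mem.read(100))  # 42
```

Every access costs a disk seek instead of a RAM fetch, which is exactly where those "few orders of magnitude" of slowdown come from.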
You and SingleDad discuss this for a while, not really reaching any conclusions…
My take on it is that quantum interactions are at the base of everything. A wire isn’t a pipe that an electron flows down like water; not only does it interact at every step of the way at a subatomic level, but no doubt at every even smaller step of the way at a quantum level. But we can still pretend, for almost all purposes, that electrons simply flow like water. The quantum effects are simply included in what we think of as subatomic interactions, and thus in macro interactions.
If you go small enough, electrons tunnel instead of flowing nicely in a line as we’d like, but this behaviour can still be predicted; after all, we’re not dealing with a single electron.
We don’t have to have a unified theory that explains why quarks do what quarks do in order to understand, at some level, how the particles they make up work.
Thus, I believe that even if the brain depends on many quantum interactions, and can’t be perfectly understood without a perfect understanding of quantum mechanics, we could still build a simulation that does what a neuron does.
After all, biochemical interactions take place at scales many orders of magnitude above the quantum level. Fairly large doses of chemicals are involved, and comparatively large electric pulses. The larger an effect, the more easily it is modelled, because statistical analysis gets more accurate.
Anyways, I believe we could make a model of a neuron without having to understand every tiny detail of it. If human brains are capable of withstanding fairly large fluctuations in neurotransmitter levels, and keep working even with toxins present, or with chunks of brain actually missing, then I think they’re fairly robust, and what’s robust is often fairly easy to model, because it doesn’t depend as much on finicky details.
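A standard example of such a coarse-but-useful model is the leaky integrate-and-fire neuron. It deliberately ignores quantum and nearly all biochemical detail, yet still captures the macro behaviour (accumulate input, fire, reset). Parameters below are illustrative, not measured values.

```python
# Leaky integrate-and-fire neuron: a deliberately coarse model that
# still reproduces the robust macro behaviour of a real neuron.
def simulate(inputs, threshold=1.0, leak=0.9):
    v = 0.0                  # membrane potential
    spikes = []
    for i in inputs:
        v = v * leak + i     # leak a little, then integrate the input
        if v >= threshold:
            spikes.append(1)
            v = 0.0          # reset after firing
        else:
            spikes.append(0)
    return spikes

print(simulate([0.5, 0.5, 0.5, 0.0, 1.2]))  # [0, 0, 1, 0, 1]
```

Note that small perturbations to the inputs or parameters mostly leave the spike pattern alone, which is the robustness argument in miniature.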
I have to agree with Joey here. Not because I think a sentient computer is less than human, but because I think it’s different than human. Human to me is a species name, not a condition of sentience.
But, a sentient computer IMHO would deserve human rights, which have nothing to do with the species, but have to do with the rights we deem appropriate for the sentient creatures we live with, who just at this point all happen to be human.
I doubt this is true, or ever would be true. Static storage has always been cheaper than network storage, and I can’t see why it would change. And if it did, I think you’d get static storage that was just miles of network cable and a tiny router, packaged in a little box, which you accessed in exactly the same way as a hard drive.
Putting data out on a network is never a great solution for privacy, security, or data integrity.
And there are many ways to hide data locally. Likewise, there are just as many ways to watch network traffic. A rogue AI wouldn’t be any better off using up network bandwidth to hide data than it would be using up local storage to hide it.
Certainly long-term storage could be handled by other computers. Any smart AI would diversify, storing multiple copies of important memories. In fact, a smart AI would probably keep a trickle-backup going at all times, so there was always another ‘sleeping’ copy of itself for backup purposes. (I know I’d do that if braintaping and cloning technology were available… but that’s a whole new debate.) This network traffic would be mostly one-way, for storage purposes, or non-realtime, with the AI running search processes through long-term memory for information it anticipates needing. Storing yourself off-site completely, though, means you’d essentially die if some idiot backhoe driver cut the network cables.
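The diversified-backup idea reduces to replication plus a checksum. Here is a sketch, with the "nodes" played by plain dicts (everything here is invented for illustration, not a real protocol): each chunk is content-addressed by its hash and copied to several stores, so losing any one node or link never destroys the only copy.

```python
# Sketch of replicated, checksummed backup across independent stores.
import hashlib

def backup(chunk, nodes, copies=3):
    key = hashlib.sha256(chunk).hexdigest()   # content-addressed key
    for node in nodes[:copies]:               # replicate to several nodes
        node[key] = chunk
    return key

def restore(key, nodes):
    for node in nodes:                        # any surviving replica works
        if key in node:
            return node[key]
    return None

nodes = [{}, {}, {}, {}]
key = backup(b"important memory", nodes)
nodes[0].clear()                              # one node "dies"
print(restore(key, nodes))  # b'important memory'
```

The hash also lets the restorer verify it got the memory back uncorrupted, which matters more than usual when the data is you.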
Quite right. In fact, an idiot AI in missile-command computers would be a pretty big threat.
A smarter AI might actually be less harmful, being better equipped to see the ‘big picture’ and to understand delayed gratification: if it works with us, we’ll build more hardware for it, so it’s better off cooperating than trying to eliminate us over a perceived threat.
No doubt. Even an ‘if-then’ statement, your basic AND-gate level sentience (or lack thereof) could kill us if the ‘if’ became true and the ‘then’ was deadly.
But, a sentient computer would be able to reason (n