Quote:
Originally Posted by Tripler
I offer that "when" humans are "converted to a computer" is completely dependent on the more pertinent question of "if". If you differ, please make your argument.

Tripler
An open discussion, SamuelA.
Oh. That's simple. This is right up your alley, even. The base subunit in your brain does the following roughly 1,000 times a second: an electrical signal arrives at a synapse. Synaptic vesicles dock in response and dump a neurotransmitter into a very narrow gap. Diffusion carries the neurotransmitter across, and an electric charge is added to or subtracted from the receiving neuron.

This is the same thing as Receiver = MAC(Sender), then Branch(Receiver).
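To make the analogy concrete, here is a minimal Python sketch of that "multiply-accumulate, then branch" step, assuming a simple weighted-sum-and-threshold neuron model. The names (neuron_step, weights, threshold) are illustrative only, not any particular library's API.

def neuron_step(inputs, weights, threshold):
    """One update: accumulate weighted inputs (MAC), then branch on a threshold."""
    total = 0.0
    for x, w in zip(inputs, weights):
        total += x * w                        # MAC: Receiver += Sender * weight
    return 1 if total >= threshold else 0     # Branch(Receiver): fire or stay silent

# Example: three presynaptic spikes with mixed excitatory/inhibitory weights
print(neuron_step([1, 1, 1], [0.6, 0.5, -0.3], threshold=0.7))  # -> 1 (fires)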

We can, right now, today, trivially make computer chips that do this fundamental operation in one clock cycle while running at ~2 GHz. Most modern GPUs run at between 1.2 and 2 GHz and contain thousands of hardware subunits doing this very operation.

You need not thousands of these subunits but trillions - a vast data center crammed full of custom chips that would resemble GPUs in some ways. But with a Manhattan Project style effort, you could actually build a machine with the same scale and scope as a brain.
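For a rough sense of the scale, here is a back-of-envelope estimate. Every number in it is an assumption I'm plugging in for illustration (synapse count, event rate, per-chip throughput), and the result is very sensitive to them.

synapses          = 1e14      # assumed synapse count for a human brain
events_per_sec    = 1e3       # ~1,000 synaptic events per second each, as above
brain_ops_per_sec = synapses * events_per_sec             # ~1e17 MAC/branch ops per second

mac_units_per_chip = 5e3      # thousands of MAC units per GPU-class chip (assumed)
chip_clock_hz      = 2e9      # ~2 GHz
chip_ops_per_sec   = mac_units_per_chip * chip_clock_hz   # ~1e13 ops per second

print(f"{brain_ops_per_sec / chip_ops_per_sec:,.0f} chips")  # on the order of 10,000 chips

That is data-center scale, not desktop scale, which is the point.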

The reason this is up your alley is that the biggest weapon on the planet isn't nukes; it's the human mind that allowed us to bang rocks together until we had nukes.

You still have to program a computer that has the same physical capability as the brain with the algorithms that make it sentient like the brain - a far harder task than building the raw hardware, which is why we have not done it yet. But when that problem is solved, it would be roughly the same relative advance as going from conventional to nuclear weapons.

A machine mind that runs at 2 GHz would be roughly 2 million times quicker, give or take (2 GHz divided by the brain's ~1 kHz event rate is 2,000,000). It would make the nation that had just one - a mind with the same capability as one human but 2 million times quicker - unbeatable, given time to take advantage of it.

You know the idea of a Gantt chart, right? The key idea here is that every complex project, whether it's a new jet fighter, an anti-ballistic missile, or some other strategic-level weapon, is limited by a single "critical path" of steps that must be done in sequence. You can put the best people in the world on that path and work them 16 hours a day, but it will still take years to decades to develop a major new weapon to a deployable state.
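For anyone who hasn't worked with one: the critical path is just the longest chain of dependent tasks, which you can compute directly. The tasks and durations below are made up for illustration.

from functools import lru_cache

durations = {"design": 24, "prototype": 12, "test": 18, "tooling": 10, "deploy": 6}   # months
depends_on = {
    "design":    [],
    "prototype": ["design"],
    "tooling":   ["design"],
    "test":      ["prototype"],
    "deploy":    ["test", "tooling"],
}

@lru_cache(maxsize=None)
def finish_time(task):
    """Earliest finish: own duration plus the latest prerequisite finish time."""
    return durations[task] + max((finish_time(d) for d in depends_on[task]), default=0)

print(max(finish_time(t) for t in durations))   # 60 months on the critical path

No matter how you parallelize the off-path work, that 60-month chain sets the schedule.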

So if you had a super-AI that could do the key process steps and get you new prototypes in hours, where you just have to wait for them to be automatically fabricated, you could compress that timeline down to probably months per generation of weapon. You'd do similar compression for the factories that build you more computers so you can have more AI nodes, factories that make you more factories, and so on.
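The arithmetic behind "months per generation" looks roughly like this, using the 2-million-fold speedup from above; the split between thinking time and fabrication time is an assumption for illustration, not data.

speedup            = 2e6    # 2 GHz vs ~1 kHz, from the earlier comparison
human_design_years = 5      # assumed serial design effort per weapon generation
fab_wait_months    = 3      # assumed wait for automated fabrication of prototypes

design_minutes = human_design_years * 365 * 24 * 60 / speedup
print(f"design: {design_minutes:.1f} minutes of machine-speed thought")   # ~1.3 minutes
print(f"per generation: ~{fab_wait_months} months")   # fabrication becomes the bottleneck

Once the thinking is effectively free, the generation time is whatever the fabrication step takes.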

The logical thing to do would be to develop enough defense capability against nukes that you could then start a world war and beat everyone else. A few nukes getting through your defenses won't knock you out, because the only things that matter are these self-replicating factory nodes and AI nodes, and just one of each has to survive, since they can copy themselves.
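Why does one surviving node matter? Because self-replication is exponential. The replication rate below is a made-up placeholder, just to show the shape of the growth.

nodes = 1                 # a single surviving factory/AI node
doublings = 12            # assume one replication cycle per month for a year
for _ in range(doublings):
    nodes *= 2            # every surviving node builds one copy of itself per cycle
print(nodes)              # 4096 nodes after a year, starting from a single survivor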

All the logistic problems with invading every nation on earth at the same time and controlling every surviving human after you win go away when you can do it all with machine intelligence.

This is one scenario. There are many others. But the lure of it is very, very tempting to a lot of nations for national defense reasons.

What are the possible reasons that this won't happen? Because it will happen, unless something like one of the following occurs:

a. A nuclear war ends civilization first
b. It turns out that human beings have mystical 'souls' that provide us our sentience
c. All the major powers agree that AI research is too dangerous and refuse to do it and nobody cheats and everyone honors the agreement and a world police force is formed to inspect all nations.
d. It turns out that the problem is too hard and you can't just write an algorithm you can describe in a few pages that kicks the ass of any human alive at a well-defined task. Oh, whoops, you can.
e. It's going to take so long that you and I will both be dead of old age first.

Most board members who think about this probably just assume (e) is the answer, to be quite frank. And I can't deny the logic: progress on this seems to be accelerating dramatically, but I can't say whether it will keep accelerating and we hit machine sentience before 2030 or not.
