That might be a valid point regarding architecture. But Turing proved that computation is substrate-independent, and non-biological computational power will soon vastly exceed that of human brains. So if there’s something critical about the architecture of human brains that we’ve missed, then once we figure it out we will be able to implement it in machine intelligence.
Sure. If.
But bear in mind that this is not a necessary condition for AGI. We can achieve AGI without ever understanding exactly how human brains do it.
Sure. My point is just that going about trying to do so by replicating our flawed understanding of only half a complete system doesn’t necessarily seem like a winner.
One possible scenario is that we manage to build an ASI that appears entirely human-friendly, with a cheerful, helpful front-line interface like Alexa that we think we can trust; but underneath is a completely different form of intelligence that has little or nothing to do with human consciousness, and might more closely resemble some sort of voracious insect predator, a swarm of bees, or a traffic control system.
The fact that we could talk to an ASI in friendly tones, and that it could replicate or even improve upon human artistic endeavours, does not imply that it is conscious or has anything in common with human mentality at all. Passing the Turing Test no longer looks like a useful capability.
Agreed. An AI’s neural network is a model, or simplification, of a brain’s neural network. This is obvious in at least one sense – normally the AI neurons don’t die or forget. I like to think of current AI neural networks as inspired by the brain, but not simulating it.
There is a sub-field of AI research devoted to modeling organic brains more accurately, and there are other AI building blocks that model other aspects of brain behavior. It has been a while, but I think a Boltzmann machine mimics the brain’s need to refresh memory.
I also agree that intelligence is substrate-independent. Many of the differences between AI and an organic brain might be essentially differences in implementation optimization – density, power, speed. Some might be differences in feature set. Perhaps a brain’s mechanism of forgetting is key to learning or creativity; presumably an AI could model this as well. Or perhaps an AI diverges from a brain and creates a slightly new type of intelligence.
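To make the “inspired by, but not simulating” point concrete, here’s a minimal sketch (plain Python, every name my own invention) of what a single artificial “neuron” actually computes: a weighted sum pushed through a squashing function. No spikes, no neurotransmitters, no chemical gradients, and nothing dies or forgets.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """The whole 'neuron' in a typical AI model: a weighted sum
    squashed by a nonlinearity. Everything else a biological neuron
    does (spiking, chemistry, growth, death) is simply absent."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Made-up example: three inputs, fixed weights
print(artificial_neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))
```

The gap between that bit of arithmetic and an actual cell is exactly why “model, or simplification” is the right description.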
Another critical divergence is the types of sensory input fed into an AI versus a brain. The AI doesn’t have to be as complex as the organic brain if it has access to more types of inputs or even greater fidelity inputs. Being able to process IR, Lidar, or other chunks of the RF spectrum just as easily as processing the visible spectrum can give an AI a huge boost without adding a lot more processing complexity.
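As a rough illustration of that last point (a toy sketch under my own assumptions, not any real sensor pipeline), extra modalities like IR or lidar often just become extra input channels stacked onto the same grid; the network downstream doesn’t need to get meaningfully more complex to consume them.

```python
import numpy as np

# Hypothetical sensor frames, all registered to the same 64x64 grid.
rgb   = np.random.rand(64, 64, 3)   # visible spectrum: 3 channels
ir    = np.random.rand(64, 64, 1)   # thermal/IR: 1 channel
lidar = np.random.rand(64, 64, 1)   # lidar depth: 1 channel

# Extra senses become extra channels; the model's first layer simply
# sees a 5-channel image instead of a 3-channel one.
fused = np.concatenate([rgb, ir, lidar], axis=-1)
print(fused.shape)  # (64, 64, 5)
```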
To the point of the OP:
I think AI poses a threat to humanity prior to developing ASI. We don’t need to get to the point of a paperclip AI exhausting the world’s resources to be in danger. There will be a lot of advanced AIs developed before the paperclip AI and many of these will destabilize society’s norms.
Long before that point, we would have AIs that are generating literature, news stories, video games, entire CG films with a few keywords and whatever actors you specify. We would have AIs eliminating most white-collar jobs – lawyers, programmers, engineers, traders, investors. There would also be large impacts on customer-facing jobs like teachers and doctors.
If AI becomes table stakes and anyone can spin up a competitive AI service, we could unlock a new era of economic mobility. However, if AI has a large barrier to entry and only a few large organizations control it, then we might fall further under the control of corporations and governments.
Humans are very adaptable, but it is unclear if we can adapt smoothly at that rate of change. Will the gap grow between privileged and unprivileged? We are still struggling to adapt to the internet 30 years after its widespread adoption, 15 years after the explosion of social media. We’re still in the toddler stages of dealing with misinformation.
A seconded vote for “are we sure we really do know exactly how neurons work?”. For starters, let’s model a fruit fly’s nervous system and see if it behaves like a fruit fly instead of randomly twitching.
Are you guys familiar with this?
That’s Langton’s ant [link below] with a very simple set of rules:
• At a white square, turn 90° clockwise, flip the color of the square, and move forward one unit.
• At a black square, turn 90° counter-clockwise, flip the color of the square, and move forward one unit.
And yet, even with such simple rules we do not know why it eventually behaves the way it does, and it’s not for lack of trying.
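If you want to watch that happen, here’s a minimal simulation of those two rules (plain Python, no dependencies; the grid representation and step count are my own arbitrary choices). For roughly the first 10,000 steps the ant wanders in an apparently chaotic mess, then it abruptly locks into the famous repeating “highway” pattern, and nobody has a satisfying explanation of why.

```python
# Langton's ant: white -> turn right, black -> turn left,
# flip the square's color, then step forward one unit.
def langtons_ant(steps=11000):
    black = set()            # coordinates of black squares; everything starts white
    x, y = 0, 0
    dx, dy = 0, -1           # facing "up" (y grows downward)
    for _ in range(steps):
        if (x, y) in black:  # black square: turn counter-clockwise, flip to white
            dx, dy = dy, -dx
            black.remove((x, y))
        else:                # white square: turn clockwise, flip to black
            dx, dy = -dy, dx
            black.add((x, y))
        x, y = x + dx, y + dy
    return black

print(len(langtons_ant()))   # black squares left after ~11,000 steps
```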
As I see it, this can be used to argue both for and against the danger of AI:
AI is bad: If we can’t even understand this, how can we hope to grasp AI?
AI is not bad: Since our understanding of something as simple as this is so poor, the possibility that we will actually build something that mimics intelligence, attains self-awareness, and has volition must be regarded as vanishingly small.
A proper response to this requires discussion of the role of speculation in science, and from there we can home in on the special (and IMO unique) case of speculation and falsifiability in artificial intelligence. This will take some time and is worth a followup column. From a general interest standpoint it won’t rank up there with gerbil stuffing but it definitely touches on a matter of importance. Look for something on Friday.
There’s no way you can just dismiss Cohen’s predictions (or any other prediction about AI that I can think of) as unfalsifiable. I don’t think the standard of falsifiability has any relevance here. “Unfalsifiable” is not synonymous with “wildly speculative”.
You can certainly call them “wildly speculative” or “completely wrong”, but to make that case I think you must simply address Cohen’s evidence and reasoning on its merits.
There are also glial cells, which outnumber neurons by about 10:1. They do lots of things to support the brain, including trimming off synapses.
Yes, they’re part of the “complicated analog chemical gradient machines” unaccounted for.
Although it seems the ratio isn’t quite that big, based on recent research; I haven’t looked into it enough to find any counterarguments.
You forgot to change to your Cecil sock.
This sort of warning rings a bit hollow to me.
10,000 years ago, our ancestors figured out agriculture, and cities. Take a farmer or citizen from ancient Assyria, or Babylon, or Persia, or Rome, or any medieval kingdom, or a modern day farmer or city dweller. NONE of them would be able to effectively survive alone in the harsh conditions of the Kalahari; and yet, hunter gatherer tribes do, and have, for tens of thousands of years.
Does the fact that you or I wouldn’t last a week in the desert or the jungle, but a hunter gatherer would, mean that the last 10,000 years of societal progress have been in vain?
Does the fact that you have to go to Home Depot to buy your tools, because you lack the very basic flint-knapping skills that your earliest hominid ancestors had already mastered, mean that progress has been in vain?
I think you’re right that we will come to rely on AI to do more and more things; with Brain Computer Interface tech, the line between your brain and enhanced AI modules could become even blurrier.
But the fact that we’ve made a tool that makes us better at doing Action X overall, but worse at doing Action X with our bare hands - that’s a story older than humanity itself.
To me, the fact that 20-30% of the population are actively trying to train AIs to hate the same people they hate is just as worrisome as, if not more worrisome than, the “overproduction via bad code” issue:
You are proposing Rogue AI as a Fermi Paradox solution here, but it just doesn’t work.
If a super intelligent AI emerged and conquered a planet, we could assume that one of its goals is to ensure its own survival (after all, you killed off your creators for a reason, right?). Well, you’re a computer, you’re running on electricity. So presumably you put solar panels all the way around your home star, creating a Dyson sphere.
But wait, one day your star will die, and so will you! How can you ensure your survival?
Well, you can’t, due to the heat death of the universe. Eventually all usable energy in the universe will be gone, and then your calculations will stop for good.
But you can greatly delay that time, by sending probes out to steal the hydrogen from nearby stars, to use as fusion fuel in your own star. So you’ll grab all the stars you can reach and use any of a number of techniques to siphon off some hydrogen to take back home.
I haven’t seen Way of Water, but holy shit, this makes so much sense.
I’ll tell you what’s inefficient - letting trillions of stars, each of which has enough hydrogen that it could keep your computers running for trillions of years, simply burn away into nothingness. One day the last star you can reach will die, and you’ll be kicking your own virtual butt if you didn’t go and gather all the resources you could theoretically need when you still had time.
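A quick back-of-the-envelope for why “trillions of years per star” isn’t an exaggeration (my own rough numbers and assumptions, nothing more): fusing hydrogen releases about 0.7% of its mass as energy, and a Sun-like star only ever burns a small fraction of its hydrogen before dying. Harvest the whole supply and ration it, and a single solar mass of hydrogen covers an enormous power budget for a very long time.

```python
# Back-of-envelope: how long could one Sun's worth of hydrogen power a computer?
# Every figure below is a rough, order-of-magnitude assumption.
C = 3.0e8                      # speed of light, m/s
SOLAR_MASS = 2.0e30            # kg
HYDROGEN_FRACTION = 0.7        # rough hydrogen mass fraction of a Sun-like star
FUSION_YIELD = 0.007           # fraction of fused mass released as energy
SECONDS_PER_YEAR = 3.15e7

total_energy = SOLAR_MASS * HYDROGEN_FRACTION * FUSION_YIELD * C**2  # joules

# Assume a power draw vastly beyond today's civilization (~2e13 W).
power_budget = 1.0e22          # watts, an arbitrary "planet-scale computer"
years = total_energy / power_budget / SECONDS_PER_YEAR
print(f"{years:.1e} years")    # a few times 10^15 years under these assumptions
```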
Entropy is the Great Enemy, and it cannot be defeated - only slowed.
Here’s an analogy. You’re on a cold, cold world. It’s freezing outside, and you would certainly die without a source of heat.
Luckily, the surface of your world is covered in coal deposits. They’re absolutely dotting the place, with a big deposit every few miles.
Unfortunately, all of these coal deposits are on fire, slowly burning away.
You live by one of these coal seams, and you are only able to survive because of its warmth.
Assuming you have the resources to tackle either task, what is the smarter course of action?
- Live out the rest of your days by the coal seam you’re currently using. When it finally extinguishes, your descendants can find the nearest still-burning coal seam and hang out there until it burns out too; they can repeat this process until they run out of nearby coal seams and die.
- Travel to nearby coal seams, extinguish the fires, harvest the coal, and bring it home. Then burn the coal in a generator to survive long after the natural fires burn out.
This is the position the AI is in. It would be stupid not to travel to other stars to harness their resources, as soon as possible. Every second that the stars in the galaxy continue to twinkle in the sky is trillions of years of future processing power lost to entropy.
ETA: this is what I get for not reading the thread all the way through before posting; k9bfriender has already made this point excellently:
That AI you propose isn’t very forward thinking, is it? Given how powerful it is, it’s very unlikely to ever be killed. Well, trillions of years from now, after the last red dwarfs have died, your AI is gonna run out of juice. Is it going to think that missing out on trillions of years of consciousness is worth it because it got to chill?
Didn’t stop the Greek titans and gods from making those same mistakes, did it?
Obviously those are fictional stories, but they sprang from the minds of humans, and as you say, the AI will have come from the human mind as well.