This morning I’m getting overwhelmed (frustrated) with “is AI ready to take over ATC?”; Paul McCartney talking about AI taking over music; the creative process; “SDCs”; etc.
At what point is biology going to be replaced with silicon? If AI can’t solve or perform something, then quantum computing in combination with AI should? What’s a plant or animal ultimately supposed to do, just breed?
There is a lot of silicon. And a good amount of carbon. Silicon will be mostly gone when we go back to cows and pigs and horses and have left technology behind. We will keep iron, to nail together sailing ships and see the world. Silicon is not useful at that point for anything but window glass.
Biological processes can be very efficient and are probably going to replace silicon. Some of us will be in the new biological systems. They just won’t be haphazardly created like we were.
At least we can boast that, as lifeforms, we’ve risen to the level of the diatoms: extracting silica from our environment and using it to prolong our otherwise unsustainable lives.
IMO the AI revolution is the third great human technological revolution.
The Neolithic Revolution was the first, when we developed agriculture.
Then we had the Industrial Revolution, when machines replaced biological muscle (both human and animal).
Now we have the AI revolution, when machines will replace human biological cognition.
It’s not one black/white turning point. Ever since the 18th century (arguably earlier, given things like windmills and water wheels), humans have been using machines to replace biological muscle, since the machines were vastly superior. Calculators have been better at basic math than humans for decades.
I feel we’re in a period where every year more and more tasks that used to require biological cognition can be done with machine cognition. But simultaneously, we are also developing bipedal robots that can do pretty much any physical labor that you need a human to perform.
I think it’s a safe bet that robotics and AI will vastly outperform biological humans by the end of the century. Probably much earlier.
What happens to us then? Who knows. Horses didn’t move onto more fulfilling jobs after the automobile came along. We just stopped breeding them. In 1910 there were about 28 million horses in the US. By 1960 it was down to about 5 million horses. We just don’t need them anymore.
Ideally? UBI where the government heavily taxes industry to fund the program.
But this being the US, we will probably get cyberpunk fascism and Elysium.
Thing is, electronic devices are non-proactive. They lack motivation. They only respond. The creative process is not really something that can be instilled into a machine. AIs lack insight, intuition and most importantly, inspiration, and it is not evident that they can ever acquire it.
Objectively speaking, nothing is “supposed” to do anything; things just happen. Purpose is a human invention, and it can be whatever we want it to be.
Nowhere near the present: despite the hype, so-called “AI” isn’t even close to being able to replace people. It’s just that corporations are determined to try to do so anyway, since they utterly despise having employees they are required to pay.
It depends on what areas we’re specifically discussing. If we’re talking about brute force calculation, then AI has already surpassed us. That’s why the best players of strategy games like chess, go, etc. are computers. But if we’re talking about having consciousness, awareness, and so on, in other words “being a person”, then it seems computers are still very far away. I think many animals are closer to having the things that make one a person than even the smartest computer.
If evolution by natural selection can unintentionally make consciousness out of dirt, then eventually humans will be able to make consciousness out of silicon (or carbon nanotubes, or whatever).
I think what truly matters is competence: how competent is AI at understanding situations and solving problems? On that front, AI seems to be advancing rapidly.
An AI doesn’t need to be sentient or conscious to be useful in helping humans solve their problems. It just needs to understand how reality works and how to help humans achieve their goals, the same way a car isn’t conscious but is vastly superior to walking.
Ascribing intention / indifference to natural selection is wrong-thinking. The development of consciousness is an almost inevitable effect of the process. Consciousness arises out of the survival instinct: beings that lack a survival instinct are less likely to live long enough to breed. There is nothing deliberate about it.
Natural selection can’t see past the current generation, while conscious humans can see well past the current generation. Also humans have a much wider range of elements and materials to work with while biology is mostly limited to 6 elements. Natural selection can’t take a short term loss for a long term gain, while human technology can.
Also, biological brains have to exist within a very narrow range of factors like pH, temperature, skull size, and energy supply, while human technology has no such constraints. Human technology can be redesigned from the ground up, while evolution requires building onto existing structures.
My point is that if evolution with all its limitations can create consciousness, then intelligence should be able to create it too and on a much quicker timescale. Also I’m sure intelligence will create consciousness on levels that we can’t comprehend.
Also, survival instincts existed long before consciousness. I am not sure what consciousness is; I don’t know if anyone knows. But if you assume it arises out of the neocortex, then it’s only about ~200 million years old. Brains themselves are ~600 million years old, and life forms had survival instincts long before brains and cortexes evolved.
Natural Selection is not a thing. It does not have goals or motivations or deadlines or anything. It is an effect. It does not do anything or see anything or want anything, it is just what happens to happen. It is in the same category as earthquakes or gamma ray bursts or global climate trends.
Except that we don’t know what consciousness is, which is a major problem. Consciousness, for all we can tell, probably involves a complex interplay between neural activity, biological function, hormonal influence, and perhaps some additional factor we have yet to identify. It is hard to even imagine how we could create a thing when we do not know what it is. We might be able to simulate something that seems to be consciousness, but could we even tell what it was?
Why would you assume that? I consider that assumption unfounded.
Probably. Possibly. How can we make that assessment? Biological function existed long before olfactory perception or social behavior patterns or sexual reproduction, but it led to those things. I am merely saying that the survival instinct is the foundation of consciousness, not its equivalent.
And yes, we could program a machine with the equivalent of a survival instinct, but I suspect that consciousness is quite a bit more complex than just that – it is the foundation, not the edifice.
First, we don’t even know if it’s possible to create a human level mind without consciousness. The fact that we are conscious implies it’s likely the default with a human level mind. Far from struggling to create a conscious human level AI, creating an unconscious one might well be the bigger challenge - or outright impossible. We don’t understand how it works in the first place, so we don’t know how difficult it is.
As for being unable to tell: we can’t tell that about other humans, either. How do you know that anyone but you is conscious? Answer: you don’t.
We should just assume it is if we can’t tell the difference since doing otherwise is a small step from deciding that other humans we don’t like “aren’t really people”. At least until we understand the mechanics enough that we can basically just say “No, Consciousness.exe isn’t installed” and leave it at that.
A survival instinct does not entail consciousness; no reason why a p-zombie can’t have a survival instinct, or at least behave in a way indistinguishable from having one.
That said, one of the leading hypotheses for why consciousness evolved, aside from being just a side effect, is that it allows for more considered decisions. E.g., damage to your body is absolutely something you try to avoid, but having a negative subjective experience of pain, rather than something akin to a knee-jerk reflex, means you can choose to put yourself through it if you are trying to achieve a greater goal.
Which is not what I am saying. “Arises out of” does not mean that it is the entire substance. I am saying that the survival instinct is the origin of consciousness, not its substance in toto. The survival instinct is a significant component of consciousness, but there are other components.
Consciousness would not, e.g., exist without some amount of ratiocination, so part of it must be rooted in neural function. But it is not itself a matter of computation, and, in fact, we cannot determine whether it is even a direct participant in the thought process or little more than a spectator. It seems to be a sort of singularity, with an indistinct event horizon, from which nothing actually issues beyond what the outer mind infers.
I was responding to your point that “Consciousness arises out of the survival instinct”. I don’t see any reason to suppose that (although, as I say, non-reflex goal-oriented behaviour might need subjective experience, but that’s just a hypothesis, and a rather more involved concept than a survival instinct).