Artificial Intelligence and evolution

There isn’t a single organism in history which can correctly claim “I am the first human.” All we see looking backward are things that become less and less human-like the further back we go, on a kind of spectrum with no clear absolute divide. Any creature who would claim to be human has a mother that looks and behaves the way the offspring does, without any particular characteristic that makes the offspring human, and the mother non-human.

I think the same will be true for artificial intelligence. There’s not one threshold that something can cross to be considered “true” intelligence, where everything on this side of the line is and everything on that side isn’t. We’ll continue to develop more and more complex software and advanced hardware, with ever more clever ways to simulate intelligence, until at some point everyone realizes that we’ve been working with actual honest-to-goodness machine intelligence for some time now, and nobody will be able to point conclusively to the exact moment in history when it happened. Plenty of very smart people will have very persuasive arguments for a particular moment being the actual threshold, but they will not necessarily agree with each other, and the topic will be controversial for however long it remains relevant.

So what’s the debate?

I guess none, if you agree with me.

I presume you’re anticipating disagreement, and undoubtedly there are some Luddites around who will claim that “machines can never be intelligent” or try to draw some ridiculous parallel between intelligence and biological drives. But if your point is that intelligence is a continuum, then you’re absolutely correct, and we’ve definitely achieved many specific focused goals of machine intelligence already. I always find it amusing when these achievements are dismissed because “it’s just a computer with lots of memory running a program”. So says the organism carrying around in its skull a meat-based computer with lots of memory running a program.

But this thread seems to be about artificial intelligence and evolution. There is at least one difference between artificial intelligence and natural intelligence. Natural intelligence is the result of biological evolution whereas artificial intelligence is the product of human development. Artificial intelligence is not evolving in the sense that natural intelligence is.

This is going to sound kind of woo-ish, probably, but the way I see it, it’s memetic evolution rather than genetic evolution. Artificial intelligence, being a research area, advances through the proposal and acceptance of ideas (and fads) in the AI/machine learning research and engineering community.

Replace intelligence with “consciousness” and there would be more room for argument I think.
(Can machines ever be conscious? Does machine consciousness exist on a spectrum?)

I think it is. Or at least, it will be. Just because we are the selective pressure doesn’t mean it’s not a selection process every bit as valid as natural selection. Although it’s true that biological evolution isn’t geared toward a particular purpose, I would argue that artificial intelligence isn’t really a particular goal or purpose in itself, either. I think we’ll end up with artificial intelligence as a side effect of the things we’re really aiming for, like increased productivity, automated vehicles, market trend predictors, better video games, etc.

Yes, and yes! Why wouldn’t they, and why wouldn’t it? To argue that consciousness doesn’t exist on a spectrum is to argue that either 1) all human ancestors were conscious, or 2) there was a creature who was conscious, whose parents weren’t.

The mechanism for gradual improvement is irrelevant, only that there be incremental steps towards AI.

Agreed. It’s sort of the same way that the automobile or the airplane have “evolved.” They have gone through a long series of small changes, adding up to large changes in the aggregate, such changes resulting in increased viability of the model in question, in a highly competitive environment.

Cars and planes have “descent with modification” that is similar to the descent with modification of living species. The “inheritance” is seen in lines on blueprints, not via DNA.

It’s valid to call this “evolution.” I guess it’s also valid to be hyper-pedantic and say, “No, it isn’t.”

Outside of something supernatural, I have a hard time thinking humans are unique in self-awareness and what we consider free will.

By that reasoning, when should we start getting concerned that these increasingly intelligent things now used as thinking slaves might soon have, or already have, some level of awareness?
Imagine waiting too long to give moral consideration to some now highly conscious AI, creating countless copies, and leaving them all to be controlled, altered or deleted without a second thought.

In my opinion, the distinction I’ve mentioned is crucial.

The issue is highly controversial for a host of reasons and the fact that intelligence is defined differently by different critics makes things completely murky.

My definition of intelligence does not favor computers at the moment, because they’re not able to assimilate paradoxes, work out problems satisfactorily, or deal with completely new problems successfully.

In today’s economic context, the development of artificial intelligence is driven by profit. In contrast, natural intelligence evolves so that a species as a whole can thrive. As a result, the stupidest bug on earth is more likely than the smartest robot to survive a natural disaster.

The rest is human vanity.

While I cannot disagree with your statement (who is the tallest short man on this site?), I felt that a threshold had been crossed when I read about the Go-playing computer. It’s not just that Go is harder to analyze than chess, but that it was done with a neural net with several layers, and when it had finished self-programming (by playing millions of games against itself and using the results to tune the net), no one could say how it worked. This is so different from Deep Blue that I feel a boundary was crossed. A lot of people agree with this.
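For anyone wondering what that self-programming loop actually looks like, here’s a deliberately toy sketch in Python. To be clear, this is my own illustration, not the Go program’s actual algorithm: the “game” is a trivial counting race and the “net” is just a table of state values, but the shape of the loop is the same one described above: play games against yourself, then use the outcomes to tune your evaluator.

```python
# Toy sketch of a self-play training loop (an illustration, not AlphaGo's algorithm).
import random
from collections import defaultdict

values = defaultdict(float)   # state -> estimated value for the player about to move
LEARNING_RATE = 0.1

def legal_moves(state):
    # Stand-in "game": a race to 10; each move adds 1 or 2 to a shared counter,
    # and whoever reaches 10 first wins.
    return [1, 2] if state < 10 else []

def play_one_game():
    """Self-play one game, mostly following the current value table."""
    state, player, history = 0, 0, []
    while legal_moves(state):
        history.append((state, player))
        moves = legal_moves(state)
        if random.random() < 0.1:                        # occasional exploration
            move = random.choice(moves)
        else:                                            # move to the state worst for the opponent
            move = max(moves, key=lambda m: -values[state + m])
        state += move
        player = 1 - player
    winner = 1 - player        # the player who made the final move wins
    return history, winner

def update_from_game(history, winner):
    """Nudge each visited state's value toward the observed outcome."""
    for state, player in history:
        outcome = 1.0 if player == winner else -1.0
        values[state] += LEARNING_RATE * (outcome - values[state])

for _ in range(50_000):        # the real thing used millions of games of Go
    update_from_game(*play_one_game())

print({s: round(v, 2) for s, v in sorted(values.items())})
```

After enough games the value table encodes which counter positions are winning or losing, even though nobody wrote those judgments in by hand; the “no one could say how it worked” part comes from the fact that, at the scale of a real game like Go, the tuned parameters are far too numerous and entangled for anyone to read the strategy back out of them.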

How does one test a machine for self awareness?

How do you test me for self-awareness?

You’ve never met me in person; I could be a chatbot, for all you know; but while I’ve been posting here for years, nobody’s yet accused me of being too machinelike to pass a Turing test. If we ever get a machine that’s effectively indistinguishable from a warm and witty conversationalist like yours truly, then I figure we (a) would maybe have to shrug; and (b) would maybe fail to even realize a machine is passing our tests right in front of us.

<click . . . whir . . . beep>

But exactly what boundary was crossed? Was it a scale of decision-making complexity? An ability to optimize the intrinsic logic of a neural network for efficient analysis? Some kind of fundamental breakthrough in how computing logic can be performed by expansive neural networking? And how does that relate to what we know as ‘intelligence’, i.e. the multifunctional and highly distributed cognitive processes in human (and other animal) brains?

I’m not sure that self-awareness, per se, should be considered a necessary condition for ‘intelligence’ by any useful metric. We can build systems that appear quite ‘intelligent’ at interpreting vague commands to match user expectations, as anyone typing requests into the Google search engine or using an Xbox Kinect can see, but these systems clearly have no self-awareness or volition. Generally speaking, we would probably avoid building systems with a high degree of volition and would instead limit their functional autonomy, because for most practical applications we want or need some kind of direct override, e.g. a kill switch, to force an application or device to do what we want it to do rather than whatever its internal programming has it doing. I’m not certain we’ll ever build machines with actual self-awareness, or indeed, whether we ourselves have some discrete process that makes us aware of our own cognition, rather than a layered collection of processes of increasing sophistication which give us the impression of self-awareness and free will while still controlling our interpretation and decision making at a more primitive and fundamental level.

And if we could build machines with genuine self-awareness and autonomy, it’s not clear that we should, not because the robots might “take over” in some kind of deliberate revolt, but because we don’t really know how to deal with a nascent intellectual capability in an appropriately ethical fashion, and because in doing so we may further render ourselves obsolete, losing some of the last remaining talents that distinguish us from our tools. The ultimate moral of Kubrick’s existential horror film, 2001: A Space Odyssey, isn’t that computers are dangerous or that aliens are going to invade and destroy Earth, but that we will fail to understand the implications of the tools we create, and accidentally render them more of a danger than an aid to ourselves.

Stranger

I’m not familiar with the details of the Go-playing computer, but if the implementation was some kind of simulated neural net, that isn’t really relevant, and I particularly object to notions that the implementation has to somehow parallel the operation of the biological brain. The objective measure of AI is purely behavioral; the implementation is irrelevant. And simple AI systems have had learning capacity of sorts since the ’60s. Directed training was a big part of making Watson as good as it was, and whether you want to acknowledge that as “learning” or not, it did result in an enhanced capability about which one could reasonably say that “no one knows how it works,” in the sense that it could not have been created that way by any known initial design.

You ask it. Of course a machine might be disposed to lie, so more accurately, you decide what your criteria of consciousness are and comprehensively test for them. I would expect to see autonomously developed insights and preferences, though probably very alien to our biological ones. A machine would likely be self-aware without having any hint of our powerful biologically driven survival instincts, for example, so it might have no problem being turned off for the night or permanently retired and used for parts. But this would be unrelated to its capacity for introspective philosophical musings. Perhaps the problem of disposing of self-aware intelligent AIs would not be opposition by the machine, but our human anthropomorphizing of them.

I believe this to be unrelated to the warnings about the dangers of AI we’re hearing from some of our technological luminaries. The danger isn’t that the machines will seek to dominate us in some way; it’s that computers of all forms are already indispensable to civilization as we know it and are becoming increasingly so. AI will just escalate this to a whole new level.

ETA:

Saw this after I posted. I agree, and I think basically it says much the same thing I touched on in the last paragraph, but you said it better.

The difference is that computers are built, not born. And generations of computers don’t spontaneously mutate and evolve on the factory floor. Each one is built exactly like the others of its model until the designers change the design or otherwise modify it.

So when a computer meets whatever definition of AI we decide on, we know when it was designed and built.