Artificial Intelligence and evolution

Aye…and there’s the rub. There is no authoritative “we” picked out to make such a firm definition.

However, designs do evolve. Most designs grow gradually, with minor changes, just like living things. You see all sorts of things that aren’t too useful anymore but are left in because that’s easier than taking them out. You see the occasional revolutionary change (which often crashes and burns, like the design I worked on at Intel). And of course you see designs adapting to their environment.
Not that this matters much for AI, which will be more of a software thing anyhow. People have been advocating special-purpose processors for 50 years, but it never works out.

Are you contemplating the day when an AI will be the designer that builds a better AI that will be the designer that builds a better AI, and so on?

It’s worth reiterating several things that were already pointed out. The most egregious error here is equating AI with hardware. Computer hardware is not and never will be “AI”; it just provides a platform for it. AI is fundamentally about the organization of information and the ability to process it to produce meaningful results – i.e., software. And software can be intrinsically self-modifying and self-improving, through directed training or non-directed learning. Secondly, it’s also a mistake to regard incremental evolutionary improvements in hardware and software design as fundamentally different from natural evolution. They are not spontaneous, but the basic processes are much the same.

Yep! We already have a lot of computer-assisted design. We have lots of error-catching routines in CAD. The time will come (I think – I hope!) when computers take over the majority of the task of writing lines of code. Then Molly bar the door!

(Except I’m one of those who thinks our robot overlords will probably be better than our human overlords have ever been. Techno-optimism!)

(Harlan Ellison, of course, famously sketched out another alternative…)

Since I’m not an expert in either artificial intelligence or biological evolution, I can’t tell to what extent natural intelligence and artificial intelligence resemble each other in structure, performance and potency.

However, as I’ve mentioned before, they differ radically with regard to their final cause (as Aristotle would call it), that is, their function.

This is the reason why the stupidest bug has a better chance of survival than the most sophisticated robot. In my opinion, survival is a more complex task than winning a Go match, and the overall intrinsic intelligence of a bug’s system surpasses that of the robot.

I think Google DeepMind is an interesting form of generalised intelligence…

Here it was a viewer/player of a randomly generated maze game and was rewarded for getting items.

It wasn’t programmed with 3D navigation - it just looked at the screen and learnt how the controls worked.

DeepMind has explained how it works; there’s also a part of the video where it learns to play Space Invaders.

It involves general-purpose digital neural networks that use reinforcement learning (it learns from rewards such as the game “score”).

“the reinforcement that DeepMind uses is model free meaning it doesn’t need a structure or a set of rules to learn”.

It is a type of “intuitive” intelligence.
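
To make “model-free” a bit more concrete, here is a minimal sketch of tabular Q-learning on a made-up five-state corridor. It is nowhere near DeepMind’s deep Q-network, and the toy environment is purely an assumption for illustration, but it shows the core idea: the agent improves from rewards alone, with no rules of the game built into the learner.

```python
import random
from collections import defaultdict

# Toy "corridor" environment: states 0..4; the agent starts at 0 and gets a
# reward of +1 only when it reaches state 4. The learner never sees these
# rules; it only observes states and rewards (that is what "model-free" means).
N_STATES = 5
ACTIONS = [-1, +1]  # step left or step right

def step(state, action):
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

Q = defaultdict(float)                  # estimated value of (state, action)
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward the reward plus the
        # discounted value of the best next action. No model of the game is used.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy should be "move right" in every
# non-terminal state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```

The only things the learner ever touches are states, actions, and rewards; the step() function stands in for the game it cannot see inside. DeepMind’s systems replace the table with a deep network and the corridor with raw screen pixels, but the reward-driven loop is the same basic idea.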

I don’t believe this analogy holds. What makes computers intelligent is not the same thing that makes humans intelligent. Consider a well-known artificial intelligence like Deep Blue or AlphaGo. Both have managed to defeat human champions at their games (chess and Go respectively), and these systems can likely continue to get more and more advanced over time, but they’re still only better than humans in a single narrow field. There are hopes that the algorithms developed for AlphaGo can be generalized, but that hasn’t happened yet. And that’s the primary difference. If you look at evolution, we can see progressions over time where our ancestors become more and more like us. When we look at artificial intelligence, the systems have gotten better over time, but there’s no clear progression from intelligence in a single narrow field into general intelligence.

Rather, I think a better analogy would be along the lines of abiogenesis or civilization. Before a particular event, there were things that came close to certain aspects of these, but there was still a point where something wasn’t alive and then was, or where there wasn’t a permanent settlement and then there was. In that regard, we have pieces of what might ultimately become an artificial consciousness, but we’re not even really sure right now what constitutes either. We can look back and define life as having certain basic properties, but I think the key thing is that there’s currently no real way for any existing code to evolve itself into something further; we need to create that basic framework first. I think from that point on we’ll potentially be seeing some form of digital life; whether it inherently leads to the singularity or not remains to be seen.

I also disagree with the idea that software and hardware are not linked as part of what constitutes life. Evolution is a process of both changes in form and function. Looking at more advanced forms of life, humans can only exist BECAUSE we have a sufficiently advanced physiology (brain AND body) to handle human consciousness. To a certain extent, human consciousness has reached a point where it evolves faster than our biology can keep up with, but we’re still constrained by the limitations of our biology. We’ve reached a point of, in a sense, meta-evolution, where we’re tapping into aspects that may have evolved by accident or in response to one stimulus and using it for another. We have some primitive forms of generalized intelligence, but unless we have hardware sophisticated enough to handle any potential needs a generalized intelligence may require to achieve something we’d recognize as artificial consciousness, it just can’t happen. That’s like assuming that a rat can suddenly learn to communicate in human language; its hardware just doesn’t have the capacity to achieve that.

I think that’s quite wrong. Even in advanced species like ourselves, most survival skills are instinctive or sensory – like the fight-or-flight response, or the acute perception of motion in our peripheral vision. A bug has no intelligence at all, yet its survival abilities as a species are sometimes remarkable, in part because of instinctive behaviors and because of rapid biological adaptation to changing conditions. That has nothing to do with intelligence.

I disagree. Mosier’s point is that intelligence is a continuum, and in this he is quite correct. At the lowest level you might have something that is a glorified calculator, at higher levels you have the manifestation of intelligence as an emergent property, and at higher levels still, I believe, one sees the manifestation of consciousness and self-awareness.

The argument about “general intelligence” being somehow fundamentally different from AI is bogus. It isn’t anything fundamental, it’s just another continuum. A system like Watson or its various spinoffs is in one sense quite general because it can deal with unbounded volumes and types of information, but in another sense very narrow because you can’t ask it to play cards or chess with you or help you compose a poem. So which is it? And the answer is that it’s not a meaningful question, and the really pertinent measure of any particular intelligence isn’t how “general” it is but how useful it is. And the answer is that in the speed, accuracy, and sheer analytical prowess of many problem solving applications, machine intelligence is very useful indeed because it so greatly exceeds human capabilities.

We’re probably very far away from having any kind of highly general human-like machine intelligence not because it’s in any way fundamentally different from the specialized kinds, but because there’s no particular need for it. There’s especially no particular need for a machine intelligence that does a great many different things but none of them particularly well – we already have seven and a half billion of those!

The analogy I would draw is that of robotics. If you told somebody in the 1920s or 30s that we would be sending robots to Mars and that robots would be extensively involved in manufacturing cars and many other things, or that when you call any large company you’d likely end up talking to a robot and possibly get all your business done without ever talking to a human, they’d probably imagine a humanoid creation with two metal legs, two metal arms, and two gleaming eyeballs. But robots are built to do specific jobs and do them exceptionally well, and nothing else, and they look nothing like that at all – and if they did look like us, they wouldn’t be very good at what they did. I think it’s instructive to see AI much the same way.

There will be such an evolution, but unlike the evolution of intelligence in humans we’ll set up milestones along the way and note when they’ve been reached. We don’t know who the first human to use abstract logic to solve a complex problem was, or when, but we’ll know when that happens in a machine.

We will? And who will be deciding what these milestones shall be?

It will be a matter of consensus at some point. There will be debate, but we can reach reasonable conclusions about what the first steam engine or airplane was, and we’ll do the same for artificial intelligence. We can’t say who the first intelligent human was, or come close to determining that. This time, though, we have observers who can make such a determination, even if it’s one you don’t agree with.

What I don’t agree with is the assertion that we’ll know AI when we see it, via some objective criterion of performing abstract logic, and there will be consensus on it. Many would argue that this has already happened and happened long ago (when chess playing programs became really good, for example, or machines like Watson that solve complex problems to satisfy natural language queries) while others like Hubert Dreyfus argue (incorrectly) that not only has it not happened, it never will.

It’s easy to forget today, when for a few bucks you can download a master-level chess-playing program to your desktop computer or maybe even your phone, that for a long time playing chess well was considered a potentially insurmountable task for a computer, for the simple reason that it was, and remains, beyond the reach of simple brute-force evaluation. Many believed that computers would be able to play a decent basic game but would never be really good at it, because grandmasters themselves couldn’t explain how they did it. So it kind of became the poster boy for a “true” test of AI. Today it no longer impresses anyone. AI is an amazing instance of moving goalposts. It’s almost like “true AI” is continuously redefined as whatever is beyond where the goalposts are right now.
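
To illustrate what “beyond simple brute force” means in practice: chess programs combine heuristic evaluation of positions with search that prunes branches it can prove are irrelevant. Here is a toy sketch of alpha-beta pruning over a hand-built game tree – the tree and the leaf scores are invented for illustration, and a real engine adds heuristic position evaluation, move ordering, transposition tables, and much more.

```python
# Alpha-beta search over a hand-built game tree. Leaves are numbers
# (heuristic scores); internal nodes are lists of child nodes.

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: the minimizing player would never allow this branch
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # prune
        return value

# Textbook-style example: the root is a maximizing node with three minimizing
# children. The minimax value is 3, and pruning skips several leaves entirely.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # -> 3
```

The point is simply that the search never has to look at most of the tree; combined with a decent evaluation function, that is how programs got strong at a game whose full tree is hopelessly beyond enumeration.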

I think we’ll know when a breakthrough point is achieved. I could easily be wrong about that too. Chess playing as a measure of intelligence is rather old; chess-playing computers don’t demonstrate the level of intelligence that will create a milestone, and machines designed to play chess aren’t the same as a machine that can learn to play chess. So I think we’ll get that moment at some point, and it could take decades before the consensus arrives too, just like with the Wright Brothers. We’ll recognize that point because we’ll see the future evolution radiating outward from it. Or maybe not – in the future we could look back and see all development of artificial intelligence coalescing out of an amorphous cloud of technology – but I think human nature will lead us to name and identify a point.

Also, once we get there, the machine will be able to tell us the answer :slight_smile:

True…but that kind of process can’t go on forever. (Although we do still have people who believe the earth is flat…) There will come a time when the evidence is too strong to be denied.

It’s like the quip about fusion power always being twenty years in the future. Okay, yeah… But some day, we actually will have functioning fusion plants. It’s a quip, not an actual law of physics.

For many, the Turing Test will be the breakthrough/consensus point. When an AI system holds so natural a conversation that people can’t tell it’s not a person, only a few die-hard “no, it has to be biological!” hold-outs will be left in the denialist camp.

Everyone else will start worrying about AI civil rights.

Yes, eventually, but it’s hard to score a goal when the goalposts are running away from you so fast! There is this pervasive sense that anything that we create is somehow mechanistic and therefore inferior to nature and especially inferior to our self-celebrated exalted selves. It’s as if any machine that exhibits intelligence is really doing so only because of clever and elaborate “tricks” we’ve built into it. As Marvin Minsky once said, when you explain, you explain away – that is, when you explain how something was done, even in a high-level conceptual sense, it somehow becomes reduced to the mundane, to “a simple matter of programming”, and loses all its mystique regardless of how wondrous it really is. And somehow this remains true even when part of the explanation is that it spent months learning, and the machine now operates on a model of the world that you never built and no longer understand yourself.

I have great respect for Alan Turing and his accomplishments, and great sympathy for the sad way his life ended, but I’m not sure the Turing test is really all that useful any more. Now that we have natural language parsing engines and incredible online information resources, it wouldn’t be that hard to create an AI that could quite easily pass the test without being any sort of demonstrably useful intelligence. As I said earlier, I think “usefulness” rather than either generality or “humanness” is the most important determinant of true AI.

And, speaking of useless, I wonder if Microsoft will bring back Tay? :smiley:

But, again, isn’t that kinda sorta already built into the Turing test?

Because, again, how do we establish that I’m intelligent?

Sure, like Watson I could compete against you in a natural-language trivia contest; and I may well win. And I could maybe carry on a seemingly human conversation with you while beating you at chess. But if that’s not enough to establish my bona fides with you, if you suspect I’m just a chatbot that can do a few tricks instead of an intelligence that’s significantly similar to a well-educated human – if you won’t sign off on my intellect until and unless you can ask me to do something useful – well, then, (a) do so, and (b) either I come through or I don’t, right?

So, go right ahead: for all you know, I’m a chatbot with no useful intelligence; or maybe I have an exquisitely useful intelligence. Can you put me to the test?

There was an article in Scientific American, some while ago, that argued that AI will be achieved when a conversational system can cope with weird context. Not only idiom – “When pigs fly!” – but, for instance, nderstanding entences ike his ne. You did it easily; when an AI does this, it will have taken a huge step toward true intelligence.

Heh. I just typed ‘nderstanding entences ike his ne’ into google, and it asked whether I meant ‘understanding sentences like his ne’.

So while I get your point and it’s not there yet, I figure that’s a pretty good start.
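
For what it’s worth, a crude way to see how that kind of suggestion can work – Google’s actual corrector is vastly more sophisticated, and the tiny vocabulary below is just an assumption for the example – is to pick, for each mangled token, the nearest dictionary word by edit distance:

```python
# Crude spelling-suggestion sketch: for each mangled token, pick the
# vocabulary word with the smallest Levenshtein (edit) distance.
# A real corrector would also weigh word frequency and surrounding context.

def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

VOCAB = ["understanding", "sentences", "like", "this", "one"]

def suggest(token):
    return min(VOCAB, key=lambda w: edit_distance(token, w))

print(" ".join(suggest(t) for t in "nderstanding entences ike his ne".split()))
# -> "understanding sentences like this one"
```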

yeah, Google…wow!

I think my Chromebook blushed when it “saw” your example…:wink: (best 300 bucks I have ever spent)
We humans have quite the range of “useful” intelligence… I think AI is similar, judging by the last hundred years… and perhaps it will acquire a “strong” sense of self, i.e. consciousness, someday.