Search on hardware design using genetic algorithms gave me mostly papers on the design of analog circuits, like this one (PDF). I thought there was something using FPGAs, but I saw it a long time ago.
I don’t know if you are familiar with genetic algorithms, but they are an interesting research area. GAs have been used in my field, but algorithms and heuristics that solve the problem directly are always more efficient. Evolution isn’t fast!
I think I used a PDP-11 with core that was bigger than that. And semiconductor memories are more complex than core memories - but faster and denser.
I’m sure you realize that at the heart of any digital circuit is a bunch of electrical signals, which used to look like square waves but now are very messy. Symbols depend on interpretation, but so do numbers.
I agree with you that intelligence won’t emerge from a bigger computer. It will need to be modeled, or it might evolve if the proper goals are presented to some sort of evolutionary hardware or computer system.
That might be true, but what the “something” is lies at the heart of the matter.
After all, art can be described as
- Take a blank canvas or piece of paper
- Do something to it with some sort of marking device
- Profit!
That applies to both a kindergartener with finger paints and Picasso. You’re saying they aren’t different.
No, I am describing what programs do. As I’ve said many times now, the lowest level of this logical view is the essence of what Turing-equivalent computation is, not the signals running through logic gates. This is the disconnect we keep having in this conversation.
I’m not “confusing” it, I’m making that statement outright: the size and speed of memory is one of the key factors, along with processor speed and the size and speed of external storage, that enables the growth of software complexity to the point that we begin to see synergistic effects.
As an aside, core memory was used well into the 1970s, and despite its high cost a mainframe might have a surprising amount of it for the time. A minicomputer like the PDP-8 might have only 4K or 8K words, but a PDP-10 (DECsystem-10) mainframe could have 256K 36-bit words in the KA10 model, each 32K in its own refrigerator-size cabinet, and the KI10 could have up to 4 megawords, though I never knew of any that had that much installed. Some IBM System/360 models could support up to 1 MB of core memory, and a few models as much as 2 or 4 MB. That was the general basis of my statement that back in the days of core memory, the equivalent of half a megabyte or a megabyte or two was considered big.
Voyager,
Genetic algorithms are tough to use in the confines of a project with time and $$ constraints. They work best to get you out of a local minimum and, as you point out, it’s quicker to ‘just do it’.
I wrote a genetic program that used the coefficients of a Joukowsky transform as ‘genes’ to derive airfoils from the coordinates of a circle. For a fitness function I used a simple wind tunnel program. All members of each generation were passed through the wind tunnel program to determine their degree of success. As the program ran, the best result was plotted on the screen. It was fascinating to watch the program ‘grow’ an airfoil.

Once I got over the Gee Whiz reaction, I analyzed what was happening. The GA was tracking the fitness function. Well, duh! That’s what a GA does. To get a novel result I would need to build a thousand wings for each generation and actually test them in the real world. My program was stochastic, but the fitness function was deterministic. However, the program did yield sections with predictable characteristics that I used on model flying wings.
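The skeleton of it was something like this (a toy Python sketch of that kind of setup, not my original code; the fitness function here is a crude stand-in, where the real program called the wind tunnel simulation instead):

```python
import cmath
import random

# Genes are the parameters of the circle fed to the Joukowsky transform:
# center (cx, cy) and radius r.
def airfoil(genes, n=64):
    """Map a circle through the Joukowsky transform w = z + 1/z."""
    cx, cy, r = genes
    center = complex(cx, cy)
    circle = [center + r * cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
    return [z + 1 / z for z in circle]

def fitness(genes):
    """Stand-in for the wind tunnel program: reward camber, penalize
    thickness.  The real version ran each section through the solver."""
    pts = airfoil(genes)
    camber = sum(w.imag for w in pts) / len(pts)
    thickness = max(w.imag for w in pts) - min(w.imag for w in pts)
    return camber - 0.1 * thickness

def evolve(pop_size=50, generations=100):
    pop = [(random.uniform(-0.3, 0.0), random.uniform(0.0, 0.3),
            random.uniform(1.0, 1.3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]              # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = tuple((x + y) / 2 + random.gauss(0, 0.01)
                          for x, y in zip(a, b))   # blend crossover + mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print("best genes (cx, cy, r):", evolve())
```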
David Goldberg published some approaches to using GA to create strategies. That’s a path that could yield results for true AI. It would be interesting to pursue but it’s really for folks who are way above my skill level.
This ignores that GAs are used in a lot of real-world applications, including engineering, with real schedule and budget constraints. I cannot speak to your specific experiment without seeing it in detail, but it is quite possible that:
1 - Your encoding scheme wasn’t appropriate to the problem;
2 - Your fitness function wasn’t appropriate to the problem; and/or
3 - Your wind tunnel program was too simple.
As for GAs simply following the fitness function: I don’t even know what that means. That’s kind of the point of any fitness-based search algorithm. Any fitness landscape that is a pure gradient doesn’t even need a search; the trick is when the fitness landscape is deceptive in some way. And this is where the different types of search algorithms work in slightly different ways to navigate a deceptive landscape. A GA, at the most basic level, works by finding good building blocks within the genomic structure and preserving them over successive generations. If there are no good common building blocks to preserve, then a GA is not a good choice of search algorithm (i.e., such a problem might prefer a swarm approach… and I’m speaking VERY broadly here. Picking the right algorithm is tricky, and probably 80% science and 20% art currently).
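To make “building blocks” concrete, here is a toy royal-road-style example (my own illustration, not tied to any project mentioned here). Fitness rewards only complete 4-bit blocks of ones, so a bit-flipping hill climber gets no signal inside an unfinished block, while one-point crossover can combine blocks discovered in different parents:

```python
import random

BLOCK = 4        # building-block size
N_BLOCKS = 8     # genome = 32 bits

def fitness(genome):
    """Royal-road style: credit only for complete all-ones blocks, so
    partial progress inside a block is invisible to selection."""
    return sum(all(genome[i:i + BLOCK]) for i in range(0, len(genome), BLOCK))

def crossover(a, b):
    """One-point crossover: whole blocks from each parent tend to survive
    intact, which is what lets the GA assemble them."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.01):
    return [bit ^ (random.random() < rate) for bit in genome]

pop = [[random.randint(0, 1) for _ in range(BLOCK * N_BLOCKS)]
       for _ in range(200)]
for generation in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:50]                               # truncation selection
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(150)]

best = max(pop, key=fitness)
print("complete blocks found:", fitness(best), "of", N_BLOCKS)
```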
Not sure of your point. I just described a project and my analysis of its useful results.
My main point is that GAs aren’t tough to use in a project that has time or budgetary constraints, as they are used with fair frequency in such projects.
My secondary point was to respond to:
I don’t know, maybe I misunderstood your post. I’ll write an AI to figure it out for me.
<nuclear launch detected>
Oh for the love of frak! That is NOT what I coded and you know it!
Needless to say, I agree with that, too, and “intelligence from a bigger computer” was not what I was saying. For the sake of clarity, I’m not saying that a sufficiently big computer or sufficiently complex software automatically acquires intelligence or some other human attribute, though that’s an unfortunate meme in sci-fi.
Intelligence has to be designed, at least in its basics, though it can be greatly enhanced by learning or, as you say, some other kind of evolutionary process. What I’m pointing out is the principle that a computational system of sufficiently great complexity will have fundamental qualities not found in a lesser system comprised of exactly the same components, just as in the analogy I gave between the brain of a roundworm and a human brain. A sufficiently complex system designed for a coherent purpose is greater than the sum of its parts.
The reason this principle is so important is that it addresses the criticism that has been leveled against AI by skeptics since the field was first founded, and which we’ve seen right here in this thread: the argument that, since no intelligence is evident in the underlying components of a system – individual subroutines, threads, and processes, or even independent computer systems in a network – there is therefore no intelligence in the aggregate system as a whole. This is a pervasive falsehood that fundamentally misunderstands the nature of intelligence and how it arises.
Never let the creationists hear you say that.
Creationists please note, I also left the door open to “some kind of evolutionary process”! It works well if you have lots of patience, say about a billion years or so. You also have to simulate natural selection by attacking, with a baseball bat, any evolutionary developments that aren’t heading for higher intelligence. It’s a lot of work. Much easier to just build the damn thing.
Wolfpup,
Thanks for the clarification.
My objection to the idea of intelligent numerical computers is not based on the lack of ‘intelligence’ in the basic component. What I have pointed out is that the aggregate is virtual. It does not exist as a physical entity. The aggregate is an illusion created by the observer. There is a single decision-making component. None of its decisions deal directly with the context of the application.
Watson distributes its operations over an array of computers, but my objection still holds. The routines and threads are just lists. When Watson answers a question, it is not contemplating the result.
However, I do agree that numerical computers will emulate human intelligence. They very likely will pass the Turing test. But that is not intelligence.
So, what would be the characteristics of intelligent machines? Would they exhibit a normal distribution of intelligence?
This is an idea that I wanted to add here at some point, but I held back because my voice lacks the professional tone and depth of the other participants. My intention was to point out the amazing navigation skills of the box jellyfish, although its nervous system consists of only 9,000-18,000 neurons.
I hear neurons are more complex than the building blocks of the devices AI runs on. First, man-made building blocks require far more energy to work than neurons do. The more building blocks, the more energy. If AI developers plan to create a computer that emulates human intelligence by snowballing the hardware, they may wind up with a system that proves too expensive to power.
Second, neurons seem to be intelligent in themselves because (1) they are machines with a state and (2) they have a kind of instruction set whose signals they can send to one another. Neurons can learn from each other.
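To put that claim in concrete terms, here is a toy sketch (entirely my own illustration, not a biological model): units that hold state, exchange signals, and strengthen the couplings that lead to firing, a crude Hebbian rule.

```python
# Toy stateful units that exchange signals and adjust their couplings.
# This only illustrates "neurons learning from each other"; it is not a
# claim about real neuroscience.

class Unit:
    def __init__(self):
        self.state = 0.0     # internal state ("membrane potential")
        self.weights = {}    # coupling strength per sender

    def receive(self, sender, signal):
        w = self.weights.setdefault(sender, 0.5)
        self.state += w * signal
        fired = self.state > 1.0
        if fired:
            self.state = 0.0                     # reset after firing
        # Crude Hebbian rule: strengthen a coupling that helped the unit
        # fire, let it decay slightly otherwise.
        self.weights[sender] = w + (0.1 if fired else -0.01) * signal
        return fired

a, b = Unit(), Unit()
for step in range(20):
    if b.receive(a, 1.0):                        # a keeps stimulating b
        print(f"step {step}: b fired, coupling now {b.weights[a]:.2f}")
```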
But artificial intelligence is not a reason for humans to worry. Computers lack will and voluntary action. The idea that computers will be made to feel something is a hypothesis based on faith, because nothing in their current structure justifies it. Even if it can happen, its realization is probably so far in the future that for most of us the issue may seem unlikely or irrelevant.
I beg to differ. You have repeatedly raised this bogus argument here, and here, and here, and most emphatically here where you said “There is no component or section of the computer that provides the kind of overview necessary for intelligence.”
This is (a) true, (b) absolutely irrelevant, and (c) betrays a fundamental misunderstanding about the nature of intelligence.
And I wish you would stop disparagingly referring to computers as “numerical,” which is not a valid or meaningful concept in computer science and dropped out of even the popular perception after the 1940s. Computers are general information processors that operate on symbols.
This is just incoherent. It repeats several times, in different ways, the strangely contradictory notion that intelligent behavior is not really intelligent if a computer does it, no matter how intelligent it actually is. All of that silliness has already been addressed several times, and this repetitive line of argument is no longer productive.
Here you seem to be confounded by a conflation with biological intelligence. Why would machine intelligence necessarily have a normal distribution? We have a normal distribution of intelligence because we’re biological entities with normal distributions of many physical attributes. Machine intelligence will be a function of its algorithms and heuristics and the capabilities of the hardware it’s running on, and also of the experience – i.e., directed or non-directed learning – that it’s been exposed to. But there’s no reason to expect the resultant distribution to conform to a Gaussian function.
I agree of course. But this meme is not just from SF. I was on a business trip when the 386 was announced, and USA Today said that true AI was just around the corner thanks to all this power.
IMO what is missing is a model of intelligence which can be experimented with and implemented, not a lack of computing power. Where I used to work we had thousands of processors running all the time doing simulations. (It helps when you make them.) I expect that when we understand what we want to model, we’ll make custom hardware to accelerate it, but in 45 years of reading about AI we don’t seem to be a lot closer to having a model.
Nature has had hundreds of millions of years to optimize neurons.
My bet is that experiments with artificial neurons will involve building them out of hardware and then fabbing chips with hundreds or thousands of neurons on them. Inter-neuron communication will be tough, but the neurons will certainly have states and accept multilevel inputs. Making them slow enough will help with the energy problem. One reason modern chips run so hot is that when you get really small you start having current-leakage issues, and chips that run faster run relatively hotter, which we considered a good trade. Neuron chips won’t necessarily have these problems.
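In software terms, the kind of unit I mean would look something like a leaky integrate-and-fire neuron (a minimal sketch; the threshold and leak values are arbitrary):

```python
# Minimal leaky integrate-and-fire unit: persistent state, graded
# (multilevel) inputs, a firing threshold, and a leak toward rest.
# A sketch of the behavior, not a hardware design.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.v = 0.0              # membrane state, persists across steps
        self.threshold = threshold
        self.leak = leak          # fraction of state retained each step

    def step(self, inputs):
        """inputs: iterable of graded (multilevel) input values."""
        self.v = self.v * self.leak + sum(inputs)
        if self.v >= self.threshold:
            self.v = 0.0          # fire and reset
            return 1
        return 0

n = LIFNeuron()
for t in range(10):
    spike = n.step([0.3, 0.15])   # two graded inputs per time step
    print(t, spike, round(n.v, 3))
```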
My understanding of Watson is that it comes up with a bunch of possible answers which it then evaluates by various strength metrics until it finds the best one. Kind of like how people often come up with Jeopardy answers. That may not count as contemplation, but it sure isn’t just coming up with an answer.
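In rough pseudocode, I picture something like this (a purely hypothetical sketch of the generate-and-score pattern; the candidates, scorers, and weights are invented placeholders, not IBM’s actual DeepQA pipeline):

```python
# Hypothetical generate-and-score sketch.  Every name and number here is
# a placeholder; the real system searches corpora and runs many more
# evidence scorers.

def generate_candidates(clue):
    # Stand-in for search over text corpora, knowledge bases, etc.
    return ["candidate A", "candidate B", "candidate C"]

SCORERS = [
    ("type match",    lambda clue, cand: 0.8 if "A" in cand else 0.4),
    ("passage score", lambda clue, cand: 0.6),
    ("popularity",    lambda clue, cand: 0.5 if "B" in cand else 0.3),
]

WEIGHTS = {"type match": 0.5, "passage score": 0.3, "popularity": 0.2}

def answer(clue):
    best, best_conf = None, 0.0
    for cand in generate_candidates(clue):
        # Each scorer contributes independent evidence; the weighted sum
        # is the candidate's overall confidence.
        conf = sum(WEIGHTS[name] * score(clue, cand)
                   for name, score in SCORERS)
        if conf > best_conf:
            best, best_conf = cand, conf
    return best, best_conf

print(answer("This 19th-century author wrote..."))
```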
As for your second paragraph, how would you determine if a computer is intelligent besides a Turing Test or something like it? If you can’t distinguish a machine intelligence from a human intelligence, why call the machine unintelligent?
My fear of AI has nothing to do with runaway intelligences that somehow think that humans are extraneous to some prime directive.
My fear is AI that humans harness to do bad things, which then gets out of control. My worry is ‘slaughterbots,’ not ‘HAL.’ You start programming machines to evaluate threats. Maybe in the beginning those machines go to a person in between to ask for final confirmation, but sooner or later someone decides it’s faster to skip that step.

Imagine a situation like the Syrian Civil War, only instead of bombs they start unleashing semi-intelligent swarms of ‘slaughterbots’ without regard for civilian casualties. Robots make war easy and more destructive. Right now, we know that there are people who don’t care about casualties. Sooner or later, those people are going to get hold of AIs and robot technology that will be able to cheaply kill lots of people. We’re well on that path now. Rebels from Idlib are launching drones at airbases in Latakia. Those drones are going to get cheaper and smarter; are we going to be able to contain them?

How long is it really before the technology exists that you go to bed one night and wake up the next morning and an entire country has been executed in its sleep? We haven’t solved genocide yet, and anything that makes it easier is a scary proposition.
I remember when the 486 came out, and in the news was “This is the physically fastest possible chip that can possibly be produced.”
I think I sat two cubes over from that marketing guy when I worked at Intel.