The problem I have with the singularity

Unspoken if’s? Really?

Let’s briefly skim my post…

“Every discussion about AI is built on the first big if.” “All of this is still built on a very big if.” “Of course, we don’t know if we have the tools to start the evolutionary process you talk about.” “Assuming the evolutionary process works – and that’s a big if – it will continue to work well past human intelligence.” “If we can develop a human-level intelligence in human time frames, rather than the geological time frames that it took to develop us,” “I’m skeptical we’ll crack this nut anytime soon, but…”

The word you claim to be “unspoken” appears at least 15 times in my post. I’ve had people ignore what I say before, but you probably just broke the record.

Here are your own words. “I do think we will eventually create AI’s of human or slightly above human intelligence.”

You are complaining about “unspoken ifs” that not only were not unspoken, but some are actually points that you fully agree with. You pounce so eagerly, you end up attacking your own arguments.

At least you’re right that Blake’s post was good and on target.

Good argument. I don’t disagree.

My post was obviously unclear, but I was trying to talk about a “theory of general intelligence”. That is to say, understanding the underlying principles by which the most basic machine intelligence might be engineered. I don’t mean full understanding of its own current processes, which as you say wouldn’t work. To jump off another point you raised:

Great analogy. Let’s shift from animals back to computers, back into the world of big ifs. Suppose we have a general intelligence, Version A. From Version A and some evolutionary process, eventually a Version B is developed which is similar but noticeably better.

Version B wouldn’t be able to tease out the difference between itself and Version A. But time passes. New versions are developed. This might be more hardware than software improvement, but however it happens, improvements are made. Eventually, given enough time, a Version Z would be developed – the beneficiary of many long years of hardware advancements – and if it has enough processing power, it would eventually be able to tease out the underlying reasons why Version B’s software is superior to A’s. Which is to say: it is fully plausible that it would be able to engineer an intelligence that worked on Version B’s original hardware, but which was more efficient than Version B. Call it Version B-Prime. Version Z would be a machine intelligence that could engineer new machine intelligences from scratch.

This is what I mean by “understanding itself”. Not full understanding of its own current processes – which would be impossible, as you rightly note – but understanding the basics of machine intelligence, the family to which it belongs. It would be able to decipher the principles that underlie the differences between Version A and Version B, and the differences between them and their predecessors.

We might even imagine that the primary difference between Version Z and Version B is the hardware they run on, more than the software. That’s another if, and maybe a big one, but it’s not unreasonable. We can’t link ten thousand human brains together to understand the difference between a singleton human brain and a cat brain… but computers might just manage to do that. So if Version B-Prime were given the same hardware that Version Z runs on, it might just be an absolute improvement in efficiency. Hardware improvements that are powerful enough might conceivably lead the way to software improvements, which would make subsequent hardware improvements even easier and more likely. We don’t know the limits to this, so it’s easy for our imaginations to run wild and visualize the process continuing strongly for some time. This is where I think the “singularity” people mostly fall.

Obviously, I don’t see this sort of self-reinforcing process as inevitable. But still, it has a certain plausibility. At the very least, I think it’s fully defensible that eventually a Version Z would be created that could understand Version B, even if nothing practical could come from that understanding. And I do think that sort of insight qualifies as a form of it “understanding itself”, especially if the main difference is massive hardware changes rather than software.

I thought that was a really poor argument Blake made. As Matt Ridley is fond of saying, no human being can build a computer mouse. When you factor in all the information and skills to get all the parts, no one human can do it. But you get thousands of humans all doing their part and you can build one.

In a similar vein, no one human could build strong AI, but endless thousands of humans working both independently and together could build one. So it doesn’t matter that no single person can build one; you can have many people each doing a small part.

Even so, if/when we build machines whose cognitive abilities are better than ours and whose hardware is easier to upgrade, it is pretty specious reasoning to assume that, because no human can upgrade their own brain, the machines we create will likewise be unable to upgrade their own hardware and software.

No, the argument isn’t countered by an appeal to emergence - the point is that there are no components in even that type of computer that weren’t added by the designers, so the AI will know what components and software went into making it. If full understanding isn’t required for humans to build the AI, it isn’t required for the AI to build one either; anything else is special pleading.

Are you saying the creators of Watson don’t understand how its components function?

I would really, strongly advise that the first AIs are isolated in this way until we know they are safe. And we might never know that they are safe. We could provide them with as much information as they desire, maybe with a few exceptions (such as rocket launch codes); but we should avoid giving them the ability to affect the world directly.

A popular entertainment among futurists appears to be the ‘AI-box experiment’; a player takes the role of an isolated AI, who must attempt to persuade another player (the Gatekeeper) that the AI should be released.

One way to gain release might be to demonstrate that you (as an AI) are essentially as human as anyone else, and deserve freedom on that account; this might be a bit tricky, but someone would eventually fall for it, I’m sure.

I think I can address all of that by addressing that last question, which implicitly makes several incorrect assumptions. First of all, you seem to be using “components” interchangeably with “the system”, and the fundamental distinction between the two is precisely my point. Of course the people on the Watson project understood how the individual components they were responsible for worked. That does not in itself mean that they could in any meaningful sense predict how the system would work. They could not necessarily know how good it would be, nor is the system’s behavior necessarily even deterministic, at least no more than any other autonomous intelligence, because the system could be growing its knowledge base while continuous optimization enhances its decision-making strategies, so that the system eventually acquires a level of performance (and perhaps a set of behaviors) that was not in any sense predictable, and that exceeds both the individual and the collective capabilities of its creators.

In what meaningful sense can such a computationally evolved system be said to be “understood” by the people who developed the original components? What can they reliably tell you about what it’s going to do next?

This is the same argument as people saying “Google has so much info it’s going to turn sentient”.

Watson doesn’t work that way, and the engineers certainly knew that the many hand-crafted algorithms stitched together to create the final sequence of data gathering and inference were not going to suddenly reprogram themselves or add a new step to the sequence.

No, I’m very carefully not.

Nor would a self-extending AI need to, any more than a human extending the same set of components.

Point being, complete understanding isn’t required for expansion or enhancement to proceed.

No, it is not. Nor would I ever make such an absurd argument.

I’ll start by saying that I seem to recall having discussions on related subjects with you before, and if I’m remembering correctly (maybe I’m not) you’ve firmly taken the side of those opposed to the computational theory of mind. In any case, let me say that it’s well accepted in cognitive science, though it does have its opponents, like John Searle, whom I mentioned here and here. Do you know what I mean by a computationally evolved system? Many of the points I’m making are the basis of the discipline called computational intelligence. If you reject those principles, I doubt we’re going to resolve this here.

As for Watson and how it works, I was speaking generically about computational intelligence and wasn’t suggesting that Watson necessarily meets all the criteria of such a system, so I don’t want to belabor it, but I mentioned it because it has some of them and has been very successful. It seems odd to claim that “Watson doesn’t work that way”, as if everything it did was all neatly pre-programmed like the algorithms of a tax-filing program. IBM in fact is at pains to make the point that in many ways the DeepQA engine does work just the way I described. Herewith a few quotes from Building Watson: An Overview of the DeepQA Project, Ferrucci et al. (2010), AI Magazine – emphasis added by me. In this discussion “TREC” refers to an annual text retrieval conference and competition sponsored by NIST and the DoD that provides benchmark challenges for QA-based retrieval systems.

What is far more important than any particular technique we use is how we combine them in DeepQA such that overlapping approaches can bring their strengths to bear and contribute to improvements in accuracy, confidence, or speed …

Jeopardy demands strategic game play to match wits against the best human players. In a typical Jeopardy game, Watson faces the following strategic decisions: deciding whether to buzz in and attempt to answer a question, selecting squares from the board, and wagering on Daily Doubles and Final Jeopardy … These challenges drove the construction of statistical models of players and games, game-theoretic analyses of particular game scenarios and strategies, and the development and application of reinforcement-learning techniques for Watson to learn its strategy for playing Jeopardy.

… The extended DeepQA system was applied to TREC questions. Some of DeepQA’s answer and evidence scorers are more relevant in the TREC domain than in the Jeopardy domain and others are less relevant. We addressed this aspect of adaptation for DeepQA’s final merging and ranking by training an answer-ranking model using TREC questions; thus the extent to which each score affected the answer ranking and confidence was automatically customized for TREC. Figure 10 shows the results of the adaptation experiment. Both the 2005 PIQUANT and 2007 OpenEphyra systems had less than 50 percent accuracy on the TREC questions and less than 15 percent accuracy on the Jeopardy clues. The Deep-QA system at the time had accuracy above 50 percent on Jeopardy. Without adaptation DeepQA’s accuracy on TREC questions was about 35 percent. After adaptation, DeepQA’s accuracy on TREC exceeded 60 percent. We repeated the adaptation experiment in 2010, and in addition to the improvements to DeepQA since 2008, the adaptation included a transfer learning step for TREC questions from a model trained on Jeopardy questions. DeepQA’s performance on TREC data was 51 percent accuracy prior to adaptation and 67 percent after adaptation, nearly level with its performance on blind Jeopardy data.

The result performed significantly better than the original complete systems on the task for which they were designed. While just one adaptation experiment, this is exactly the sort of behavior we think an extensible QA system should exhibit. It should quickly absorb domain- or task-specific components and get better on that target task without degradation in performance in the general case or on prior tasks.
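To make that adaptation step concrete, here’s a minimal sketch of the general idea – retraining a final answer-ranking model so that the weight given to each evidence score is learned from domain-specific questions rather than hand-coded. This is emphatically not IBM’s DeepQA code; the feature names and data below are invented for illustration.

```python
# Hedged sketch: retrain a final answer-ranking model on a new question domain,
# so the weight given to each evidence score is learned rather than hand-coded.
# Feature names and data are hypothetical; this is not IBM's DeepQA code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row: scores from independent evidence scorers for one candidate answer.
# Hypothetical columns: [passage_support, type_coercion, popularity, temporal_match]
X_jeopardy = rng.random((5000, 4))
y_jeopardy = (X_jeopardy @ np.array([2.0, 1.5, 0.3, 0.8]) + rng.normal(0, 1, 5000)) > 2.3

X_trec = rng.random((1000, 4))
# In a different domain, the same scorers matter to different degrees.
y_trec = (X_trec @ np.array([0.5, 2.5, 0.1, 1.8]) + rng.normal(0, 1, 1000)) > 2.4

ranker = LogisticRegression().fit(X_jeopardy, y_jeopardy)    # trained on Jeopardy-style data
ranker_adapted = LogisticRegression().fit(X_trec, y_trec)    # retrained for the new domain
print("weights before adaptation:", ranker.coef_.round(2))
print("weights after adaptation: ", ranker_adapted.coef_.round(2))

# Candidate answers for a new question are then ranked by predicted confidence.
candidates = rng.random((10, 4))
confidence = ranker_adapted.predict_proba(candidates)[:, 1]
print("best candidate index:", int(confidence.argmax()))
```

The point of the toy is only that the ranking behavior ends up encoded in learned weights, not in an algorithm anyone wrote down for that particular domain.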

Then I’m not understanding you. When you ask me “Are you saying the creators of Watson don’t understand how its components function?” it seems clear you’re asking about the components (and not the system). If you claim they are the same thing, well, that’s the whole point of disagreement.

Hey! That’s my point! :eek:

I’m not sure what singularity means in this context. Will people from today understand what is going on? No way! Will people then be enhanced? Sure, but we are also. When I was in high school I had to go to the library to look things up, and to the big Queens Central Library to look up anything complicated. Today I pull out my phone.
I think they’ll be loading people into computers in 50 - 100 years. I don’t think strong AI is impossible - I just think the brain simulator is going to be easier.
In fact, one could almost say we’ve already been through a singularity. How much of our conversation would someone plucked from 50 years ago understand? How much of our social interaction (including what we are doing right now) would they understand? Back then, interactions with people from other countries and other continents were rare and sometimes the subject of books. Now we do it without even noticing it.
Incremental changes lead to big changes over time.

I know them well. My undergrad logic lab consisted of wiring together little boards each containing a few NAND gates or one or two flops. I learned logic before synthesis existed, and what I used to do was done at the gate level.
But this model doesn’t work for processors anymore. At 20 nm process technologies and GHz clock speeds you can’t design or debug assuming you have 1s and 0s. Logic errors almost all get flushed out during simulation before first silicon. Any that sneak through get caught quite quickly. The real errors you see involve noise, coupling, power problems, and a host of others. One interesting bug we found, which I gave a talk on, involved a signal line that coupled to a power bump and thus mysteriously went from a 0 to a 1 with nothing else happening. In fact, in one of my columns I proposed that we no longer teach people learning logic about 1s and 0s, and instead teach them that the gates that they see will have input signals with very interesting properties. Square waves, like you see on timing diagrams, don’t show up on scopes for any interesting circuits.
So, if someone’s model of a circuit is that it is made up of gates, they will fail miserably when debugging it in the real world.

I was on a business trip when the 386 was announced. In the USA Today article they said that with such power, AI was just around the corner. We laughed.

I assure you, the result of the code I write is stuff I could never do by myself, and it finds stuff I’d never come up with. I’ve written puzzle-solving programs much better at solving the puzzle than I am. I’ve read a lot of stuff about chess programs (I’ve used some similar heuristics in other applications) and I knew some people working on it at Illinois. But chess is nothing like intelligence. Those who thought computers could never beat people at chess didn’t understand this and were wrong - but people who thought that computers beating people at chess said something about AI were equally wrong.
Kasparov saw intelligence because we are wired to see intelligence in things we play with or talk to. But Weizenbaum’s secretary got fooled by Eliza. We’re not good at this.

Forget chess - starting in 1959 Samuel wrote a checkers program, which he trained by having it play against itself. Checkers isn’t as interesting, but I believe I’ve read that checkers programs are far better than any human.

The old (maybe new) model of AI was that if you do enough of these pieces and stitch them together, you’d have intelligence and also learn something fundamental about thought. Didn’t work. You learned a lot about heuristics, and created cool code much of which is useful, but we’re no closer to understanding intelligence. We need a framework. They’ve tried, but none seem to work.

I vaguely thought about working for the AI lab, and had an interview with Minsky, but I failed my Lisp test (I hate functional programming), and given what happened it was just as well that I did.

Like I said, most of the immediate goals from my class have been met. If you define AI as speech recognition, chess, route creation, solving hard equations, AI has done great. If you define AI as understanding and building a model of an intelligent entity, it has flopped.

I’m asking if you’re claiming Watson, specifically, has emergent elements its creators can’t account for.

Here is the problem with systems/components. Yes, you can understand how the components work, quite well. If you assemble components and understand, more or less, how they interact, you might get lucky and build a working system. But the vast majority of bugs involve component interaction. On the software side, that is the reason for object-oriented programming. In the old days, hardware designers were luckier than software designers because, no matter how you screwed up, block A couldn’t mess with the internals of block B unless you routed a wire there. Today, not so much. Block A can switch massively, cause a power glitch, and screw up block B even with no direct interface.
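To illustrate the software side of that in a toy way (the names here are made up): each “component” below is correct on its own, but shared state lets one silently break the other, which is exactly the kind of interaction bug encapsulation is meant to rule out.

```python
# Toy illustration (all names invented): two "components" that are each
# correct in isolation, but that interact through shared mutable state.

shared_buffer = []                # scratch space both components reach into

def block_a(samples):
    """Accumulate samples and report their running average."""
    shared_buffer.extend(samples)
    return sum(shared_buffer) / len(shared_buffer)

def block_b():
    """Reset what it believes is its own scratch space."""
    shared_buffer.clear()         # side effect: silently corrupts block_a's state

# Encapsulated version: each component owns its state, so any interaction
# has to go through an explicit interface instead of a shared "wire".
class Averager:
    def __init__(self):
        self._samples = []

    def add(self, samples):
        self._samples.extend(samples)
        return sum(self._samples) / len(self._samples)

class Scratchpad:
    def __init__(self):
        self._scratch = []

    def reset(self):
        self._scratch.clear()     # can no longer touch Averager's internals
```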

And when you design systems, you need an architecture. We don’t have an architecture for AI today. We don’t know that we have all the components required for AI, and we definitely don’t know how to hook them up.
And, to get funding and write papers, AI people work on one problem at a time, in isolation. But it is more likely that our brains have one method for all these problems, and don’t have modules for chess playing, etc. So the component-based approach to AI might be exactly the wrong approach. And I buy the computational model of intelligence - we’re just using the wrong model.

That’s an interesting analogy but I don’t think it’s apposite. There’s more than one big qualitative difference between something like a computer mouse and something like intelligence.

First, although it’s absolutely true no one person knows every detail of putting a mouse together, there are people who understand the big idea. They understand the general principle by which each component of a mouse works. That’s why the final assembly team knows who to call to get the parts they need. They have a general idea that’s good enough that they can contact the correct suppliers. That general knowledge doesn’t exist for intelligence. Literally no one has any idea how the general idea works.

Second, the knowledge of building a computer mouse can be divided into discrete pieces, each of which can be understood separately without need of understanding the whole. No one grasps the perfect entirety of the mouse, but each discrete piece has at least one person out there in the world that completely and unconditionally understands it. It is the very fact that each discrete piece is fully understood by someone out in the world, who doesn’t need to know what other people are doing, that allows the mouse to be mass manufactured from a global supply chain. This is the key point: each person’s knowledge is not conditional on anyone else’s knowledge. This process is so diffuse that we don’t even know how many people we’re talking about in total, but we still know for a certainty that each person has a handle on their particular job, regardless of what other people are doing.

But in a chaotic system, a working knowledge of each piece is totally dependent on every other piece. To borrow one of the simplest examples from chaos theory, we can look at the Lorenz system. The change in x is influenced by x and y. The change in y is influenced by x, y, and z. The change in z is influenced by x, y, and z. The causation is recursive. Everything ends up causing everything else.
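For concreteness, here is a minimal numerical sketch of the Lorenz system with the standard textbook parameters. Every variable’s rate of change depends on the others, and two trajectories that start almost identically drift apart over time – the same degradation-of-prediction point that comes up with the hurricanes further down.

```python
# Minimal sketch of the Lorenz system, standard textbook parameters:
#   dx/dt = sigma*(y - x),  dy/dt = x*(rho - z) - y,  dz/dt = x*y - beta*z
# Each rate of change depends on the other variables (the recursion above),
# and two nearly identical starting points drift apart over time.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)   # crude Euler step, fine for a demo

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.000001)   # differs from a by one part in a million

for step in range(5001):
    if step % 1000 == 0:
        gap = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
        print(f"t = {step * 0.01:5.1f}  separation = {gap:.6f}")
    a, b = lorenz_step(a), lorenz_step(b)
```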

What this means is that it is impossible to become an expert on x by itself, because adequate knowledge of x’s behavior depends totally on y and z. Insight into the problem cannot be outsourced to different suppliers, because every single time they alter one of their pieces, that alteration will change the entire picture of aggregate behavior. That leads back to the problem of point one: no one understands exactly how the aggregate is supposed to work. That’s what emergent properties from chaos theory are all about. The Lorenz system is simple enough that we can hold the entirety in our head and untangle the nature of the three equilibria. But intelligence? That’s something else entirely.

Of course, this doesn’t preclude the possibility of developing a general machine intelligence. But it does seem more likely to me that such an intelligence will be evolved, rather than deliberately programmed.

I’d be lying if I said I understood chaotic systems, but the points you make are interesting.

However, if we don’t understand the general idea of intelligence, how can you claim that building it requires interdependent groups who understand each other’s part in the creation of the system? A global economy seems like an interdependent chaotic system that no one person can build or run, but we still manage to build and maintain one.

Even if intelligence needs to be evolved instead of designed, I’d assume things like genetic algorithms could be used to speed up that process.

OK, my apologies, that wasn’t how I interpreted your original question.

I think it’s misleading to consider that emergent behaviors are those you “can’t account for”, as if they have to be somehow mystical or even necessarily surprising. I think a more meaningful characterization might be to say that a system’s behavior is emergent if it is the aggregate result of complex interactions that you wouldn’t know how to formally specify – that is, that you wouldn’t know how to explicitly define the necessary behavior with a procedural algorithm. This is no doubt true of the end results of the interactions of Watson’s many individual major components, but it’s even more clear if you consider the adaptive training the system went through. The end result contains behavior-driving knowledge that was not – and could not have been – specified procedurally. This result – the integrated components and their optimally adapted states – is not just greater than the sum of the original parts, it’s a qualitatively different entity.

If “no one has any idea how the general idea works”, it’s funny then that we’ve managed to build many intelligent systems in specific domains, including domains for which many pundits conclusively declared it impossible for machines to outperform humans. It seems that every time that happens, the goalposts get moved, and the same pundits declare that that wasn’t “real understanding”, it was – as John Searle likes to say – just “symbol manipulation”. The irony is that these are mostly eclectic philosophers who are the very personification of having no real understanding of the field they are critiquing. These people are just rejecting the idea of computational intelligence and, by the same token, of the computational nature of the mind. But as Steve Pinker correctly observes, all intelligence is intrinsically computational – “patterns of interconnectivity that carry out the right information processing” – and intelligence is simply an emergent property of that process.

And how do you think a system like Watson works? It has over a hundred different principal components, many of them developed by entirely different groups and sufficiently loosely-coupled that they don’t even run on the same machine: natural language parsing, query decomposition, knowledge search, hypothesis generation, scoring, ranking, and confidence assessment, and the game-play strategy. And underneath all that is the supporting layer of runtime environment, operating system, and the underlying hardware. Watson is the synergistic total of all of them.

How on earth do you make the leap from AI to chaotic systems? What does that even have to do with it? Chaos theory has a few very specific applications in AI like predictive modeling and machine learning, but that’s about it. That whole argument is utterly irrelevant to AI. Intelligent systems of any degree of complexity are routinely modularized and “outsourced” to different skill sets – Watson is a great example of that.

Evolutionary computation will probably be an important methodology for AGI, yes, though by no means the only one.

Well, that’s an excellent point.

I almost considered exploring the differences between the chaotic characteristics of a global economy and (what I perceive to be) the chaotic characteristics of intelligence. But I wasn’t sure it could be discussed briefly in adequate fashion, so I decided to drop it.

Economics sometimes carries a lot of baggage, so let’s shift from the economy and go briefly back to that Lorenz system I talked about above. It was originally created to model atmospheric convection. We have no computer in the world that could possibly simulate all the atoms of the earth’s entire atmosphere. That’s not how meteorologists roll. They create a simulation based on the underlying principles of physics – which we do understand from smaller-scale experimentation. They run their simplified models in the hope of extracting some insight which they can apply to understand the real atmosphere, and they’re able to do that because there is a fundamental relationship between their own computer models and the workings of the real atmosphere.

This is to say: the general theory of meteorological phenomena is not especially complicated. The basic idea, they understand. Look at a topic like hurricane prediction: it’s been getting better, and better, and better. What they simulate on their computers has managed to save lots of real human lives, because the actual hurricane follows very closely what their models predict. And this is still a chaotic system! “Chaos” is not really the best word for this, because what it actually means is not that prediction is impossible but that any prediction will inevitably degrade in quality over time. But giving folks a week’s notice that a major hurricane is incoming is pretty damn great for our purposes.

The system as a whole is utterly beyond us. But we don’t try to build up the system as a whole all at once. We build up from basic principles. We start with the discrete pieces of knowledge that we do, in fact, understand. We run simulations from those basic principles, and we hope the insight that we gain applies more generally, because the entire world is just a massively scaled up version of a simpler problem that we understand. (We don’t have to accept that this is true in economics to accept that it’s definitely true in meteorology.) We understand how the small puzzle works, and we believe that the big puzzle works along the same general theoretical guidelines.

But we cannot say the same about intelligence. As I said before: 1) We do not understand the big picture, and 2) we do not understand the discrete pieces.

We cannot scale down intelligence to the level we understand, and then use that simplified model to gain insight into the big picture, because when we dumb down the models to levels we understand, we are no longer looking at anything remotely resembling an “intelligence”. We do not have a general theory of intelligence in the same way that we have general theories of atmospheric physics (or to a lesser extent, general theories of the economy). It simply doesn’t exist. Having said that, we are not necessarily precluded from achieving a theory of general intelligence that is within our mental grasp. It might be much simpler than I expect. A human brain might be a huge complicated work-around based on a myriad of evolutionary pressures, where the sufficient software to create an intelligence is actually much less complicated and within our mental grasp. We might actually engineer a theory of general intelligence (rather than evolving an intelligence). If that happens, we would be able to improve on that understanding ourselves.

But I gotta say, that sounds strikingly unlikely to me. If intelligence were that much simpler, I’d expect it to have shown up more often biologically. This might be an ill-considered point on my part, but it’s the way it strikes me right now.

I think I agree, but that’s not quite the way I’d put it.

I’d say rather that genetic algorithms are one plausible way for intelligence to be evolved. And if we used genetic algorithms to develop an intelligence, we could of course keep using them with the hope that more breakthroughs would be made. If we have an evolved process, we can just keep on truckin and hope it keeps working. But I wouldn’t call that “understanding” in the same sense that I was using the word earlier.

My reason for stating that was your statement about the engineers not knowing whether Watson was deterministic, or even being fully aware of the set of behaviors it could exhibit, as if it could gain new behaviors beyond the sequence of data manipulation programmed into it.

I’m pretty sure I haven’t opposed the computational theory of mind, but I also don’t fully grasp all of the ins and outs of any theory of mind well enough to know whether I think it fully encompasses everything the brain is doing.

I would say my basic positions are as follows:
1 - The brain can be modeled with a bunch of math
2 - We can achieve the same thing as a human brain with non-biological systems

The arguments regarding symbols, syntax, and semantics are where I can’t really come to a conclusion with my knowledge. For example, clearly at some level and in some cases there is symbolic processing, but there is low-level processing going on (reaction to pain) that doesn’t seem to fall into that kind of categorization. In addition, there are low-level changes (magnetic fields, drugs, etc.) that alter high-level functionality but don’t fall neatly into the high-level descriptions I’ve seen (maybe those are easily handled, I’m just not sure).

I do have many years of experience evolving brains for artificial life simulations, so if that is the type of thing you are referring to, then yes.

You seemed to be implying Watson could gain some capabilities beyond the Q&A capabilities its entire sequence of steps is hard-coded to perform, as opposed to just improving on its answers using the built-in methods of optimization.

Well, it’s good to have something we can agree on! :slight_smile: I’ve now remembered that my impressions about your opposition to it came from this rather lively debate about mental imagery.

Some of what I was saying was about the potential behaviors of intelligent systems in general, but some of it does indeed apply to Watson/DeepQA. How do you distinguish between “just improving on its answers using the built in methods of optimization” and “gain[ing] new behaviors”? This is a rather crucial point, and the word “just” is misapplied. Indeed without those adaptive capabilities DeepQA would have no useful commercial application at all.

And the point is this: generally speaking – not specifically about Watson which is a rather ad hoc assemblage of different methodologies – these techniques are not just casual tweaks but potentially foundational aspects of how these systems work. Indeed one could plausibly begin with a starting point that had a system with no useful knowledge or capability at all except the engines for adaptive self-improvement. There is a vast difference between the relatively well defined task of designing and building such engines and the task of designing and building the kind of intelligent machine that might result after many cycles of adaptation, the latter potentially being impossible for us to create from scratch. And there’s a vast array of techniques that can be used to achieve these results, like different methods of learning (supervised and unsupervised, passive and active reinforcement learning) and dozens of associated learning algorithms. And then, orthogonal to that, there are evolutionary computation techniques that mimic biological evolution to develop optimization strategies, generally better suited to problems that tend to be ill-defined and amenable to “soft computing” approaches. Not all of these approaches are necessarily winners, but we have to get away from this idea that AI systems have to be “programmed” in the classical sense of hardcoded algorithms.
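As one very small, concrete illustration of an “engine for adaptive self-improvement” that starts with no useful knowledge at all, here is a tabular Q-learning sketch on a made-up corridor task. The task and parameters are hypothetical, and it is obviously nothing like Watson’s strategy module; it just shows the general shape of reinforcement learning, where the policy the system ends up with was never written down by anyone.

```python
# Tabular Q-learning sketch: the agent starts with an all-zero value table
# (no knowledge) and acquires a policy purely from reward feedback.
# The corridor task and every parameter here are made up for illustration.
import random

N_STATES = 6                       # corridor cells 0..5; reaching cell 5 pays off
ACTIONS = (-1, +1)                 # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy: mostly exploit current estimates, occasionally explore
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# The learned policy ("always step right") was never specified explicitly.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```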

The improvement thing is harder than it seems on the surface. It requires a few things:
1 - Encoding rules to accurately judge “improvement”
2 - An environment that requires/rewards those attributes

I ran into this problem when evolving creatures: their behavior was heavily determined by the rules I chose for survival/winning. I know, that sounds obvious, but what was not obvious was the extent to which the rules guided behavior/evolution, and how to break out of that and establish one set of attributes in the creature, retain them, and continue the evolution by adding more advanced attributes.

For example, my creatures learned to “see” (vision was 20 radar-like beam sensors) and could find and follow food and eat. But, due to the rules on energy consumption, the density of food in the environment, and the way the food moved, they devolved from searching for, following, and finding food into a simple circling process with eyes pointed outward to more efficiently locate food. The circling behavior would dominate until I tweaked the balance so the creatures retained their seeking and following behavior.

I ran into many of these types of things and continued to tweak around them as I pondered the next and more complex stages and tried to determine what types of attributes should exist in the environment to attempt to extract more advanced behaviors. What I believe now is that even down at the relatively simple level that I was operating, the balancing act to point the evolution in a desired direction is far more fragile and unintuitive than I naively assumed.

The guiding of the evolution process (IMHO) would be a complex field in its own right, involving lots of math to properly understand the balance and ensure the “improvement” matches the goal of the humans.
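Here’s a toy sketch of that balancing act (every number and genome here is invented): a minimal genetic algorithm in which the evolved “behavior” is purely a product of the fitness weights, so shifting the balance between rewarding food found and punishing energy spent is what flips the population between roaming and the cheaper circling-style strategy.

```python
# Toy genetic algorithm (everything here is invented): the "behavior" that
# evolves is entirely a product of the fitness weights, echoing the
# seeking-versus-circling trade-off described above.
import random

def fitness(genome, w_food, w_energy):
    search_effort = genome[0]                 # 0.0 = circle in place, 1.0 = roam widely
    food_found = search_effort ** 0.5         # roaming finds more food, with diminishing returns
    energy_spent = search_effort ** 2         # but movement costs energy quadratically
    return w_food * food_found - w_energy * energy_spent

def evolve(w_food, w_energy, pop_size=50, generations=200):
    pop = [[random.random()] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, w_food, w_energy), reverse=True)
        survivors = pop[: pop_size // 2]
        # refill the population with mutated copies of the survivors
        children = [
            [min(max(g[0] + random.gauss(0, 0.05), 0.0), 1.0)]
            for g in random.choices(survivors, k=pop_size - len(survivors))
        ]
        pop = survivors + children
    return max(pop, key=lambda g: fitness(g, w_food, w_energy))[0]

print("reward food heavily  :", round(evolve(w_food=3.0, w_energy=1.0), 2))  # evolves roamers
print("punish energy heavily:", round(evolve(w_food=1.0, w_energy=3.0), 2))  # evolves circlers
```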