The problem I have with the singularity

I agree. It’s a complex field, but it is, in fact, a field of study in its own right, falling under the general category of guided random search techniques generally referred to as evolutionary computation. But more importantly, it’s only one of many ways of building intelligent adaptive systems – there is, for instance, as already mentioned, a whole taxonomy of machine learning methodologies and associated algorithms.

My basic point is that intelligent adaptive systems can solve problems that can be solved in no other way. This is because problems in computational intelligence can quickly outstrip our ability to formally define the solutions – not just because of the sheer complexity, but more fundamentally because the problem domain itself may not be well understood, or even amenable to such understanding.

You just answered your own question. It evolves through generations of iteration and selection. The difference is that, unlike a human brain, where biology limits each generation to roughly one every 25 years, the AI iterations will occur at an ever-faster pace.

I don’t think anyone disagrees that evolving solutions is a great tool.

But evolving solutions requires clear direction for the evolutionary process, and how to provide that direction is currently as much an unknown to us as it would be if we tried to create intelligence by any other method.

Something pretty interesting to walk through in your mind is trying to figure out what kind of simulated environment would “evolve” the kinds of attributes we are interested in.

What attributes would an environment need to have to result in consciousness evolving?

You’re missing my point. The mistake you’re making is putting a very narrow interpretation on what “the evolutionary process” means. What you’re describing is the very specific field that seeks to literally apply the principles of random mutation and natural selection to the computational paradigm, through techniques like evolutionary strategies, genetic programming, and genetic algorithms. Some even employ biological nomenclature – “populations”, “individuals”, “chromosomes”, “DNA”, etc.
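To make that narrow sense concrete, here’s a toy sketch of a genetic algorithm; the bitstring encoding, fitness function, population size, and mutation rate are arbitrary choices of mine for illustration, not taken from any real system:

```python
import random

# Toy genetic algorithm: evolve bitstring "chromosomes" toward a target pattern.
# All parameters here are arbitrary illustrative choices.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(chromosome):
    # Fitness = number of bits matching the target pattern.
    return sum(1 for a, b in zip(chromosome, TARGET) if a == b)

def mutate(chromosome, rate=0.1):
    # Random mutation: each bit flips with a small probability.
    return [1 - bit if random.random() < rate else bit for bit in chromosome]

def crossover(mom, dad):
    # Single-point crossover between two parent "individuals".
    point = random.randrange(1, len(mom))
    return mom[:point] + dad[point:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(100):
    # Selection: the fitter half survives and breeds the next generation.
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                 for _ in range(10)]
    population = parents + offspring

print(max(fitness(c) for c in population))  # converges toward len(TARGET)
```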

But the general area of adaptation is far broader than that. It can involve, for instance, the many techniques of machine learning, a huge field of study with many powerful methods and capabilities that have nothing to do with evolutionary computation. It can also involve all kinds of techniques for knowledge representation, fuzzy logic, probabilistic reasoning, and biology-inspired approaches other than EC, like neural networks, with machine learning potentially underlying all of them.

I’ll answer it this way. Putting aside nitwits like Dreyfus and Searle who outright dismiss the idea of computationally-based intelligence and think it’s just “symbol manipulation”, I think most of us can agree with the following statement: that all intelligence, whether natural or artificial, is an emergent property of computational systems.

I’ll offer the opinion that consciousness is an emergent property of computational systems that becomes manifest beyond a critical threshold of capability.

All the adaptive methods I know of have some metric to measure whether they are going in the desired direction. I can kind of see one if you are trying to create intelligent systems. Perhaps you can measure the time required to solve certain problems. But I have no idea how you would do this for strong, self-aware AI.
We don’t even know if self-awareness predated or postdated speech. We think it is evolutionarily advantageous (though the jury is still out), but I wonder when it became so.

The mistake you’re making is that you keep telling everyone in this thread they are making a “mistake”.

You can’t just say “adaptive self-improvement” and be done. “Improvement” requires a classification of “better” versus “not-better”. It requires a method of identifying good results. You will never end up with “better” unless the system or a person knows what it means to be “better”, is able to identify it (not always easy), is able to retain it, and ensures that further iterations don’t stomp on the previous “better”.

So, let’s work through an example:
1 - You have a system, maybe a simulation, with things that initially have limited to no capabilities and knowledge.

2 - One of them, through some form of adaptation or evolution, acquires the ability to add a couple of numbers together internally.

How do you identify that this happened?
How do you create a system that knows this should be retained?
How do you prevent further iterations from stomping on that attribute?
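The only way I know how to answer those three questions today is to bake the answers in by hand: an explicit, pre-programmed test for the attribute, a designer-supplied definition of “better”, and elitism so later iterations can’t clobber it. A toy sketch of what I mean (every name and number here is hypothetical):

```python
import random

# The "detection" problem: we can only spot the new capability because we
# already decided, in advance, that addition is the thing worth testing for.

def can_add(agent):
    # Behavioral probe: agent is assumed to be callable as agent(a, b).
    trials = [(random.randint(0, 99), random.randint(0, 99)) for _ in range(20)]
    return all(agent(a, b) == a + b for a, b in trials)

def score(agent):
    # Designer-supplied notion of "better": without this, the new
    # attribute is invisible to the system itself.
    return 1 if can_add(agent) else 0

def next_generation(population, make_variant):
    # Elitism: the top scorers are carried over unchanged, so further
    # iterations can't stomp on an agent that already has the attribute.
    ranked = sorted(population, key=score, reverse=True)
    elite = ranked[:2]
    rest = [make_variant(agent) for agent in ranked[2:]]
    return elite + rest
```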

I frequently get the impression that you just haven’t read what I’ve written. I feel like we’re back in the mental imagery discussion. You seem stuck on the techniques modeled on biological evolution even though they have virtually nothing to do with the theory and techniques of machine learning.

It goes without saying that there are necessary preconditions for learning to be successful, which include some baseline of existing capability, appropriate knowledge representations, and feedback to evaluate success against clear objectives. Hypothesis evaluation is certainly a basic principle in learning theory, so to that extent your point is not entirely without merit. But it’s still largely irrelevant, because in practice we can usually do that evaluation effectively – that is, we can quickly arrive at statistically-derived conclusions about whether some learned hypothesis is approximately correct (an important concept in computational learning theory) and is therefore capable of the desired behavior within the necessary statistical bounds.
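To illustrate what I mean by statistically-derived conclusions, here’s the kind of hold-out evaluation with a simple Hoeffding-style confidence bound that’s routine in practice; the function names and thresholds are purely illustrative, not taken from any specific system:

```python
import math

def holdout_error(hypothesis, test_set):
    # Empirical error of a learned hypothesis on held-out (input, label) pairs.
    mistakes = sum(1 for x, y in test_set if hypothesis(x) != y)
    return mistakes / len(test_set)

def error_upper_bound(empirical_error, n, delta=0.05):
    # Hoeffding bound: with probability at least 1 - delta, the true error
    # exceeds the empirical error by no more than this margin.
    return empirical_error + math.sqrt(math.log(1 / delta) / (2 * n))

def acceptable(hypothesis, test_set, target_error=0.05):
    # Accept the hypothesis only if we are statistically confident that it
    # delivers the desired behavior within the required error bound.
    empirical = holdout_error(hypothesis, test_set)
    return error_upper_bound(empirical, len(test_set)) <= target_error
```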

It’s also absolutely false to claim that these kinds of learning strategies are somehow just as difficult to program as the solutions themselves. On the contrary, they are relatively well-bounded domains that can solve problems that can’t be solved any other way, for the simple reason that we often can’t anticipate all the possible situations a problem-solving system might encounter, either now or under changed future conditions; or, as often happens, we might have no idea how to solve the problem ourselves, or even have a good understanding of what the problem is. All of which I’ve said before, apparently to no avail.

Let’s look at Watson again for an illustration. Most of the conceptual components of DeepQA have many possible approaches and implementations, with differing levels of success in different circumstances. One of the most important design principles in DeepQA is that, rather than trying to design optimal general implementations, it uses all of them, running simultaneously on massively parallel hardware, and then uses sophisticated learning strategies to develop its skill at deciding which of the many solution candidates is most likely to be correct. Quoting from “A framework for merging and ranking of answers in DeepQA” (Gondek et al., 2012, IBM JRD):

Crafting successful strategies for resolving thousands of answer scores into a final ranking would be difficult, if not impossible, to optimize by hand; hence, DeepQA instead uses machine learning to train over existing questions and their correct answers.
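To give a flavor of the general idea (a toy sketch of learned answer ranking, emphatically not DeepQA’s actual code): each candidate answer carries a vector of scores from the many scorers, and a simple trained model, here a logistic regression, learns how to weigh them into a final confidence.

```python
# Toy sketch of learned answer ranking in general (not IBM's implementation):
# each candidate answer has a vector of scores from independent scorers, and a
# trained model learns how to combine those scores into a confidence.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: scorer outputs per candidate, 1 = correct answer.
X_train = [
    [0.9, 0.2, 0.7],
    [0.1, 0.8, 0.3],
    [0.8, 0.7, 0.9],
    [0.2, 0.1, 0.2],
]
y_train = [1, 0, 1, 0]

ranker = LogisticRegression().fit(X_train, y_train)

def rank_candidates(candidates):
    # candidates: list of (answer_text, [scorer outputs]); best guess first.
    scored = [(answer, ranker.predict_proba([features])[0][1])
              for answer, features in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```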
Likewise, Watson’s game-play strategy is substantially learned. For instance, it improves its betting strategy through reinforcement learning techniques that train a Game State Evaluator implemented as an artificial neural net called a “multi-layer perceptron”. Essentially, in the absence of a formal model of the game, Watson learns from experience, building an internal model that guides its strategies, which eventually become superhuman. That word comes directly from the pertinent research paper, and simply means that, with respect to its learned game-play strategy, Watson exhibits a level of optimization that no human would likely be capable of.
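And for a rough sense of what a learned game-state evaluator looks like in general (again, my own minimal sketch with made-up layer sizes and features, not the actual Watson implementation): a small multi-layer perceptron maps a numeric description of the game state to an estimated value, and its weights get nudged toward what actually happened.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny multi-layer perceptron: 4 made-up game-state features in, one
# estimated value out. Layer sizes and learning rate are arbitrary.
W1, b1 = rng.normal(size=(8, 4)) * 0.1, np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)) * 0.1, np.zeros(1)

def evaluate(state):
    # state: length-4 vector of numeric features describing the game situation.
    hidden = np.tanh(W1 @ state + b1)
    return float((W2 @ hidden + b2)[0])

def learn(state, observed_value, lr=0.01):
    # Nudge the weights so the evaluation moves toward the observed outcome,
    # the basic idea behind learning a value function from experience.
    global W1, b1, W2, b2
    hidden = np.tanh(W1 @ state + b1)
    error = (W2 @ hidden + b2) - observed_value        # shape (1,)
    grad_hidden = (W2.T @ error) * (1 - hidden ** 2)   # backprop through tanh
    W2 -= lr * np.outer(error, hidden)
    b2 -= lr * error
    W1 -= lr * np.outer(grad_hidden, state)
    b1 -= lr * grad_hidden
```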

You mean the one in which you were confused despite my repeated explicit explanations?

During communication things can get misinterpreted, so we clarify. In that case you focused on the first portion of a sentence when my focus was on the second portion. That type of stuff happens.

The trick is to read the person’s explanation and move on. In that case you still don’t seem to have grasped my clarification (and you never responded to my last post that made it crystal clear again).

You keep repeating vague, buzzword-like phrases. Be more specific to support your argument.

You mentioned that, theoretically, a system can “self-improve”; I mentioned that it’s tricky (and listed specific reasons).

If you have an explicit strategy you can describe that would help. I’m familiar with all of the buzzwords and use many of those techniques so I don’t need any more of that.

For example:
Unless a system is aware of (i.e., programmed to be looking for) the capability of addition, how would a self-improving system that gained that capability be aware that it is a valuable attribute?

Note: addition is just a simple, concrete example; in the progression to human-level intelligence (and beyond) there are more abstract, higher-level capabilities that are even tougher to detect.

Do you realize that you are describing this “multi-layer perceptron” to a person who has used them for many, many years in an actual brain/artificial-life simulation?
On with the discussion:
The key issue is how do you create a system that can detect that a new attribute is good?
Do you think that it doesn’t need to detect it, it can just start using it and move on to the next step?
If so, how can you be sure that those steps lead to “improvement”? They could be a dead end.

I mean the one in which a number of us, notably myself and njtt, who I believe is engaged in related areas of research, spent several pages trying to correct your basic misunderstanding of what a prominent cognitive science researcher had said and your misconceptions about mental imagery. I think we succeeded in correcting you on the former, not sure if we ever persuaded you on the latter. Let’s leave it at that.

It’s a pretty generic term in neural networks that goes back to the late 50s, but I’m afraid that your experience in what appear to be biological simulations has really blinded you to major aspects of computational intelligence that are completely different and completely unrelated. (Just want to add: I don’t mean that the neural network concept is different or unrelated, just all the other research areas of AI that have nothing to do with it.)

What discussion? You’ve declined to quote or address any of the responses I already gave to those questions. In short: in real-world machine learning, there are clear objectives and statistically-based performance metrics that are used to evaluate results until a required level of capability is achieved. Even shorter: it works. It does things we couldn’t do in any other way. Or we wouldn’t be doing it. See my previous post. All the things you didn’t respond to.
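In skeleton form, that everyday pattern is nothing exotic; the method names and threshold below are placeholders rather than any particular library’s API:

```python
# Hypothetical skeleton of the everyday pattern: a clear objective, a held-out
# metric, and iteration until the required level of capability is reached.
REQUIRED_ACCURACY = 0.95   # the "clear objective", chosen by the designers
MAX_ROUNDS = 100

def train_until_capable(model, train_data, validation_data):
    for round_number in range(MAX_ROUNDS):
        model.train_one_round(train_data)            # any learning method at all
        accuracy = model.evaluate(validation_data)   # statistical performance metric
        if accuracy >= REQUIRED_ACCURACY:
            return model, accuracy                   # required capability achieved
    raise RuntimeError("Required capability not reached; rethink the approach.")
```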

This really has nothing to do with daunting questions about the long-term evolution of self-aware artificial intelligence. That’s intelligence operating at a completely different order of magnitude, and there are no easy answers to the broad question of how such systems can be beneficially directed – which is precisely why people like Musk and Gates have raised these concerns, and which gets us back to the subject of the OP.