Unspoken ifs? Really?
Let’s briefly skim my post…
“Every discussion about AI is built on the first big if.” “All of this is still built on a very big if.” “Of course, we don’t know if we have the tools to start the evolutionary process you talk about.” “Assuming the evolutionary process works – and that’s a big if – it will continue to work well past human intelligence.” “If we can develop a human-level intelligence in human time frames, rather than the geological time frames that it took to develop us,” “I’m skeptical we’ll crack this nut anytime soon, but…”
The word you claim to be “unspoken” appears at least 15 times in my post. I’ve had people ignore what I say before, but you probably just broke the record.
Here are your own words: “I do think we will eventually create AI’s of human or slightly above human intelligence.”
You are complaining about “unspoken ifs” that not only were spoken, repeatedly, but in some cases are points you fully agree with. You pounce so eagerly that you end up attacking your own arguments.
At least you’re right that Blake’s post was good and on target.
Good argument. I don’t disagree.
My post was obviously unclear, but I was trying to talk about a “theory of general intelligence”: that is, an understanding of the underlying principles by which even the most basic machine intelligence might be engineered. I don’t mean a full understanding of its own current processes, which, as you say, wouldn’t work. To jump off another point you raised:
Great analogy. Let’s shift from animals back to computers, back into the world of big ifs. Suppose we have a general intelligence, Version A. From Version A and some evolutionary process, a Version B is eventually developed that is similar but noticeably better.
Version B wouldn’t be able to tease out the difference between itself and Version A. But time passes and new versions are developed. The gains might come more from hardware than from software, but however it happens, improvements are made. Given enough time, a Version Z would be developed, the beneficiary of many long years of hardware advancement, and with enough processing power it could tease out the underlying reasons why Version B’s software is superior to A’s. Which is to say: it is fully plausible that Version Z could engineer an intelligence that runs on Version B’s original hardware but is more efficient than Version B. Call it Version B-Prime. Version Z would be a machine intelligence that could engineer new machine intelligences from scratch.
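To make the A-to-B step concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration: the function name evolve_one_version, the bit-string “versions”, and the stand-in fitness score (just counting 1-bits, because a real fitness function for intelligence is the entire unsolved problem). The only point is that a blind evolutionary loop hands you a better Version B without ever handing you a theory of why it’s better; teasing that out is the separate, much harder job I’m assigning to Version Z.

```python
import random

def evolve_one_version(version, fitness, tries=500, seed=0):
    """Blind evolutionary search: repeatedly mutate the current version and
    keep any mutant that scores higher. The loop never produces an
    explanation of why the winner is better, only the winner itself."""
    rng = random.Random(seed)
    best = list(version)
    for _ in range(tries):
        mutant = list(best)
        i = rng.randrange(len(mutant))
        mutant[i] ^= 1  # flip one bit
        if fitness(mutant) > fitness(best):
            best = mutant
    return best

# Stand-in "intelligence" score: just the number of 1-bits.
score = sum

version_a = [0] * 32
version_b = evolve_one_version(version_a, score)
print("Version A score:", score(version_a))  # 0
print("Version B score:", score(version_b))  # noticeably better
```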
This is what I mean by “understanding itself”. Not full understanding of its own current processes – which would be impossible, as you rightly note – but understanding the basics of machine intelligence, the family to which it belongs. It would be able to decipher the principles that underlie the differences between Version A and Version B, and the differences between them and their predecessors.
We might even imagine that the primary difference between Version Z and Version B is the hardware they run on, more than the software. That’s another if, and maybe a big one, but it’s not unreasonable. We can’t link ten thousand human brains together to understand the difference between a single human brain and a cat brain… but computers might manage exactly that. So if Version B-Prime were given the same hardware that Version Z runs on, it might be an outright improvement in efficiency. Hardware improvements that are powerful enough might conceivably lead the way to software improvements, which would in turn make subsequent hardware improvements even easier and more likely. We don’t know the limits of this process, so it’s easy for our imaginations to run wild and visualize it continuing strongly for some time. This is where I think the “singularity” people mostly land.
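Here is an equally toy sketch of that feedback loop, again in Python. The function name feedback_loop and the gain parameters hw_gain and sw_gain are made up, and I’m not defending any of the numbers; the only point is the compounding shape of the curve when hardware gains make software gains easier and vice versa.

```python
def feedback_loop(steps=12, hw_gain=0.15, sw_gain=0.10):
    """Toy feedback model: each round, hardware improves in proportion to
    current overall capability, and software improves in proportion to the
    new hardware. Every number here is invented; only the compounding
    shape of the curve is the point."""
    hardware, software = 1.0, 1.0
    for step in range(steps + 1):
        capability = hardware * software
        print(f"step {step:2d}: hardware {hardware:7.2f}  "
              f"software {software:7.2f}  capability {capability:10.2f}")
        # Better overall capability buys a bigger hardware step...
        hardware *= 1 + hw_gain * capability ** 0.5
        # ...and better hardware makes the next software gain easier.
        software *= 1 + sw_gain * hardware ** 0.5

feedback_loop()
```

Turn hw_gain and sw_gain down and the curve flattens out quickly, which is exactly the “we don’t know the limits” point.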
Obviously, I don’t see this sort of self-reinforcing process as inevitable. But it does have a certain plausibility. At the very least, I think it’s fully defensible that eventually a Version Z could be created that understands Version B, even if nothing practical could come of that understanding. And I do think that sort of insight qualifies as a form of “understanding itself”, especially if the main difference between the versions is massive hardware change rather than software.