Computer Singularity by 2045?

First of all, a geometric curve is an exponential curve. Did you mean to say “polynomial”? Second, what absolute measure do you propose to use? I can put that “slow region” anywhere on the curve I like, just by picking an appropriate line to compare to.

Yes, I meant “polynomial” there.

What absolute measure? It depends on the context. For example, if we are talking about a fixed-APY bank account (which follows exponential growth, ignoring inflation), then if the interest I am earning is less than a dollar per year, we can safely say my account is in the “slow” region of the growth curve. The interest I am making is negligible relative to the minimum cost of living, and it will take longer than my expected lifetime for the interest to compound to the point where it is on the same order of magnitude as the minimum cost of living. If, on the other hand, we wait a few hundred years, we reach the “fast” region, where the interest alone is not only above the minimum cost of living, but so high that each yearly compounding raises the interest by more than the minimum cost of living. In other words, in this context, the transition between the “slow” and “fast” regions is where the rate of change of the exponential is of the same order of magnitude as the minimum cost of living.
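To make that concrete, here is a minimal sketch that finds when the yearly interest first reaches the cost of living; the principal, APY, and cost-of-living figures are made-up numbers for illustration:

```python
# Toy illustration of the "slow" vs. "fast" regions of exponential growth.
# The numbers below (principal, APY, cost of living) are made up for illustration.

principal = 1_000.0        # starting balance in dollars
apy = 0.05                 # 5% fixed annual yield
cost_of_living = 20_000.0  # assumed minimum yearly cost of living

balance = principal
year = 0
while balance * apy < cost_of_living:
    balance *= 1 + apy     # compound once per year
    year += 1

print(f"Yearly interest first reaches the cost of living after {year} years "
      f"(balance ~ ${balance:,.0f}).")
```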

Every example has an analogous absolute measure, even technology. Do you deny, for example, that the period between, say, 10000 BC and 5000 BC has a rate of technological development that is slow in an absolute sense, relative to the time scale of the human life span? The doubling rate may always be the same for an exponential, but if you are starting out with near nothing, then there is near nothing to double…

Nitpicks on curves… Debate drifting, does not compute.
Well, I have to say that I’m more in favor of people like Jeff Hawkins (the reason I agree most with him is that he has experience on both the technological and the biological sides of the attempt at creating AI):

I think we will reach something that we would call a singularity, but it will not explode on us, it will take centuries to develop.

Just read through the thread, here are some comments:

Simulation
What does it mean to “have” or “achieve” AI? To me it means not only that we have something that can perform the functions the human brain performs, but also that we understand its construction well enough to improve it and/or use all or parts of it as a tool.

If we merely simulate an existing brain, that would be impressive, and it could eventually run at a higher speed than the human brain, but it still feels like we haven’t cracked the actual problem. It’s a step towards being able to build and control intelligence, but not the final goal.

Regarding practicality of simulation:

  1. It must interact with its environment like we do - our brains are not isolated - which increases the requirements of the simulation, but that is not a deal breaker.
  2. You must be able to ACCURATELY simulate all chemical, electrical and magnetic interactions that are internal and external to the neurons, as they have been shown to impact cognition. iamnotbatman seems to think this will not be a problem with near-infinite resources - but near-infinite resources are not what we will have and not what is predicted.

Worm Brain
300 neurons completely mapped for something like 20 years, and they’re still trying to figure that one out.

Blue Brain
It’s interesting, but it is so far from accurately describing what is going on with neurons that it’s not reasonable to call it an accurate model. And they probably wouldn’t make that claim anyway.

Poster Hellestal and Simulated Evolution of the Brain
I wonder under what specific conditions in an environment intelligence is a superior solution to all others. In other words, how can we be sure that the level of intelligence we desire will evolve? Do we know what those conditions are, and have we set them correctly in the simulation?

My Opinion
We are slowly making progress in a couple of areas:

  1. How the components of the brain operate
  2. How some portions of the brain are structured to solve specific problems
  3. Function approximation using various techniques (NNs, recurrent NNs, higher-order NNs, SVMs, etc.) - these techniques tend to give better results at mapping input to output for the types of problems humans deal with daily (pattern recognition) than the earlier heuristic and symbolic approaches to AI (see the sketch after this list).
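As a crude illustration of that third point, here is a sketch of function approximation with a small neural network, assuming numpy and scikit-learn are installed (the target function and network size are arbitrary choices for the example):

```python
# A toy illustration of "function approximation": fit a small neural network
# to noisy samples of a nonlinear function, then compare its predictions to
# the true values. Assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))                    # inputs
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(500)   # noisy target function

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(X, y)

X_test = np.linspace(-3, 3, 7).reshape(-1, 1)
print(np.round(net.predict(X_test), 2))       # network's approximation of sin(x)
print(np.round(np.sin(X_test).ravel(), 2))    # true values for comparison
```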

I think we will get there eventually, but not by 2045; there is just too much to learn.

Yes, but I think I detect the assumption that in those 20 years the tools used to map those things have remained the same.

I think many are ignoring that it is not only computer power that is increasing, but also the power of the tools used.

IMHO there will be no “explosion” or an event after which it would be impossible to predict what will take place, but if we are talking about really building intelligent machines? I think we will absolutely get there. After looking at the current research out there, I have to say that we will get there sooner than 2045. But I disagree with Kurzweil on the part about our becoming immortal; there will be no uploading or swapping of minds.

But the mapping was done a long time ago. Mapping is not the problem.

The problem is understanding how the interaction of those neurons results in a control mechanism for the worm.

We’ve (we as in humans) had 20 years to do the math and figure out how it is slicing and dicing the input data to produce a successful worm control system, and it’s not easy. Although I did read recently about some success somebody had with reproducing the worm’s chemical-gradient navigation system.
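For what it’s worth, the basic control idea behind chemical-gradient navigation can be sketched in a few lines; this is a textbook-style biased-random-walk toy, not the worm’s actual circuit or anyone’s published model:

```python
# Toy biased-random-walk chemotaxis: keep heading when the sensed concentration
# is improving, turn more often when it is getting worse. Purely illustrative;
# not a model of the real worm's nervous system.
import math, random

random.seed(1)
source = (0.0, 0.0)
concentration = lambda x, y: -math.hypot(x - source[0], y - source[1])

x, y, heading = 10.0, 10.0, 0.0
last_c = concentration(x, y)
for step in range(300):
    x += 0.2 * math.cos(heading)
    y += 0.2 * math.sin(heading)
    c = concentration(x, y)
    turn_prob = 0.05 if c > last_c else 0.5   # turn more when things get worse
    if random.random() < turn_prob:
        heading += random.uniform(-math.pi / 2, math.pi / 2)
    last_c = c

print(f"final distance to source: {math.hypot(x, y):.2f} "
      f"(started at {math.hypot(10.0, 10.0):.2f})")
```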

Which research are you seeing that leads you to believe we will get there sooner than 2045? I realize it’s all a guess at this point, each of us looking at little chunks of a boatload of research and then extrapolating from there, but I’m curious about your thought process.

It is based on the research by Hawkins and others regarding Hierarchical Temporal Memory, including the HTM Cortical Learning Algorithms.

http://www.numenta.com/htm-overview/education.php

Whoops, those are the papers and very technical stuff (recommended, though, for advanced readers who want to learn more). One good place to start is the TED talk that he gave:

http://www.ted.com/talks/jeff_hawkins_on_how_brain_science_will_change_computing.html

Basically, all previous attempts at AI had the problem that there was no good theory of how the brain works with regard to intelligence. Hawkins is, IMHO, presenting a good theory that is allowing him and others to make progress.

Transcript in a link on the page.
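To give a flavor of what the CLA papers describe, here is a toy, heavily simplified sketch of the spatial-pooling step (global inhibition only, no boosting; my own illustration rather than Numenta’s code, and it assumes numpy is installed):

```python
# Toy spatial pooler in the spirit of the HTM CLA whitepaper (heavily simplified):
# each column overlaps a binary input through its "connected" synapses, the top-k
# columns by overlap become active, and the winners' synapse permanences are
# nudged toward the current input (Hebbian-style learning).
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_columns, k_active = 64, 32, 4
permanence = rng.uniform(0.0, 1.0, size=(n_columns, n_inputs))
CONNECTED, INC, DEC = 0.5, 0.05, 0.02

def spatial_pool(input_bits, learn=True):
    connected = permanence >= CONNECTED
    overlap = connected @ input_bits             # active connected synapses per column
    active = np.argsort(overlap)[-k_active:]     # global inhibition: top-k columns win
    if learn:
        for col in active:
            permanence[col] += np.where(input_bits == 1, INC, -DEC)
        np.clip(permanence, 0.0, 1.0, out=permanence)
    return active

x = (rng.random(n_inputs) < 0.2).astype(int)     # a sparse binary input
print("active columns:", sorted(spatial_pool(x)))
```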

Whether a simulated intelligence counts as AI will be a matter of opinion. But it will be vitally important for a few reasons. First, it will show that there is no mystic googoo stuff involved with intelligence, and that will have as big an impact on the world as evolution, if not more. Second, we can instrument the simulation and see how consciousness actually works. Third, we can modify the simulation and see the impact. This will be how we get real understanding.

When we build the inputs to the brain out of hardware - and they will be digital, of course - creating simulated inputs will be trivial, and necessary for controlled debugging. Converting real stimuli to whatever format is required will also be fairly easy - we do it all the time.
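For example, the conversion step itself is just sampling and quantizing; a bare-bones sketch (the sample rate and bit depth are arbitrary):

```python
# Bare-bones sample-and-quantize of an "analog" stimulus, the kind of conversion
# we already do routinely for audio, video, etc. Rates and bit depth are arbitrary.
import math

sample_rate = 1_000          # samples per second
bits = 8                     # quantizer resolution
levels = 2 ** bits

signal = lambda t: math.sin(2 * math.pi * 50 * t)   # stand-in "real" stimulus

samples = []
for n in range(10):
    t = n / sample_rate
    v = signal(t)                                   # sample
    q = round((v + 1) / 2 * (levels - 1))           # quantize [-1, 1] -> 0..255
    samples.append(q)

print(samples)
```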

That remains to be seen. You can often remove all sorts of factors and still get a good result. Clearly the chemistry of the brain and body affects our thoughts - but is simulating them exactly necessary for the simulation to have thoughts? The state of my stomach affects my thoughts, but it is not a necessary component of them.

I remember reading about HTM a few years ago but had forgotten about it.

The problem is that there don’t appear to be any scientific research results. It’s nice to have a theory (and there are lots of them floating around), but the real progress is made with science and math, and their website and that paper are pretty vague. I do see people commenting on forums that HTM has had good results in some specific areas like vision, but I also read from others that it didn’t match the 3D rotated object recognition that has been achieved with other methods.

Could be valuable, or maybe not, but probably not something to get too excited about until real results that exceed other methods are published.

:confused:

At the bottom of the page there is independent research confirming what Hawkins and others propose:

BTW, now it’s your turn: what other methods are **more** effective? Can you point to the published papers and research?

Agreed, it would be valuable.

I don’t think anything will be easy, even just converting stimuli for all of our senses, internal workings, etc. - there are far more influences than we are probably even aware of (for example, mouse gut bacteria have been shown to influence cognition, which could be true in humans as well).

Well, I guess you could have a simulation that did not operate exactly like the thing being simulated, but it would still be far closer than a random bunch of neurons. I’m not sure where you draw the line between valuable and not quite valuable, and I’m not sure to what extent we are talking about dropping out functions of the components of the brain. Maybe the exact electrical field is hugely important, or maybe we can be off by a large amount at any given point; again, not sure where to draw the line.

I didn’t notice those at the bottom of the page, I had googled for “numenta htm research” instead and poked around, there wasn’t much when I did that.

I read the first link at the bottom of that page and these are my thoughts:

  1. How did ANNs and SVMs perform against the same data set? That appears to be a straightforward visual classification study, but there is no comparison to other machine learning methods that have been around for 20 years.
  2. It’s unclear to me what the “temporal” component of that particular experiment was, yet the researchers seemed to mention it a few times. Not sure what is going on there.

Regarding “more” effective:
Various types of ANNs and SVMs have been used for a long time - I would need to see direct comparisons of HTM to these others to say which is better.
Either way - ANNs, SVMs, HTM - all are function approximation methods that are good for classification/pattern matching. But that’s just the foundation - none of them is a model of the specific circuits/functions of the human brain, just tools. Lots more work to do.
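For what it’s worth, that kind of same-data comparison is straightforward to set up once both methods run under the same protocol; a minimal sketch assuming scikit-learn, with its bundled digits dataset standing in for whatever benchmark a real study would use:

```python
# Compare two function-approximation methods on the same dataset with the same
# protocol (5-fold cross-validation). Assumes scikit-learn is installed; the
# digits dataset is only a stand-in for a real benchmark.
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
models = {
    "SVM (RBF kernel)": SVC(),
    "Small neural net": MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000,
                                      random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```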

I tend to agree with **Voyager** on this one. Converting real stimuli to digital and applying it to a system in real time is something that is being done in the lab now, sometimes at mind-boggling speeds.

That one does employ visual and tactile stimuli.

I think we are imposing too many hypothetical restrictions; I would rather do the experiments and then check whether we are indeed missing something in the simulations and actual devices.

Uh, that doesn’t help much. Are you really aware of where current research is happening? Can you point me or others to the sources of the research you mention?

And I can find more recent papers and research referencing the theories of Hawkins:

Model-Free Learning from Demonstration
Erik A. Billing, Thomas Hellström and Lars-Erik Janlert

And

Object Identification in Dynamic Images Based on the Memory-Prediction Theory of Brain Function
Marek Bundzel, Shuji Hashimoto
Shuji Hashimoto Laboratory, Graduate School of

Among others. As the last paper mentions, there are still improvements to be made to what Hawkins proposes, but it is a good way forward.

My point was primarily that the brain has a large number of inputs from stimuli and they are important and non-trivial to reproduce. That’s all.

We know that changes in brain chemistry alter cognition. For example, changes in omega-3 levels have recently been shown to alter synaptic function. These types of things can’t be ignored (you could end up with a depressed robot :slight_smile: ).

I know a little bit about this subject as a practicing computer scientist who has been recently engaged to create an AI (really an expert system, but don’t tell my CEO because he LOVES the idea of promoting an AI), and perhaps I should toss in my two bits here.

First, there is a genuine physical limit to how fast electronic and photonic systems can process data, and unless such limits are breached (by some Star-Trek-like FTL magic) we will not be able to go beyond them.
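To put a rough number on that: even a signal moving at the speed of light covers only a limited distance per clock cycle, which bounds how large a synchronous machine at a given clock rate can be. A back-of-the-envelope calculation (the 3 GHz figure is just an example):

```python
# Back-of-the-envelope: distance light travels in one clock cycle.
c = 3.0e8          # speed of light, m/s
clock = 3.0e9      # example clock rate, Hz (3 GHz)

print(f"{c / clock * 100:.1f} cm per cycle")   # ~10 cm: a signal cannot cross a
                                               # large machine within one cycle
```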

Second, even now, in terms of connectivity, memory and processing speed, there are many computers that far exceed the physical capabilities of a human brain. Obviously, the hardware side of the equation is not the barrier to hard AI.

Third, I propose that the first AI will come from studies of simple neural networks in animals which will then be modeled in complex massively parallel computers. One might imagine a sort of evolution within the modeled ‘minds’ that will eventually become complex enough to evince what we might call a ‘personality’. How long this will take is anyone’s guess, but when it does it almost certainly will not spell the end of humanity as we know it. In my opinion, long before that, some folks will have enhanced their sensoria and augmented their memories with implants and virtually instant access to whatever Internetwork that develops from what we now call The Internet.
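At its crudest, such “evolution within the modeled minds” could look like the usual mutate-and-select loop; here is a toy sketch (the bit-string fitness is only a placeholder for however the modeled minds would actually be scored):

```python
# Toy mutate-and-select loop: a stand-in for "evolution within modeled minds".
# The bit-string target is a placeholder fitness; real work would instead score
# the behavior of simulated neural controllers.
import random

random.seed(0)
TARGET = [1] * 32                                   # placeholder "ideal" genome
fitness = lambda genome: sum(g == t for g, t in zip(genome, TARGET))

population = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)      # select the fittest...
    parents = population[:10]
    population = parents + [                        # ...and refill with mutated copies
        [bit ^ (random.random() < 0.02) for bit in random.choice(parents)]
        for _ in range(40)
    ]
    if max(fitness(g) for g in population) == len(TARGET):
        break

best = max(population, key=fitness)
print(f"best fitness {fitness(best)}/{len(TARGET)} after {generation + 1} generations")
```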

Finally, though I am not a distinguished nor an old scientist, I would remind the forum of the following: “When a distinguished but elderly scientist states that something is possible, he is almost certainly right; when he states that something is impossible, he is probably wrong.” — Arthur C. Clarke (Clarke’s First Law)
ETA: Heh heh heh. I see that much of what I suggested has already been proposed by others. I really shouldn’t start a post, then work for several hours, finish the post and then review what everyone else has posted. Bad Gagundathar! No biscuit!

I read articles about research frequently, but I typically arrive at them through one of two methods:

  1. Articles linked to on machineslikeus.com
  2. Googling for specific topics with the term “research” or “study” included (high order neural networks, recurrent neural networks, support vector machines, machine learning, etc.)

So if I were to point you to anything, it would just be Google, using the methods I typically use. For example, “research object recognition 2010”.

Probably not what you were looking for, but that’s how I typically find these things. This is fun stuff and I spend some time with it, but it’s not what I do for a living so I can’t give you a quick recap of the current state of research in any specific field.

It may be great, but most of the things I read about a method that is “better” than other methods also test the performance of those other methods in the exact same study and then compare and contrast the results. Maybe this one you linked to does that, not sure.

Still, what you are mentioning is an artificial restriction on the way to making an artificial intelligence. Eventually, with experimentation, we should be able to find the alterations that would have to be made to an average artificial intelligence in order to simulate, and find the reasons for, things like depression.

And yes, I’m on record that many fear we will get “Colossus” from The Forbin Project, when we will likely get “Marvin” from The Hitchhiker’s Guide to the Galaxy.