Computer Singularity by 2045?

My point was that a simulation of the human brain can’t be as simplistic as the current Blue Brain model (for example); it must take into account all of the chemical, electrical, and magnetic interactions to properly simulate a human brain.

If you don’t properly simulate it then, IMO, you will not have a functioning intelligence, in the same way that even far smaller changes cause humans today to not operate properly.

I think we can arrive at AI through some other method that isn’t trying to simulate the brain’s details but rather the brain’s functions at a higher level. But if you go the route of simulation, then you can’t just do 90% of it and ignore the last 10% of physical activity, because that last 10% could make or break the simulation.

Well, I have to admit I was also checking whether you had some background in the field. I’m coming from the technological side of things and have researched a lot for a hard sci-fi series I’m developing (emphasis on hard), though not much on the biological side. Alas, I have to say I was expecting that you had experience in the subject.

Really, telling others to just search with Google does not quite work, even for people experienced with searches like me, when trying to find the best references on the subject. (I had checked and read the papers I mentioned before, BTW.)

What I got from the independent papers I read was that the system proposed by Hawkins works.

It is not very good for systems that perform specific tasks, but great for general-purpose intelligent machines.

As Hawkins is also aware of the separation between the neocortex and other levels of the brain, I think his theories have more basis in how the brain works compared to what other solutions have proposed.

I don’t know if you would call it a background, but for the last 5 or 6 years I have worked, as a hobby, on an artificial-life simulation that evolves neural networks, which is why I end up looking into the various articles. I spend a lot of time analyzing the problems involved and reading up on anything interesting.

One thing that I have noticed through all of this: math comes first. All of the stuff I do is fun, and could result in something interesting, maybe, possibly, but real progress comes from the underlying math.

Understood, but it depends on what you are looking for.

There is nobody with a model that can be used for human-level intelligence at this point; that is as far as anyone has gotten. So when you say “the subject”, which specific sub-subject of AI are you really asking about?

I would be surprised if anyone in the machine intelligence field would say there is anything great, or even good, for general-purpose intelligent machines anywhere (assuming we are using the same definition); we’re pretty far from that.

My personal take is that it’s not due to any unique insight into the hierarchical nature of the brain (that’s not news); it’s because most research is firmly grounded in the basics and their underlying math, since there is so much work to do before trying to pull it all together.

I would have thought it was clear that I was referring to the items you mentioned, like the types of ANNs and SVMs you looked at.

http://singularityhub.com/tag/jeff-hawkins/

Well, I think that if others can make products with his theories and formulas, then I would not say that there is no model or theory.

Well, I remember that when you mentioned depression as a joke, I said that a general-purpose intelligent machine based on brain theory would be a godsend for research on mental health issues. And I do think that systems like IBM’s Watson show that there would be uses for systems that are able to give answers to general queries.

And what I am saying is that the math is already here to run experiments and build products based on what people like Hawkins are proposing.

Well, I wasn’t sure exactly what you were looking for. There is so much detail, and it’s such a broad topic, that you can’t just point to “the research”; there are hundreds of thousands of papers on various uses of these techniques.

I think I am not making myself clear.

Yes, HTM has a model for classification, just like ANNs, SVMs, Bayesian statistics, etc. Those can all be used as classifiers, and the article you linked to is evidence of that.
But a classifier is not a human-level intelligence, in the same way that a hammer is not a house. You can’t get a human-level intelligence by simply adding 1 billion nodes to an ANN or SVM or HTM any more than you can get a house by dropping off 10,000 hammers at the job site.
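
To make the “classifier” point concrete, here is a minimal sketch (my own example, not from any of the papers discussed; it uses scikit-learn’s SVC, but any of these tools would fill the same slot):

```python
# All of these tools (ANNs, SVMs, HTM-as-classifier) reduce, at this
# level, to the same thing: a function that maps feature vectors to
# labels. Illustrative only; the dataset and model choice are arbitrary.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)  # train the classifier
print(clf.score(X_test, y_test))  # good accuracy on a narrow task,
# but it is still just a label-assigning function, not an intelligence.
```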

The “model” I am referring to is all of the specific circuitry in the human brain that solves the micro problems and the macro problems, puts them all together and ends up with a functioning brain.

HTM is no closer to that than ANNs or SVMs - they are all tools that may or may not end up in a final product in 50 or 100 years, but that is all they are at this point.

Watson may be valuable as a natural language interface to large data sets. Maybe.

Sure, you can make products, in the same way that ANNs and SVMs have been used for a couple of decades.

And if HTM can outperform other methods when it comes to matching patterns over time, then great, he has created something valuable.

:Sigh:, please - even someone who has done some work on this before should have at least an idea of which research facility or paper is the most important.

AFAIK Hawkins has never said that HTM is the end of the story; it is just a “tangible” part of his overall theory.

I do think that has been the case, especially when other scientists cite and use his theories. So, how about just mentioning which papers and products dealing with ANNs and SVMs **you** consider the most important?

*Artificial neural networks are mentioned by Hawkins; he worked on them and found them not to be a good tool for developing real AI.

No need to sigh, but there has been so much continuous research and real-life application of these technologies over the last 30 years that it’s hard to say what the most important paper is, and I’m not even properly qualified to pick one out if there were just one. In addition, I read these things, log the general gist (for example: high-order neural networks excel at image classification, but are computationally too expensive to be practical for any decent-sized image), and then move on.
This is a site with a bunch of current stuff regarding ANNs and vision

This is a site with a bunch of SVM stuff

If you want a basic history, I can give you that:
’50s: Neural networks seem cool.
’60s: No they aren’t - they can’t compute XOR.
’70s: Wait, with 2 layers they can compute XOR (see the sketch after this list).
’80s: Lots of research.
’90s: Applications - image, finance, voice, anything needing pattern matching or classification.
2000s: SVMs are better because they were designed to be classifiers with training in mind - no local minima, etc.
Late 2000s: Lots of variations on ANNs, SVMs, recurrent networks, etc.
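
To back up the ’70s line above, here is a minimal sketch of a two-layer network computing XOR (weights picked by hand for illustration, not learned):

```python
# A single-layer perceptron cannot compute XOR, but two layers can.
# Hidden layer computes OR and AND; output computes OR AND NOT AND.
import numpy as np

def step(x):
    return (x > 0).astype(int)  # threshold activation

def xor_net(a, b):
    x = np.array([a, b])
    # Hidden units: one fires for OR (threshold 0.5), one for AND (1.5).
    hidden = step(np.array([[1, 1], [1, 1]]) @ x - np.array([0.5, 1.5]))
    # Output: OR minus AND, thresholded -> XOR.
    return step(np.array([1, -1]) @ hidden - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # prints 0, 1, 1, 0
```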

There are mountains of papers and a bunch of products, and I’m not even aware of most of them.

Having said that, to attempt to give you a concrete answer: I think medical diagnosis applications are probably the most important, in that they can beat a human diagnosis, which means lives are saved.

What is real AI? HTM doesn’t have real AI any more than ANNs, SVMs, expert systems, or anything else. Neural networks have pros and cons, just like SVMs, just like HTM, and just like everything else.

Thanks, will check them out.

Interestingly, the Juergen site has a page on citations and mentions that Jeff Hawkins has been cited in their articles.

If the simulation could pass a Turing Test, then the fact that the simulation begins to diverge from the brain being simulated isn’t important. Learning whether any of these other factors is essential, and how, would certainly be valuable. I wouldn’t expect this to work the first time; nothing does outside of TV.

If you have a simulation that could pass the Turing Test, then you would certainly have something valuable that could contribute to our understanding, and it may be useful in other ways that I’m not thinking of.

But I don’t think the simulation will pass the Turing Test unless all of those factors I mentioned are accounted for in the simulation. My point is basically that creating the simulation will, IMO, take more than an accurate map of the connections between neurons plus a gross simulation of the different neuron types.

I can’t put my finger on it, but I still get the impression that it sounds like you would be telling Wilbur and Orville Wright that they can’t be considered the inventors of the airplane because the propellers were on the back of their flying machine, or that they were ignoring many safety features. :slight_smile:

Not at all.

Wilbur and Orville did not try to fly by building a simulation of a bird. If that were what they were doing, my point would be equivalent to someone saying “don’t forget the feathers, it’s likely they play a critical role in flight”.

Wilbur and Orville were able to fly not by trying to simulate how a bird flies, but rather by using an alternate design that functionally accomplishes the same goal. I actually think we are more likely to achieve AI with this method than with human brain simulation. But that is just my opinion.

That was also what I thought you would suggest next, as a more pertinent recommendation. :slight_smile:

I think, though, that the approach has to be from multiple fronts; after all, there are many features of the brain that are automatic or reflexive in nature. Looking at the progress made with lightning-fast feedback systems (the “reptilian” or brain-stem part of human brains), one then has to add a system that simulates the neocortex.

The point here is that if the goal is to produce a system that will do specific work, then alternate designs will be the best approach; but if the purpose is more general, or we are making an AI meant to pass the Turing test, then ignoring how the brain works is not a good way to make progress.

I would be the last person to recommend ignoring the brain - nature has a working product, pretty hard to ignore that. And I agree progress will be made on multiple fronts.

(Speaking to the previous page a tad)

Even if, given exponential growth, we could simulate the brain at an atomic level - well, great! We have successfully proven that humans have a competent understanding of programming and the standard model of physics! Even if it produces a copied human that can do human things faster than a human can… it’s not really AI. Even if it is in product, it isn’t in essence, I would argue. We still don’t know, at a higher level, what makes something intelligent; we don’t understand the deeper principles. Yes, we technically understand the deepest principles of all, the physics behind it, but we ALREADY understand the physics behind it (for varying degrees of “understanding” - I know there is some debate about the specific physics of the brain), so if we can do that, we have officially gained zero extra knowledge.

That’s not to say it couldn’t be useful, couldn’t cause some sort of singularity-like event, or wouldn’t be kind of cool. I just think that it violates the spirit of AI: the idea of taking apart intelligence by studying organic behavior and brains, thinking up heuristics, and refining past principles and applying them to make a whole entity. It may be that by the time we make an intelligent entity, no one person will be able to understand the whole thing, and it will require many different, incredibly specialized scientists to produce; but even then, humanity as a whole would have a basic understanding of how intelligence works on some level.

Maybe that’s more of a philosophical argument than a scientific one, but that’s how the brute force approach seems to me. Even if it could work with literal infinite computing power, it completely violates the spirit of the concept.

I agree. It doesn’t solve the problem but may be useful as a tool to better understand what is going on.

To get maximum benefit, it seems like we need to understand and control intelligence in a way that we can make it “better” (if that is a reasonable term to use).

Ah, yes…there’s the phrase that is the bane of AI researchers the world over. :smiley:

I disagree, but we won’t know who is right until we do it, will we?
If the Wrights were building a simulation of a bird, and based it on a duck, it would fly even without the layer of feathers which keeps the duck dry, right? After intelligence evolved, we also got all sorts of add-ons to improve efficiency, or we might have features that evolved for some other reason and got mixed into our intelligence. If we were building a simulation of a person, we’d not need the appendix, even though it is there for a good reason and even if it somehow contributes to disease resistance.

Feathers was just a crude example for an analogy that someone else proposed.

But if the things that are ignored are things that determine the timing, rate, and strength of neuron firing, that is a far more significant problem for simulating the brain than leaving out the appendix is for the body. It (potentially) permeates every single firing in the brain, which, IMO, could be the difference between passing the Turing Test and just getting garbage out the other end.

If you have 10 billion functions, and you alter the input to all 10 billion functions, it seems to me you will end up with a wildly diverging net result. Again, just my opinion.
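
As a toy illustration of that claim (my own construction, not a brain model): chain a few hundred simple nonlinear functions together, nudge the input to every one of them by a tiny amount, and the end results diverge completely:

```python
# Each stage is a simple chaotic (logistic) map; the perturbation is a
# tiny nudge applied at every stage, as in the argument above.
def chain(perturbation, steps=200):
    x = 0.4
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)                    # one nonlinear "function"
        x = min(max(x + perturbation, 0.0), 1.0)   # alter its input slightly
    return x

print(chain(0.0))    # unperturbed chain
print(chain(1e-9))   # a one-in-a-billion nudge per stage gives a
                     # completely different final value
```
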
By the way, here’s an interesting article about neuron sensitivity to weak electrical fields that are found throughout the brain:

I think a lot of the issues with AI come up when we take a very discrete problem and beat it into the ground. I somewhat agree with:

[QUOTE=John McCarthy]
‘Chess is the Drosophila of artificial intelligence.’ However, computer chess has developed much as genetics might have if the geneticists had concentrated their efforts starting in 1910 on breeding racing Drosophila. We would have some science, but mainly we would have very fast fruit flies.
[/quote]

Now, just like the “brute forcing the human brain” thing, it’s not like I don’t think it’s useful. I certainly agree with:

[QUOTE=Drew McDermott]
Saying Deep Blue doesn’t really think about chess is like saying an airplane doesn’t really fly because it doesn’t flap its wings.
[/quote]

The point is, I agree with the full intent of the original quote: Drosophila are incredibly useful in biology, and chess can be incredibly useful in AI. The problem is, how many world-champion chess grandmasters are there in the world? It seems like we’re beating these specific problems into the ground. I guarantee that in any random sampling of people, you will find more people who can watch two cartoons from the same show and give a reasonable prediction of the type of antics they’ll see in the third one than you will find people who have absolute mastery over chess, or even people who know every possible literary trope that can occur in an episode.

AI isn’t playing chess or winning Jeopardy; these are TOOLS, but we get bogged down in making something that can win, not in the thought processes behind these tasks. Yes, we model adversarial searches and database lookup and uncertain reasoning, but we’ve missed some of the great underlying principles, such as: why can a person, given adequate preparation, play both Go and chess competently? I mean, we have a chess computer that can beat world champions… and a Go program that can be beaten by an 8-year-old. This isn’t the computer solving problems; this is people solving problems and then generalizing them for the computer. This isn’t wrong, by any means, but I think we need to step back and try to generalize MORE, find broader aspects of cognition, find a way to make the computer generate a heuristic for a given problem.
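
For what it’s worth, here is a minimal sketch of the kind of hand-built “adversarial search” machinery being described (plain minimax on a toy game tree I made up; nothing in it generalizes beyond the game it’s given):

```python
# Plain minimax: the maximizer and minimizer alternate, each assuming
# the other plays optimally. All the "intelligence" is in the hand-written
# move generator and evaluation function, not in the search itself.
def minimax(state, maximizing, get_moves, evaluate):
    moves = get_moves(state)
    if not moves:                         # leaf node: just score it
        return evaluate(state), None
    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for move, child in moves:
        score, _ = minimax(child, not maximizing, get_moves, evaluate)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

# A made-up two-ply game tree with scored leaves.
tree = {"root": [("L", "a"), ("R", "b")],
        "a": [("L", "a1"), ("R", "a2")],
        "b": [("L", "b1"), ("R", "b2")]}
leaves = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

print(minimax("root", True, lambda s: tree.get(s, []), leaves.get))
# -> (3, 'L'): the maximizer picks L, expecting the minimizer's best reply.
```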

There are other things that people can do that are amazing, such as seeing a couple of Wile E. Coyote cartoons and being able to discern in any further one, “Okay, he pushed a rock off the cliff; Coyote will likely get hit by the rock in some fashion.” Now, that specific case is hard to solve because the modelling of percepts (i.e., seeing and cataloging the images, compartmentalizing the various objects into what they represent, hearing the sounds) is as complicated as the reasoning itself or more so, but those kinds of problems - which damn near any mentally fit person can handle, and yet which present astounding insights into incredibly complex pattern recognition - are the kind that should be looked at critically. Hell, just figure out why a 5-year-old can look at a stick figure and realistically interpret it as a human being despite the two barely having any similarity. These are the kinds of things that are going to really advance AI in a meaningful way, not solving any specific problem like chess, Go, checkers, or Extreme Mountaintop Egg Pasteurization.

Easier said than done, I know. We’ve been banging our heads against the wall on natural language processing for years. It’s just that I think that’s a better direction than optimizing adversarial searches.