Computer Singularity by 2045?

I agree with your post again, Jragon; it’s similar to my own thoughts.

Forget all-encompassing, human-level capabilities; start small and work toward building something that learns dynamically, predicts, and solves problems within a limited environment (reducing complexity initially), but that has general skills, not skills specific to one game or situation.

I know some people have done this over the years with various methods, but I don’t know how general the problem-solving skills have been. If you changed the environment so that the problems and solutions were different, but still retained a simplified level of complexity (simple vision, maybe some built-in object recognition), and the intelligence was still effective, that would seem like real progress.
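To make that concrete, here is the kind of environment-agnostic learner I have in mind: a bare-bones tabular Q-learner. Everything below (the toy corridor world, the rewards, the parameter values) is made up purely for illustration; the point is that nothing in the learning rule refers to any one game.

```python
# A toy, environment-agnostic learner: tabular Q-learning. The same
# update rule works for any environment exposing reset() -> state and
# step(action) -> (state, reward, done); nothing here is game-specific.
import random
from collections import defaultdict

def q_learn(env, actions, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    q = defaultdict(float)              # (state, action) -> value estimate
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy: mostly exploit, sometimes explore.
            if random.random() < eps:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            nxt, reward, done = env.step(action)
            best_next = max(q[(nxt, a)] for a in actions)
            # Standard one-step Q-learning update.
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = nxt
    return q

class Corridor:
    """Toy 1-D world: start at 0, reward for reaching position 4."""
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):             # action is -1 (left) or +1 (right)
        self.pos = max(0, min(4, self.pos + action))
        done = self.pos == 4
        return self.pos, (1.0 if done else 0.0), done

q = q_learn(Corridor(), actions=[-1, +1])
```

Swap in any other environment exposing the same reset/step interface and the learner code is unchanged - that is the “general skills” property in miniature.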

We can’t even make a quadruped robot that comes close to what a dog can do, let alone a cheetah. We can only crudely duplicate some organs. We make rigid submersibles out of metal, not flexible dolphin robots with artificial muscles. I have the feeling by the time we’re even close to duplicating a primate brain we’re going to be living in a medical utopia.

Actually, Boston Dynamics’ four-legged robot seems like a pretty positive step forward. Not dog-level, but impressive, I think.

Not a dolphin, but there are already robotic fish and snakes.

Researchers in Germany are already getting robots like ASIMO to identify even new objects by generalizing across groups of known ones.

What these researchers are doing is using general properties of objects (a basic idea described by Hawkins), which allows the new machines to deal with objects they have never seen before.

To me this is real progress.

FYI: that’s an example of using SVMs (as the final step, after a bunch of other math has already happened). Here is a link to their paper, and you can see the trail of relevant research before them listed in the citations. I mention it because you were asking about important papers, and all I could really say was that there was a ton of it - this kind of gives you a feel for that.

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.61.5723
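In case it helps to see the shape of it, below is a minimal, hypothetical sketch of an “SVM as the last stage” pipeline using scikit-learn. It is not the paper’s actual code; the feature extractor is a stand-in for all the “other math” that happens first.

```python
# Hypothetical "SVM as the last stage" pipeline, NOT the paper's code.
# The feature extractor stands in for the earlier math (filters,
# dimensionality reduction, etc.) that turns an image into a vector.
import numpy as np
from sklearn.svm import SVC

def extract_features(image):
    # Placeholder features; real systems use edge/shape/appearance cues.
    return np.array([image.mean(), image.std(), image.max() - image.min()])

rng = np.random.default_rng(0)
# Fake "images" for two object classes, just to make this runnable.
class_a = [rng.normal(0.3, 0.1, (16, 16)) for _ in range(20)]
class_b = [rng.normal(0.7, 0.2, (16, 16)) for _ in range(20)]

X = np.array([extract_features(im) for im in class_a + class_b])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf").fit(X, y)  # the SVM is the final classification step
new_image = rng.normal(0.7, 0.2, (16, 16))
print(clf.predict([extract_features(new_image)]))  # -> [1]
```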

I misread the paper: they mentioned Ude and Cheng and compared against their approach, and I thought they had used Ude and Cheng’s approach. They only said that they used a similar approach.

If I understood correctly, SVMs are not excluded from the tools used at Numenta. In a past paper, they predicted that better results could be achieved by combining an SVM with the HTM network, and that seems to be the case.

The point is that this generalization trick was mentioned by Hawkins before and I pointed out that it is what the team in Germany is doing. So yes, most likely a similar approach.

I wasn’t really commenting on HTM or Numenta; I was just telling you how the people you linked to did their object recognition. I thought you would find it interesting.

“Better results” and “seems to be the case” sound like you are saying that either Numenta or someone else used Numenta’s HTM software to build object recognition that outperforms other methods.

If so, can you provide a link?
Also, combining SVM and HTM would certainly make sense, along with any other techniques that work; good solutions to complex problems will most likely make use of lots of different types of tools.

Hawkins may have mentioned it, but to be fair to all of the researchers out there, “generalization tricks” and multiple stages of processing and abstraction for object recognition have been around a long time. I don’t think he is saying anything new in that regard.

It is.

As I see it, once one can see a product and important clients, it is going beyond the hypothetical. In industry it is essential to offer better solutions or die in the attempt. :slight_smile:

http://www.numenta.com/htm-overview/htm-algorithms.php

I think that is the key: to be flexible and not to dismiss new systems just because some aspect of them refers back to old ideas.

Like I said… :slight_smile:

I don’t see anything on that page showing that it achieves “better results” than other solutions (whether academic or commercial). Give me something concrete that shows their method outperforms other methods.

It may be the best thing on the planet, but it’s not customary to take their own website as evidence of that; they need to prove it like everyone else, with quantifiable results compared against other solutions.

Without that data all we can do is reserve judgement.

Not dismissing anything.

You implied that Hawkins had some insight into generalization tricks that the people in the linked video made use of, and I was pointing out that the people in your link were building on the work of many, many researchers over the last few decades, not on anything from Hawkins.

HTM may end up being a very effective tool for pattern matching and classification, and it may end up beating all other systems. Or it may be roughly equal to other methods. Or it may be worse. Nothing is dismissed until we see actual results. The links on their page certainly look promising - they suggest the product does actually work. But until we see comparisons on the same tasks, it’s tough to say where it belongs in the landscape of solutions.

On the other hand, the people in your video link compared their method to other methods and gave the percentage differences in accuracy. This is what researchers do when comparing methods; that way you know which approach is best.
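For anyone who hasn’t seen what that looks like in practice, here is a minimal sketch of a same-task comparison using scikit-learn. The dataset and the two methods are just stand-ins, not the ones from the linked work; the point is the protocol: same data, same folds, one accuracy number per method.

```python
# Sketch of a same-task comparison: identical dataset and folds for every
# method, then report accuracy. Dataset and methods are stand-ins.
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
for name, clf in [("SVM", SVC()), ("k-NN", KNeighborsClassifier())]:
    scores = cross_val_score(clf, X, y, cv=5)  # identical folds for both
    print(f"{name}: {scores.mean():.1%} (+/- {scores.std():.1%})")
```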

As I said, they will die soon if you are correct.

“I’ve worked in the private sector. They expect results” - Ghostbusters.

I do think that, as they are new and still developing their products, time will soon tell us whether they are on the right track.

But as they are being used by Forbes and others, I would just say that what I respect about them is that they are working with actual products. The way I see it, other approaches can overtake their algorithms and systems, and that will be OK with me; it is the goal that matters.

This statement makes it sound like you think their systems outperform their commercial and academic competitors. How do you know, if there aren’t any comparisons?

I think chess has been a terrible field for AI, for the very reasons you give here. Chess is a search-space problem, at least to a computer, masquerading as intelligence, and besides letting AI researchers win bets with philosophers, it soon outgrew its usefulness. Why do grandmasters see the board in a very different way from patzers like me? That question you bring up is another good one. If AI had explored it, the field might have gotten somewhere. Instead it hill-climbed, to use another search-space term.

Deep Blue no more thinks about chess than the test generation algorithms we use (another search space problem) think about generating tests.
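To make “search-space problem” concrete, here is a bare-bones negamax searcher - the skeleton underneath chess programs. Real engines add alpha-beta pruning, enormous depth, and hand-tuned evaluation, but the core is still this mechanical tree walk. The game callbacks below are hypothetical stand-ins, shown with a toy game of Nim.

```python
# Bare-bones negamax: walk the game tree, evaluate the leaves, pick the
# move that maximizes your score assuming the opponent does the same.
def negamax(position, depth, moves, apply_move, evaluate):
    """moves(p) -> legal moves; apply_move(p, m) -> new position;
    evaluate(p) -> score from the point of view of the side to move."""
    legal = moves(position)
    if depth == 0 or not legal:
        return evaluate(position), None
    best_score, best_move = float("-inf"), None
    for m in legal:
        # The opponent's best score, negated, is our score for this move.
        score, _ = negamax(apply_move(position, m), depth - 1,
                           moves, apply_move, evaluate)
        score = -score
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

# Toy game: Nim with 5 stones, take 1 or 2 per turn, taking the last wins.
score, move = negamax(
    5, depth=10,
    moves=lambda n: [m for m in (1, 2) if m <= n],
    apply_move=lambda n, m: n - m,
    evaluate=lambda n: -1 if n == 0 else 0,  # no stones: side to move has lost
)
print(score, move)  # -> 1 2 : the side to move wins by taking two stones
```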

First off, I’d like to say that I don’t think it’s necessarily fair to say that this isn’t, in some sense, intelligent. It may be a perfectly valid model of how an organic brain treats it, and even if it isn’t, in the long run it could prove useful to have a working model of intelligence, even one inorganic in origin.

However, I also think this raises a very good point (I’m fickle like that). Somewhere AI got confused. It’s a broad, multidisciplinary field, by necessity. But for every subdivision that is AI, like the people who made ASIMO, there are a bunch of tiny little subdivisions that I wouldn’t fairly categorize as AI. This doesn’t delegitimize solving chess, go, checkers, pathfinding, or search engine optimization; it just isn’t AI. Somewhere down the line, AI became synonymous, to many universities, professors, researchers, and average joes, with “the branch of computer science that tries to solve problems that are reeeeeaaaaaaaaalllllly haaaaaard.” I’m not going to argue that travelling-salesman problems aren’t hard; they’re damn hard. They take a certain amount of research to solve, and AI techniques can, in good faith, be applied, and applied well, to finding solutions to them. That doesn’t change the fact that it’s an algorithms problem, or a search problem. There are tons of tiny little specialties you can shove this under; hell, we can file some of it under database system problems.

Again, this doesn’t make solving chess, go, or checkers bad in any way. It’s just that classifying them under AI is heavily disingenuous and ultimately dilutes the field into meaninglessness. It also doesn’t mean that these problems have nothing useful to offer; it’s just that many other fields have applications to AI, and the fact that these do too doesn’t make them AI problems.

Simulating a human brain based on the Standard Model of particle physics? You’ve got to be kidding me.

Let’s say there are N quarks in the brain (where N is some incredibly huge number). Let’s say each of them lives in a space of S single-particle states. Then your brain lives in a space of S^N total states. Of course, most of these states bear no resemblance to a human brain whatsoever, but starting from the Standard Model your computer has no a priori way of knowing which states are relevant, so it has to consider them all.

So we’re talking N·log₂(S) bits just to represent a state in our brain space. That is, we’ve taken our initial really large N and made it larger by a sizable factor. If we imagine our bits are anything even remotely resembling those of a classical computer, then even for the most optimistic assumptions about their size you’ll find the mass of this brain-state machine is prohibitively huge.
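To put rough numbers on it (the particle and state counts below are order-of-magnitude guesses, not measured values), here is the back-of-envelope version:

```python
# Back-of-envelope scale check for the S^N argument above.
# N and S are illustrative guesses, not measured values.
import math

N = 1e27   # rough particle count for a ~1.4 kg brain (order of magnitude)
S = 1e6    # assumed single-particle states; the exact value barely matters

bits = N * math.log2(S)     # bits needed to index one state out of S**N
print(f"{bits:.2e} bits")   # ~2e28 bits

# Even if each bit were a single hydrogen atom (pure sci-fi), writing down
# ONE state takes roughly 30 kg of matter; with realistic classical bits
# (millions of atoms each), the memory's mass grows by six more orders of
# magnitude, before you simulate any dynamics at all.
```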

Now, if you’re imagining this is some kind of sci-fi computer where the bits are quarks, then you could build a brain-simulator that’s only log₂(S) times the size of a brain. Which is still really big, because S will be way bigger than 2. But at any rate, building a scalable computer out of quarks during any of our lifetimes is a pipe dream. In fact we’ve still got a long way to go to building a scalable computer out of atoms (despite much active research in this area). And it’s another huge leap to assume that this not-yet-invented scalable quantum computer will develop on an exponential growth curve like that of classical computers, especially when you consider that challenges like decoherence become increasingly severe as the number of qubits increases.

And frankly the assumption that exponential growth will continue in classical computers indefinitely is already on very shaky ground, even before you bring all these other considerations into it.

Edited to add: even if your hypothetical subatomic bits have k internal states, you’ll still need to account for the external state, e.g. all the different ways the particles could be redistributed in the brain, so logₖ(S) is still much larger than 1.

I was responding to iamnotbatman’s comments, in case that wasn’t clear.

Intelligence is less interesting than consciousness. A two-piece, made-in-Taiwan chess computer has always been able to beat me. It’s more intelligent than I am, no news there, but will computers become conscious by 2045? I doubt it.

And you obviously failed to parse them. I have made it clear that I don’t advocate the position that exponential growth of classical computation will continue indefinitely. I have said that under the premise that it does, we would have the computational resources to simulate the human brain in the future. And of course we would. Also, no discussion of quarks is necessary; I very much doubt that kind of detail is needed for any successful simulation of the human brain. But the point is irrelevant anyway, since I am working under the premise of continuing exponential growth, and hence arbitrarily large computational resources.

ETA: my point in this whole matter was to respond to those here who seemed to be arguing that even if exponential growth continues indefinitely we still won’t achieve AI because it’s a software problem. My response: at minimum, with infinite computational resources there is always the brute-force approach.

But the brute force approach doesn’t arrive at AI through understanding.

How would you know how to alter that simulated brain to improve its ability to perform math proofs? The answer is that you wouldn’t have any clue. Which, to me, feels like AI lite.