You did see in my post that I said “Kurzweil”, right? My comment was directed at Kurzweil’s estimate of 2045, which is the title of this thread.
How can you understand how the neocortex processes and stores information when it is only now being revealed that the cells that outnumber neurons by 10x are intimately involved in cognition, yet have so far been completely ignored when it comes to computing models?
That’s like saying:
“we have modeled our robot after a human which we understand to have a head, a left shoulder and a right arm”
and then new research shows that humans also have at least one leg and two thumbs.
I’m just saying that concentrating on him, when he is not a serious researcher in the matter, amounts to a slight case of straw man argumentation.
And that sounds like “put the fruit back in place, we know what we are talking about!” There is a dramatization of the life of Galileo in which a cardinal demands that Galileo put the fruits of different sizes back in the fruit bowl and not do the simple table experiment that showed Aristotle was wrong about the effect of gravity on bodies of different weights, an experiment Aristotle never bothered to do himself.
The point is that the research into how the brain works and the research applying it to AI are ongoing and work in a kind of feedback loop; depending on the new discoveries, the tools used to develop a general-purpose intelligent machine change as well.
To understand the complex, one simplifies, makes theories, and then experiments to see if the model can be scaled up. Throwing one’s hands up and continuing to repeat that “we do not understand it” is not a way to make progress.
Just saying: since he is not much respected in the field where the actual research is done, pushing him down when he is already down is of little use.
As mentioned already, the guys at Numenta.
If we use the silly definitions by Kurzweil, no. But I do think that generally intelligent machines will appear even before that.
Ok. I think the point still stands for anyone that thinks we will have human level intelligence by then. If you are saying the entire thread agrees we won’t have it by then, then the point doesn’t need to be debated further.
I was asking who was throwing up their hands; it seemed like you were implying I was doing that because I was pointing out the complexities that continue to emerge.
For example, the Blue Brain project claims to have simulated a rat cortical column. But they only included neurons and synapses.
There are 10x more glial cells; they communicate with each other and they communicate with neurons. You can’t really expect to get accurate results when you have left out a significant piece of the puzzle, a puzzle that is getting larger every day due to basic neuroscience research.
I wouldn’t disagree that we will have made progress prior to 2045 and will continue to add to the list of tools that solve problems in a flexible manner, similar to humans, rather than through hard coding.
My point regarding complexity is directed towards 2 lines of thought:
We can simulate the brain - eventually, maybe (and maybe not, depending on the level of detail required for accurate results), but the Blue Brain project is not an accurate model in that it ignores so much at the macro level.
Brain-Like or Brain Models - we are so far from understanding how the brain (or even a lowly worm’s brain) does what it does that nobody can say they have a “brain-like” model yet. Numenta has a very high-level abstraction that is even further removed from the biology than the Blue Brain project. But don’t get me wrong - it may be a useful addition to our set of tools, time will tell - just don’t call it “brain-like” if it only matches a small percentage of the attributes of the brain.
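To give a feel for how high-level that abstraction is, here is a toy sketch of the kind of representation HTM-style systems work with. This is my own simplified illustration, not Numenta’s actual algorithm or API, and the sizes are arbitrary assumptions:

```python
# Toy illustration of a sparse distributed representation (SDR), the sort of
# high-level abstraction HTM-style models operate on. Activity is just a set of
# "active cell" indices; similarity is the size of the overlap. No membrane
# potentials, neurotransmitters, or glia appear anywhere at this level.
import random

def random_sdr(size=2048, active=40):
    """Return a random sparse pattern: which cells are 'on', nothing more."""
    return set(random.sample(range(size), active))

a, b = random_sdr(), random_sdr()
print(len(a & b))  # overlap count = similarity between the two patterns
```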
Once again, you are indeed insinuating that they should put the fruit back into the fruit bowl.
I feel here that your approach is to continue to insist that the more we investigate, the more complex this gets, and therefore we will never get to the goal; it is like Zeno’s paradox, but fortunately in the real world Achilles does catch the tortoise. As the neuroscience guy mentions, we can progress even when the complexity increases; armed with the new research, we can also progress in the general AI field.
We start small and progress from there. And once again, we should not ignore the progress that our tools are also making.
First off, I found it odd to have recognized the name of the writer of that Time article as the author of The Magicians, a fairly cynical twist on the Harry Potter/Narnia fantasy subgenre. Not sure if it signifieth anything, but of note perhaps.
There seems to be an overall consensus among the posters that Kurzweil misses the point that the issue is not the progress in processing power, but the progress in understanding how to put that processing power to use as “strong AI”, which is not following anything close to an exponential curve. Agreed. Hell, if you added up the processing power of all the world’s interconnected computers, you would clearly have more processing power than a human brain; that does not mean they are organized in such a way as to create “intelligence”, nor does it mean that a “global brain” sentience necessarily emerges from it. While there is great work being done modeling human cognition (see for example the lifetime work of Stephen Grossberg), we are making baby steps in a roughly linear fashion rather than approaching some exponential inflection point.
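As a back-of-envelope illustration of that first point (every constant here is a rough, assumed ballpark that could easily be off by orders of magnitude; the point is only that raw throughput is not the bottleneck):

```python
# Back-of-envelope only; all figures below are assumed ballpark values, not measurements.
BRAIN_OPS_PER_SEC = 1e16    # a commonly quoted rough estimate for "brain-equivalent" compute
DEVICES_ONLINE = 2e9        # assumed count of networked computers and phones
OPS_PER_DEVICE = 1e10       # assumed average operations per second per device

world_ops = DEVICES_ONLINE * OPS_PER_DEVICE
print(world_ops / BRAIN_OPS_PER_SEC)  # thousands of "brains" worth of raw throughput
# Plenty of horsepower; none of it organized into anything resembling intelligence.
```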
I would take it a bit further, however. Commonly “strong AI” is assumed to mean “intelligence that looks like ours”, even to the point that the Turing test (if I understand it correctly) defines it as the ability to convince us it is. I would posit that such is a very narrow and provincial definition of intelligence, one that assumes that any “general intelligence” would look like ours and be understandable by ours. One that assumes that if/when AI becomes sentient we would recognize it as such on the basis that it would act like us. That might be the case if both AI and human development had been subject to the exact same selective pressures, but as they have not, there is no reason to assume such a convergent evolutionary process.
I would argue that any intelligent sentience emergent of the processing power(s) that we have created, either from a single machine, or emergent from the massive nonlinear interactions of all the world’s computers communicating with each other, would be so alien to ours that **we would not recognize each other as sentient entities**. We might even perceive time itself in different manners. We would exist in the same space with no way of even being conscious of each other’s consciousness.
Thus for all we know “singularity” may have already occurred.
I do think that is indeed pertinent. What I see is that some are attempting to use it as an argument that we will not make any good progress; however, as the history of flight shows, we make very good flying machines even though it is noticeable that many, even in the flying business, are aware that some items are not really understood properly or in totality.
Trying to understand all of the pieces involved in human cognition is “throwing up one’s arms”?
I’m truly confused by this.
What if someone were building a brain model with only neurons and no synapses, because they didn’t fully understand the importance of the synapses? Is that a good model?
Again, this is my point:
If there are 2 things that can communicate with a neuron and cause it to fire, and your model only includes one of those things, then your model has limited usefulness.
So, given what is clearly being learned about glial cells in the last few years, it would be wise to determine exactly how they interact with each other and with the neurons, and then include them in the model. The bottom line is that they simply do not have enough information to build a proper model yet, and I guarantee you that any real researcher in the field will agree with that statement.
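To make the “two inputs, one modeled” point concrete, here is a minimal toy sketch; it is not based on any published model, and the glial term and all the numbers are purely illustrative assumptions:

```python
# Toy leaky integrate-and-fire neuron, purely illustrative.
# glial_input stands in for a hypothetical second signalling pathway; real
# glia-neuron interactions are far more complicated and are not captured here.

def simulate(synaptic_input, glial_input, steps=100, threshold=1.0, leak=0.9):
    """Count spikes of a toy neuron driven by two constant input streams."""
    potential, spikes = 0.0, 0
    for _ in range(steps):
        potential = potential * leak + synaptic_input + glial_input
        if potential >= threshold:
            spikes += 1
            potential = 0.0  # reset after firing
    return spikes

# A model that ignores the second pathway predicts no firing at all here,
# while the same model with the extra input predicts regular firing.
print(simulate(synaptic_input=0.07, glial_input=0.0))   # neurons-and-synapses-only view
print(simulate(synaptic_input=0.07, glial_input=0.05))  # same neuron, second input included
```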
No, they wouldn’t have stopped research; they would have continued researching, as they are doing. That’s what you do when you don’t understand something fully.
Why would you say they would stop research because I pointed out that there are no brain models currently that are accurate? The neuroscientists all know that anyway; I didn’t make it up.
Read my posts.
Here is a summary:
2045 is not realistic
Much about how the brain functions is still being learned
There are no current computer models of the brain that account for everything scientists already KNOW is part of cognition; therefore, none of them can accurately reproduce even a section of the brain
As DSeid pointed out (and I don’t remember if I did or not): our knowledge of the biology/physics/chemistry of brains is increasing at a much slower rate than our computational power
Scientists are making progress
Numenta may have a tool that is valuable, but it can no more be considered “brain-like” than other neural networks can
There will be progress from research that is biologically based and there will be progress from research that is mathematically based - both will play a part
I think we will get there, eventually
None of this adds up to me saying “let’s stop, it will never happen”; I don’t know why you keep insisting that.
Seems like it would make sense to explore this and see if we can try to describe something that we consider “intelligent sentience” that we may not have considered at first.
Aye, there’s the rub. And if I understand correctly, that was the point of the Turing test. I have no way to even really test and know that other humans are sentient as I am. I make that assumption because they behave and look like I do and tell me they do, and I know that I do, so I believe them. But there is no way to measure or directly observe the subjective experience of sentience that they are having (even if I were able to come up with a good neural correlate of consciousness - NCC - that I could measure).
To some degree that’s why (as I think was pointed out early in this thread) the metaphor of “singularity” is more apt than Kurzweil appreciates: we have no way of knowing or measuring what’s on the other side of the event horizon at a black hole singularity, and likewise we have no way of knowing or measuring what machine sentience would subjectively be like or whether it is being experienced. (We can’t even know how a bat experiences sonar, and that’s another mammal with a very similar machine!)
The point of figuring out what NCCs are, and mostly Douglas Hofstadter’s point in I Am a Strange Loop, is that if we can come up with a definition based not on superficial observable behavioral similarity but on similarity of essential information-processing characteristics, we may at least have a somewhat less narrow view of what might be experiencing sentience and a better chance at identifying a sentience alien to our own.
(BTW, I remember an old science fiction story the punchline of which was that two species of humans were existing side by side, one functioning at speeds orders of magnitude faster than the other. Neither was able to interact with the other or experience the other as “conscious”. The fast version saw the slow humans as statues frozen in time, and the slow versions experienced the fast ones as flickers of light. Such would be true if humans, even with the same brains otherwise, operated with different processor speeds - how much more so if an alien sentience had a different processor as well?)
Well, our current progress in understanding might be exponential if you’re talking about the long, long flat left end of the curve before progress exceeds linear by a significant degree. IOW if total ignorance is “1”, then our current position might be 1.0000000012, a figure we’re squaring every five years!
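For what it’s worth, the joke holds up to the arithmetic. A quick sketch (the starting figure is just the facetious number above, and the threshold of 2 is an arbitrary stand-in for “real understanding”):

```python
# Start barely above total ignorance (1.0) and square the figure every 5 years.
progress = 1.0000000012   # the facetious starting point from the post above
years = 0
while progress < 2.0:     # 2.0 = an arbitrary threshold for noticeable understanding
    progress = progress ** 2
    years += 5
print(years)  # roughly 150 years of squaring before the figure even passes 2
```

Squaring a number that close to 1 keeps it near 1 for a very long time, which is exactly that long flat left end of the curve.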
It is indeed due to the lack of a proper theory of how the brain works, plus the problem of how to properly make a system that would follow that theory.
IMHO we are finally, after so many years of beating around the bush, getting closer to practical theories that put into context the mess of information that we already have about the brain.
However, AFAIK we are not really that ignorant about the brain. If there is one thing I do agree with the new researchers on, it is that we already have lots of data about the brain, its structure, and many other items, but we did not have good practical theories about how even some parts of it work.
Within the context of building a functionally accurate simulation of the brain, it is absolutely a major stumbling block, and there are a bunch of them. I guarantee you will not find any neuroscientist who thinks otherwise. You may find computer scientists, physicists, or philosophers not trained in and not doing neuroscience research who think that, but nobody who properly understands the number of open questions thinks we are at the point where a proper simulation can be built.
However, within the context of building AI without trying to build a functionally accurate simulation of the brain, failing to accurately understand and mimic the low-level operations of the brain is not a major stumbling block.
I haven’t read the entire thread, but I agree with those that say a singularity is not likely within the next few decades, and a sentient-level AI could be a technological wild goose chase.
But how far away is weak AI - functionally equivalent expert systems - say, the Emergency Medical Hologram, version 1.0? It has no consciousness, but it has all the medical knowledge and diagnostic skills of an entry-level physician or nurse, and has a rudimentary ‘avatar’ interface that can interact with patients or real doctors primarily through voice commands - i.e. no keyboards or mice, at most a pad computer to access readouts and call up specialty subsystems such as oncology, neurology, cardiology, etc.
I would be surprised if this was not available by 2045, and to the general public, that would be Star Trek level AI.
I could imagine each office, if not every person, having a personal ‘avatar’ assistant by 2060 at the latest and possibly much sooner. So perhaps not in my lifetime, but definitely my nephews’.
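At its crudest, the “weak AI” core of something like that is just rules plus lookup; here is a deliberately tiny sketch in which every symptom, rule, and response is an invented placeholder, not real medical knowledge or any real system’s API:

```python
# A toy rule-based triage lookup, the crude skeleton of a "weak AI" expert system.
# All rules and advice strings are invented placeholders for illustration only.
RULES = {
    frozenset({"fever", "cough"}): "possible respiratory infection; schedule an exam",
    frozenset({"chest pain", "shortness of breath"}): "urgent: route to cardiology subsystem",
}

def triage(symptoms):
    """Return the first canned response whose rule is fully matched by the symptoms."""
    reported = set(symptoms)
    for pattern, advice in RULES.items():
        if pattern <= reported:
            return advice
    return "no rule matched; escalate to a human clinician"

print(triage(["fever", "cough"]))  # a voice 'avatar' front end would speak this aloud
```

The real thing would of course need vastly more knowledge and a genuine natural-language front end; the sketch is only meant to show how little “consciousness” such a tool requires.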