None of which has anything to do with consciousness. Nor, I believe, would Hawkins claim so.
No, because nobody’s doing that. People are merely pointing out that we don’t understand what happens there. You’re setting up a false dichotomy, that everyone who dares point out that there are a lot of things that we don’t understand, that don’t seem to fit within the established paradigm, therefore postulates something irreducibly mysterious or ‘magic’. But the real discussion is considerably more subtle.
That's missing the point; his point is that there is no mystery that can stop us from getting there.
And that is my point: I do not think there will be a problem, or anything that will forbid progress toward artificial intelligence that is also conscious.
Well, I think you made the point earlier about being convincing, and this is not really convincing. I have already said that, philosophically speaking, you are okay, but I'm looking at more practical issues; it is really more likely that once we approach the Galileo point of reaching for the fruit bowl, only the unamused look of the cardinal will remain.
Wouldn’t a p-zombie drone, “Illusions… Illusions…”
OK, but what, if any, are his arguments for this position? How does he propose consciousness arises? Is it just, if you build it, it will come? Somehow?
[quote=“GIGObuster, post:164, topic:700348”]
And that is my point: I do not think there will be a problem, or anything that will forbid progress toward artificial intelligence that is also conscious.
[/quote]
Well, it's fine to think so, of course, but the point of this discussion is—how? And it's not at all clear that progress in AI will address this—it may be, for instance, that we succeed in creating a behavioral isomorph of a human being, say by direct brain simulation, and still not know how consciousness works, or even whether there is any.
The end goal is not to create a conscious entity, but to understand how consciousness works!
I think we may be talking past each other.
I'm not talking about positing additional entities. I'm saying that when it comes to subjective experience, pretty much everything we have tried has come up blank. As much as our understanding of the brain and behaviour has advanced, subjective experience remains a black box.
So it's a good thing that some people are trying out alternative models rather than rigidly sticking to a reductionist account. Maybe they'll all come up blank too. But it's worth a shot.
After your back and forth with Half Man Half Wit, I see that it boils down to Hawkins' own opinion that he's on the right track.
I’m not talking about leaving anything in the bowl. I’m saying science doesn’t need to make that kind of philosophical declaration. Nothing is to be gained from it. And potentially you could close lines of enquiry, the very thing you’re warning against.
[quote=“Half_Man_Half_Wit, post:166, topic:700348”]
OK, but what, if any, are his arguments for this position? How does he propose consciousness arises? Is it just, if you build it, it will come? Somehow?
[/quote]
It is more complicated than this report shows, but this is a basic early model:
http://bits.blogs.nytimes.com/2012/11/28/jeff-hawkins-develops-a-brainy-big-data-company/
Not just "somehow", as IBM, DARPA, and many others that have looked at his theories and products so far can tell you.
Not really; even if this is successful, it does not mean that any other line of inquiry will be shut down. As for whether it could sink to the level of importance of old philosophical questions like "Can several angels be in the same place?", that could happen.
And well, it talks about things like visual perception, pattern recognition, etc., and then all of a sudden jumps the gun and calls it ‘a model of consciousness’. But how things like visual perception and pattern recognition give rise to consciousness is exactly the problem! To just go ahead and baldly assert that it does simply doesn’t further the discussion.
I think that you and Hawkins are really trying to solve a different question—which is fine, but it’s important not to fall prey to possible confusions. Hawkins wants to be able to build something that is ‘conscious’ in the ordinary sense, i.e. that is capable of acting as if it were conscious. But the question under discussion here is exactly how consciousness (in the somewhat more technical sense of there being something it is like to be a certain entity) arises from non-conscious matter, and in a wider sense, how that then relates to behaviour.
Consider the following. We've been discussing how flight works for heavier-than-air objects. We know it does work: there are birds. But we have no theory of flight, and are considering several possible explanations. In come you, informing us about the groundbreaking work of Geoff Falcons, who thinks he is well on his way to solving the problem. So we ask:
– “Well, so how does he propose flight works?”
– “He’s built a model plane, and if he throws it, it flies!”
– “OK, interesting. So how does it fly?”
– “Like this!”
– “But what keeps it up?”
– “Well, its wings, I suppose.”
– “But how do they do that?”
– “Like this!”
And so on. Even if Hawkins succeeds in building a conscious machine, he’s just created one more instance of the problem, but not gotten us any closer to a solution. Do you see the difference?
There may be an account of conscious perception in terms of physical processes, or even of computational ones, but that account has yet to be given, or even firmly established as possible. When I ask how a computer monitor produces a picture, I don't want to be shown a computer monitor producing a picture, with a vague handwave of 'here you go, that's how'; I want the chain of causality that leads to pixels lighting up on the screen explained to me. It's the same when asking for an explanation of consciousness—I want it explained how things come to feel like something to me. To just handwave away the problems in giving this account accomplishes nothing.
I don't see why this is not part of the solution. I think calling it "one more instance of the problem" is to demand that we should forever debate how many angels can dance on a pin. And I have already acknowledged that we do not have it yet; the point stands that there is no reason to assume anything mysterious on the way to creating a conscious machine.
I don't think so; the fact that someone cannot understand how a car works does not mean that others cannot design and build one. People can indeed accomplish a lot without demanding that others know it all.
And I do think a lot can be learned by reading the Chilton manual, or in this case by reading what Hawkins has published and checking the open-source tools on offer.
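Just to give a flavor of the "memory-prediction" idea behind those tools, here is a toy sketch in Python. To be clear, this is my own illustration, not Hawkins' actual HTM algorithm or the API of his open-source tools, and every name in it is made up; it only shows the basic loop of learning sequences, predicting what comes next, and flagging surprises:

```python
# Toy sketch (illustrative only -- NOT Hawkins' HTM or any real library):
# learn which pattern tends to follow which, predict the next one, and
# treat prediction failures as surprises.
from collections import defaultdict, Counter

class ToySequenceMemory:
    def __init__(self):
        # For each pattern, count which patterns have followed it so far.
        self.transitions = defaultdict(Counter)
        self.previous = None

    def observe(self, pattern):
        """Feed one input; return (predicted_next, was_surprised)."""
        surprised = False
        if self.previous is not None:
            seen = self.transitions[self.previous]
            # A surprise: we had expectations, and this input wasn't among them.
            surprised = bool(seen) and pattern not in seen
            seen[pattern] += 1
        self.previous = pattern
        counts = self.transitions[pattern]
        predicted = counts.most_common(1)[0][0] if counts else None
        return predicted, surprised

memory = ToySequenceMemory()
for p in ["A", "B", "C", "A", "B", "C", "A", "B"]:
    print(p, memory.observe(p))
# After one pass it predicts "C" after "B"; feeding "D" next would be
# flagged as a surprise -- prediction failure as novelty detection.
```

As I understand it, Hawkins' real models work over sparse distributed representations arranged in hierarchies, but this predict-and-compare loop is the core intuition.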
Sure, there is a lot to learn, but I think that is better than just dwelling on a philosophical construct that, according to the Stanford Encyclopedia of Philosophy, more philosophers now disagree with: the idea of the zombie.
The thing is, we already have a way of producing conscious entities, and it’s a lot of fun, too. But we’ve still got no idea regarding how consciousness works. Being able to duplicate something does not entail understanding it.
That’s not to downplay Hawkins’ research, it’s very interesting and important in its own right. But it’s really a (proposed) solution to a different problem. This might just be ‘angels dancing on a pin’ to you, but it’s just as legitimate an area of interest as any other. Personally, I don’t care much for mountaineering, but I don’t go around telling people they should stay on the ground. Nobody forces you to participate in the debate.
But I’d like to point out that for some of us, the answer to this sort of question is indeed of great importance: it decides how we are to morally treat certain kinds of entities that might be possible in the mid-term future, such as e.g. autonomous artificial agents. Whether they are to be treated as persons, with the rights accompanying such a judgment, or one can use them as cheap, intelligent slave labor, is a question that might hold genuine real-world significance sooner or later; thus, writing off the questions surrounding this issue as immaterial sophistry and illusion has the potential of landing us in hot water later on (which is, of course, quite a well established pattern in the course of human civilization, regrettably).
We already exploit vast numbers of living entities by domesticating and eating them, and many of those entities are far more conscious than any computer yet built. If we are going to start looking at sentient rights in the future we should be aware that this would not only affect the rights of robots, zombies and AI, but also cows, sheep and packhorses.
The point was that just because one does not understand it, it does not automatically follow that the automakers are an ignorant bunch.
No, no one; but where I'm coming from is close to what Lawrence Krauss concluded after he realized that the evidence he found about the universe ran into many philosophical issues concerning its origin and nature.
And this just ignores that many who think philosophical zombies are silly already have ideas about when to treat those new beings as having rights. Refusing to see this is a big mistake; what I do think is that in reality there will be a lot of philosophers (and that includes religions) who will use neither reason nor evidence when deciding when and how we should grant those rights.
As an aside, you actually touched on one big reason I'm in this discussion: I have several ideas, and research I have done and am still doing, because I someday want to write/draw hard sci-fi very much related to this. Besides caring about what will happen to my fictional characters, I do care what will happen to the real ones once the issue becomes more real. Asimov's main fear was that many humans would be afraid of and opposed to them for no good reason; but seeing how several soldiers in the Middle East showed empathy for what are very simple bomb-disposal automatons leads me to think that it will not be as bad as Asimov feared.
Definitely. If it’s a machine – and if we’re horribly cold-blooded enough – we can perform carefully calibrated experiments of controlled destruction upon it.
We can start getting reductionist. What if I remove this bit? What if I remove that bit? Oh, look, it can’t answer certain questions anymore. Nifty; we’ve isolated a link to some points of significance.
(Obviously, a lot of this would have been accomplished during the building of the machine in the first place.)
What we have to do with humans in a haphazard fashion – waiting for accidents and strokes, or experimenting with chemicals – we could now do in a highly controlled and repeatable manner.
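To make that concrete, here is a minimal sketch of what such a controlled lesioning experiment could look like, using a hypothetical toy network; nothing here is a real system, and the "damage" measure is just a stand-in for a real battery of questions:

```python
# Sketch of the "what if I remove this bit?" idea: ablate hidden units in a
# tiny made-up network and measure which ablations degrade its behavior.
# Purely illustrative -- real lesion studies on trained models are subtler.
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network with fixed random weights, standing in for the
# machine under study.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 4))

def forward(x, ablated=()):
    h = np.tanh(x @ W1)
    h[list(ablated)] = 0.0  # "remove this bit": silence the chosen hidden units
    return h @ W2

def damage(ablated):
    # Proxy measure: how far the lesioned outputs drift from the intact ones.
    xs = rng.normal(size=(100, 8))
    intact = np.array([forward(x) for x in xs])
    lesioned = np.array([forward(x, ablated) for x in xs])
    return float(np.mean((intact - lesioned) ** 2))

# Knock out each hidden unit in turn and see which ones matter most.
for unit in range(16):
    print(f"unit {unit}: damage = {damage((unit,)):.3f}")
```

Units whose removal causes large damage are candidates for linking to "some points of significance"; with a real machine you would measure performance on actual questions rather than raw output drift.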
Hey! Stop reading my notes!
I have to note here that Hawkins' goal is not to build C-3PO but what one could call a brain in a box. While I do think he is limiting himself, I do think that others will look at the scenarios you are mentioning, and the need for non-human experimental subjects will lead us to androids that will run into the issue at hand.
BTW Trinopus, I do think we will be cold-blooded enough, but that is because there are many problems (even of a psychological nature) that artificial beings can help us solve, and they can reduce the time we spend developing cures.
My solution for the "cold-blooded" issue regarding the gruesome experiments we will be able to perform is that, unlike with humans (sorry, you "downloading to an artificial brain" transhumanist fans), transferring and copying their data/models/consciousness will be easy for the androids. I will not be surprised if it is the bioroids themselves who suggest what will be, for us, the most gruesome experiments, done for the common good.
There are several sentences in your post that I'm having some trouble parsing; perhaps you could clarify. If I understand you correctly, then you're still addressing the wrong question—I'm not interested in building cars, I merely want to know how they work. And even with a functioning AI in hand, without anybody coming up with an objective test for conscious experience (which strikes me as an oxymoron), we can't even tell if there is any there, much less deconstruct it for study.
Well, let’s just say I’ve deplored Krauss’ philosophical innocence often enough on these boards to not want to go there again; but basically, he wrote an entire book on a topic he didn’t even understand well enough to know how to frame. He could have benefited greatly from just a little study of those philosophers he so readily dismisses, and could have saved himself quite a bit of embarrassment. But it’s unfortunately a sort of occupational hazard for physicists to consider themselves experts on everything, whether it’s the Hawkings, Weinbergs, Feynmans and Krausses that chide philosophers from a position of deep ignorance, or Michio Kaku waxing nonsensically about evolution, or whatever else. I try myself not to succumb to it.
If you're interested in discussing the value of philosophy generally, then I've got a thread for that (which also touches on Krauss a bit). Luckily, in recent years, at least a couple of prominent physicists (most recent examples are George Ellis and Carlo Rovelli) are starting to realize that especially in areas of fundamental epistemological and ontological relevance, collaboration with philosophers is in general a more fruitful approach than ignoring their contributions. Nevertheless, I still have to be careful about whom I confess an interest in philosophy to, given the scorn habitually heaped on the profession by many physicists.
For example? Most of the ideas I know of are behavioural in nature; but I’ve already elaborated on how that’s insufficient.
But of course, this necessitates knowing that the machine is actually conscious. How do we do that?
That was the premise. You said, “Even if Hawkins succeeds in building a conscious machine…”
Perhaps, when we know a little (or a lot) more about consciousness, we will have a ‘consciousness-meter’ that can easily detect the difference. I doubt it will be that simple. It seems more likely to me that we will eventually have to defer the argument, and assume that all beings which act as if they have consciousness actually have it (or that none do).
How could you test for consciousness in another entity? You could never share the consciousness of another being without becoming them. Perhaps this is the only possibility open to us; if we really do believe that we ourselves are conscious, then by sharing that consciousness with another entity completely, so that two entities become a single mind - maybe that would be enough to determine whether the other mind were conscious. Quite how that could be done is another matter.
I don't think that simply connecting two brains together via peripheral links would be enough; you could connect one person's vocal centre to another's auditory centre, and vice versa; the two could soundlessly whisper to each other to their heart's content, but this would only be equivalent to a conversation involving vocal cords, sound waves and ears. To completely integrate two minds into one would require much deeper connections. But if it could be done, then you could be assured that any other entity that you were integrated with would share the same status as yourself.
This seems like technology that is many centuries off, if it is ever possible; but this sort of integration might be relatively straightforward for artificially intelligent entities. AIs might routinely merge and split apart again to audit each other's state of consciousness, something we may never be able to do. It may even be the case that our isolated, non-auditable status will make us the poor relations of mindkind.
But this is reaching for an argument from ignorance. AFAIK the effort of looking at the brain in order to make effective AIs is getting results. BTW, you are not getting the car metaphor; you are actually demanding that experts in other fields should not bother making useful things just because you do not understand them.
I actually read those, and they lost me the moment they used the example of "Newton looking at philosophy before"; the problem is that he also wasted a lot of time on alchemy, and the worst part is that he hid whatever progress, and most of the failures, he had there.
Not convincingly.