Do we even know what consciousness is at all?

I agree that artificially creating consciousness has limited practical value in itself, but an understanding of consciousness would be absolutely revolutionary. It would open the door to applications that would massively change (and enhance) how we all live our lives. Day 1 could be non-addictive pain relief and no more dangerous anaesthesia, and that’s the thin end of the wedge.

Plus, the study of consciousness may give us the insights to make more intelligent devices (while avoiding making them conscious).

But creating consciousness as an emergent property – assuming we could, and that it happens – does not mean that we understand it. Designing for consciousness of course requires that we know what we are aiming for, but current deep learning AI is often described as a black box for a reason. If one of those AIs becomes conscious we will not know how it happened.
We would at least know that it is possible, which is a lot.

Sure, it’s possible we create consciousness and still have no understanding of it (similar to having children…)

But replicating any aspect of consciousness in a computer or other machine would likely make studying it easier and bring important insights.

As hard as it is to reverse engineer deep learning, it’s still easier than trying to untangle 100 trillion connections in a pile of mush. Particularly since, as I say, an artificial mind only needs to replicate any aspect of consciousness on any level to be useful. Not the whole shebang in one go.

‘Biological’ may turn out to be a very broad church. There may be self-replicating phenomena inside the metallic hydrogen oceans of gas giants, or in the crystallising hearts of white dwarfs. It may also be possible to create self-replicating microscale robots, or self-replicating autonomous spacecraft. If biological is expanded to include such ‘organisms’, then Searle’s (rather arbitrary) restriction of ‘biological only’ could include sufficiently sophisticated robots, and sufficiently sophisticated crystals.

In short, it is nonsense.

Searle’s point is not so much that biological “wetware” is essential; it’s that the process of computation is insufficient to explain all that the human mind does – he posits that there are things the brain does which, while completely physical / material, are not computable.

He would agree, and I think he’s stated, that there may be many substrates that could host consciousness.

Afaik there is not much, if any, support for this conjecture, but your objection doesn’t refute it.
And the consensus on Strong AI is that it is disputed / unknown.

Searle seems to be saying that a robot can be conscious, but only if it supports the mysterious ‘non-computable’ element in its nature that allows it to be conscious. It is like saying ‘a robot can’t be conscious, except when it can’. Is that not a circular argument?

No. He’s suggesting that conventional computers, which are emulatable by a Turing machine, cannot do everything which a human mind can do. That’s all.

If you think that this, even if true, is not a showstopper to whether we can make artificial consciousness, I agree and I think Searle would agree too.

But I think it’s still a valuable conjecture, because it’s good to question our assumptions and the truth of Strong AI is an implicit assumption for many people. One which, it turns out, rests on questionable assumptions itself.
Like I say, few philosophers or neuroscientists feel Searle’s conjecture is likely to be true, but most concede that Strong AI is not certain.

Well, that’s why I recommend inquiring into relatively primitive non-human minds. If the requirement for consciousness is some specialised architecture, then we are more likely to understand it by examining, and replicating, simple brains first. It is not going to be something we solve tomorrow.

Even when we finally create computers as complex as a human brain (a couple of decades from now) we will be nowhere near solving the architecture problem.

But, will we know that it happened? How will we be able to definitively establish that it is conscious when I cannot even be absolutely certain that you are conscious? We have to somehow be able to measure/observe consciousness first.

White dwarfs are believed to be gaseous. It is a ridiculously dense gas, right at the threshold of electron degeneracy pressure (EDP), but it still behaves like a gas.

That’s a very old philosophical position, of course, perhaps most forcefully argued by La Mettrie in the 1700s. There are modern versions of this, like Frankish’s illusionism, the eliminativism of the Churchlands, or arguably Dennett’s multiple drafts, but with the development of more modern philosophical arguments, like the knowledge argument and the zombie argument, among others, I think it’s become very hard to maintain. Moreover, it’s troubling that it starts from a rather unscientific premise of questioning the data that seem not to fit with one’s predilections. It’s also rather questionable whether it actually makes things easier: instead of explaining subjective experience, it ends up having to explain how we have the appearance of subjective experience when it denies that we have any such things as ‘appearances’ in the first place, so there’s always a threat of empirical incoherence.

If all of your thoughts are conscious, then I think your experience must be radically different from mine. As noted above, 4/5 of the brain’s neurons are situated in the cerebellum, carrying out all sorts of complex information processing related to muscle control and motor coordination, none of which ever leaves any mark in my conscious experience. There is lots of information processing going on entirely in the dark, and the question of how some of it can lead to something like a subjective, qualitative experience, or refer to things outside of the mind—or at the very least, yield the appearance of those things—is not something easily dismissed.

Kind of, but that makes it more interesting.

A new form of matter, with properties that are as yet unknown.

I was very dismissive of the philosophical zombie concept until recently. My argument always was ‘if it walks like a duck and quacks like a duck, it is a duck’; if an entity behaves exactly like a conscious entity, it is a conscious entity.

However, the recent development of LLMs and the like puts that in doubt. In a few decades there could be entities that behave exactly like conscious entities, but have no true consciousness and feel no qualia. If we could make such entities, would we have a moral obligation to treat them as if they were conscious? Or could we use them as tools and place them in harm’s way, as soldiers or as disposable rescuers which could be ‘killed’ if necessary?

Some factions would probably love to make competent, disposable soldiers.

I’m questioning because of the lack of data. And definition. There is nothing that is so “it is what it is” or “I know it when I see (feel) it” as a quale.

I never said all thoughts were conscious.

As for ‘subjective’ experience, what else could any experience be, unless all our brains were identical and had been subject to the same experiences? Why even include that in the non-definition of qualia?

And yet, they’re the most intimately familiar and directly accessible entities imaginable. Whenever you say you have data about anything, what you really have is a bunch of subjective experiences—say, of measurement apparatuses beeping or indicator needles pointing—that you then interpret as pertaining to some object of study. Your primary data are always subjective experiences, qualia; everything else is derived from that. To question the first link in the chain is to undermine the entire chain. Not saying it can’t be done coherently, but I’ve certainly never seen it done.

Our mental phenomena are hard to define and hard to characterize because they’re the things that definitions and characterizations are framed in terms of; they’re the ground on which all of that rests, and carelessly excising them pulls the rug out from under our feet.

It’s kinda like saying because your finger can’t point at itself, there’s no finger: if that’s true, it becomes hard to explain how you point at things at all.

So what’s to you the difference between conscious and non-conscious if there’s no difference between phenomenal (i.e. associated with qualia) and non-phenomenal?

Exactly. Again, what on earth alternative is there to subjective experiences? We have no other type of experience. How are qualia different from anything else we experience?

The lack of accessibility. It is the difference between a horse and a unicorn.

…qualia are what we experience? A quale is just an instance of subjective experience. Qualia are what make up our phenomenal experience.

Sorry, I don’t think I understand what you’re trying to say. Could you rephrase that or elaborate?

Which is what I am saying. What else would make up our experiences, which are all subjective? They are our thoughts. Whether it is our dullest, most ordinary thought or the object of our desire, I have seen no reason to think there is any difference in our thoughts that could distinguish them as qualia from non-qualia.

We know what our conscious thoughts are. We don’t know what our unconscious thoughts are. Like unicorns, we can only imagine what they are. I don’t think we need to understand unconsciousness to understand consciousness to some degree. It probably would help to understand what unconscious thoughts are; I’m not saying they are unrelated to consciousness. At the same time, I don’t see a reason why unconscious thoughts would be necessary in order to have consciousness.

It is like the Turing Test. I am not sure I would pass that either if you were the judge. And we don’t need to define “human” for the Turing Test, do we? Human is only what convinces us it is human.
You can’t be certain that I am conscious. You won’t be sure the machine is.

This sort of behaviorist stance isn’t really what the zombie argument targets, though. I don’t think the development of LLMs should have anything at all to do with one’s acceptance of the zombie argument: it applies to exact physical duplicates which we can, according to the argument, imagine lacking any sort of conscious experience. Take for instance your reaction upon stubbing your toe: nerve conduits light up, c-fibers fire, a certain behavioral cluster—swearing, jumping up and down on one leg, screaming—is activated, but there doesn’t seem to be any reason for that to be accompanied by any sort of experience of pain at all. Like most physical chains of causality, it could just as well happen entirely in the dark, it would seem. Thus, the physical facts of the matter don’t seem to necessitate any experiential facts—they appear to be extra, ‘further’ facts.

But then, in what sense do you say you ‘dismiss’ qualia when you seem to agree they’re what make up our mental phenomena?

I don’t see how these go together. Unconscious thoughts just are those kinds of information processing that we don’t have any experience of—that aren’t associated with any qualia. They’re distinct from those with qualia exactly in not being conscious.