The Blank Slate

Good, I’m glad it fit the bill. The only additional thing I’d posit in response is that “self-awareness” is similar in kind to “awareness” (at least, as I described it earlier) – just at a different “level” (in quotes to signify that it’s not at all clear right now what comprises a “level”).

Sentient

bah. We’ve been talking past each other again. I was assuming (not unreasonably, I think) that everything we’ve discussed since post #79 was, directly or indirectly, ultimately in reference to human subjective awareness.

Look, here’s the problem, boiled down to the best of my ability:

Light enters your eyes, sound waves enter your ears, and we can (in principle) observe the resulting complex cause-and-effect cascade of neurons firing, glands secreting, muscles twitching and lips moving. And that’s it.

So where’s the awareness? As you say, the only way to find out is to ask, and even then, according to the heterophenomenologists, the best you get is an abstraction: a story about what the subject believes about their awareness. Heterophenomenology remains agnostic as to whether awareness really is how it seems to be to the subject.

But that’s not the issue here. The issue isn’t whether awareness is what it seems to be, but that there is any awareness, any “seeming” at all.

What physical purpose does awareness serve? You say that “*Not being able to physically measure something directly is irrelevant - so long as there are still physically detectable consequences…*”. Well, that’s the question: what are the physically detectable consequences of awareness?

If neurons can process sensory information in ways that result in physically-detectable behaviors without awareness, what physically detectable consequence does awareness add?

Even if awareness does have some physical effect, how could we tell, even in principle, whether that effect is different from, or even separate from, the effects we already observe, since there’s no third-person way to measure (in a given brain) a difference between neural “noise-activity” and the “unbreakably-encrypted activity” associated with awareness?

That’s probably a good idea, and I think I should have the time. (I’m waffling because a couple of work-related projects have gone vampiric on me: twice now I’ve driven a stake through their hearts, dusted off my hands and walked away, only to get an email informing me they’ve risen from the dead and are heading straight for my throat.)

Like I said, the compression-rarefaction of the air comprising an utterance “yes, I was aware of that”. (Again, that physically detectable consequence can come from all kinds of apparatus, so we have to design our experiments carefully for them to be useful.)

Different answers in heterophenomenological (HP) tests (e.g. frigid versus frightening in that example I linked to).

Ignoring third-person measurements (a la HP tests), who said there has to be one? My point is that computational encryption explains the difficulty of such measurements, and so the absence of a physical telltale distinguishing aware activity from non-aware activity is no more serious a flaw in cognitive science than not being able to measure, say, the life itself in biological science, the Uncertainty itself in quantum mechanics or the climate change itself in climate science. Hypotheses in these sciences are still falsified and verified based on things we can measure.

OK, give me a couple of days.

Exactly. So what HP test design would allow one to determine whether or not an apparatus was aware solely by a compression-rarefaction of air comprising an utterance “yes, I was aware of that”?

What? The subjects in that hypothetical HP test didn’t give different answers, they gave the same answer: “I do not remember being aware of the stimulus”. In the HP “frigid versus frightening” test, A and B are both examples of a subject reporting a lack of awareness. How does that show me what physically detectable consequence awareness adds?

If there’s no physical telltale to distinguish aware activity from non-aware activity, what is the justification for deriding panpsychism as ludicrous, especially when we know that some activity is patently aware?

Looking forward to it (we can move this over there or just start fresh, if you prefer)

Solely? Who said there was one? That’s like asking for a single definitive test for life or climate change or something.

No, the different answers I referred to were the statistical differences, over many repetitions, between the cases where “fri-” had no “cold” precursor (i.e. nothing to be aware of), the cases where it had a masked precursor (subjects say “not aware”, yet there is a statistical deviation which needs explaining), and the cases where the precursor wasn’t masked at all (“I am aware”, and the statistics differ again). The statistics show interesting differences in the answers across the three cases.
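The three-condition design above can be sketched as a toy simulation. Everything here is made up for illustration (the condition names, the completion rates and the trial count are not from the actual experiment); the point is only that the same verbal report in the first two conditions can coexist with different completion statistics, which is the physically detectable consequence being appealed to.

```python
# Toy sketch of the three masked-priming conditions discussed above.
# The completion rates are hypothetical, chosen only to illustrate the logic.
import random

random.seed(0)

# Hypothetical probability that a subject completes "fri-" as "frigid"
# in each condition; real rates would come from the HP experiment itself.
RATES = {
    "no_precursor": 0.10,        # nothing to be aware of
    "masked_precursor": 0.25,    # subject reports "not aware", yet rates shift
    "unmasked_precursor": 0.60,  # subject reports "I am aware"
}

def run_condition(p, trials=1000):
    """Count how often 'frigid' is produced over many repetitions."""
    return sum(random.random() < p for _ in range(trials))

counts = {name: run_condition(p) for name, p in RATES.items()}

# Same verbal report in the first two conditions, different statistics:
# the deviation between them is what the HP test measures.
print(counts)
```

Note the design point: no single trial distinguishes the conditions; only the aggregate statistics over many repetitions do.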

… only by ignoring HP tests, remember …

Do we? How, exactly? Understand, I’m not feigning anaesthesia here, I’m just asking you why you’re asserting so. I’d then ask whether it was just as “patent” to you that other activity was not aware. Really, the justification I present is only to parallel other sciences, in which the elan vital or the like are clearly ludicrous (as I’d hope a clever chap like yourself would agree).

Done. I don’t think it’s worth carrying this on there, however - the stuff there is rather more fundamental than the IMO unfocussed free-association here, so I’d suggest we stop this one. I leave you the last word here.

DSeid – in reference to your question about computer creativity, I just came across this article about John Koza and genetic programming in which you might be interested. Not overly detailed, and not necessarily what you had in mind, but good for a quick read.

Thanks. Interesting. I certainly agree with his belief that “revolutionary ideas don’t come at random but are ‘new combinations of fairly standard parts with which we’re already familiar’”, and the approach shows how a machine solution to a demand of intelligent behavior may be very different from that used by the human brain.
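The “new combinations of standard parts” idea is exactly what genetic programming mechanizes. Below is a toy sketch, nothing like Koza’s actual systems: the target function, the operator set, the population size and the selection scheme are all invented for illustration. It evolves an arithmetic expression for x² + x purely by recombining standard parts (+, *, x, a constant).

```python
# Minimal genetic-programming sketch (illustrative only).
# Trees are either a terminal ("x" or a float) or ((fn, symbol), left, right).
import operator
import random

random.seed(1)

OPS = [(operator.add, "+"), (operator.mul, "*")]
TERMINALS = ["x", 1.0]

def random_tree(depth=3):
    """Build a random expression from the standard parts."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    (fn, _sym), left, right = tree
    return fn(evaluate(left, x), evaluate(right, x))

def fitness(tree):
    """Sum of squared errors against the target x*x + x (lower is better)."""
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def random_subtree(t):
    while isinstance(t, tuple) and random.random() < 0.5:
        t = random.choice(t[1:])
    return t

def graft(tree, donor):
    """Replace a randomly chosen subtree of `tree` with `donor`."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return donor
    op, left, right = tree
    if random.random() < 0.5:
        return (op, graft(left, donor), right)
    return (op, left, graft(right, donor))

def crossover(a, b):
    # "New combinations of standard parts": splice a piece of b into a.
    return graft(a, random_subtree(b))

# Evolve: keep the 50 fittest, refill the population by recombining them.
pop = [random_tree() for _ in range(200)]
for _gen in range(30):
    pop.sort(key=fitness)
    survivors = pop[:50]
    pop = survivors + [crossover(random.choice(survivors), random.choice(survivors))
                       for _ in range(150)]

best = min(pop, key=fitness)
print(fitness(best))
```

Notice how little of this resembles human problem-solving: no insight, no understanding of algebra, just blind recombination and selection, yet the error drops generation by generation, which is the point about machine solutions differing from the brain’s.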

The article also touches on the different intents involved in AI research: some researchers are more interested in modelling the human brain and use computer AI as a way to explore human cognitive function; others are more interested in producing various aspects of intelligent output, which may or may not be of the same character as human output, and may or may not use human cognitive processes as a model. These intents can overlap but are quite different nonetheless. The latter approach can look to other models as well: evolutionary selection, swarm intelligence, octopuses, etc. None of these may give insight into human cognitive function, but they may give quite impressive results.

Thanks again for thinking of me.