At what point does one have the Brain of Theseus?
Just my WAG, but I think the ability to experience data versus process data via a long algorithm would fundamentally change the type of answers produced. As the article discussed, we don’t know what process creates consciousness. Our best guess is that if we simulate neurons and increase their number and frequency, “consciousness” will naturally arise. It might or it might not. We don’t really know.
Have you read a story called “Learning to Be Me” by Greg Egan? It makes almost exactly this argument.
Two unrelated comments.
(1) Certain behavior is thought to correlate with “consciousness.” (A simple example is curiosity when seeing oneself in a mirror.) When a machine is similar enough to us to exhibit (without special programming) such “conscious” behavior, then it will be conscious.
(2) It is an exaggeration to say we already understand the behavior of neurons and can duplicate them. This was brought home to me by reading, in Physics of Life Reviews, “Consciousness in the universe: A review of the ‘Orch OR’ theory” by Stuart Hameroff and Roger Penrose. (Penrose is widely ridiculed for his claims; I wonder if those ridiculing him have read papers like this one.) Several organic processes, including photosynthesis, are now known to rely on quantum effects rather than classical chemistry. (Sorry: no link to the Hameroff-Penrose paper — when I find an interesting paper online I don’t bookmark it — I download it! A quick Google didn’t find it. I’d do a more careful search if I had evidence of significant clicking on my links.)
Define “quantum effects rather than classical chemistry”. Ultimately, everything is quantum effects. But despite what Penrose seems to think, that doesn’t mean that it’s impossible to explain or describe them.
And I haven’t read all of Penrose’s claims, but the ones I have read have been sufficiently nonsensical that I see no need to review the rest of them.
I should have known a tautologist would show up! 
If your goal is to learn, you might start with Al-Khalili and McFadden’s book — it sounds like you haven’t read it. If your goal is to goad me about my terse phrasing, I’m not playing.
A simple indication that a chemical reaction may depend on quantum tunneling rather than classical chemistry is temperature dependence. Classical models often predict reactions slowing at lower temperatures, but this is not observed in some organic processes. For a more complete and interesting discussion, read the Al-Khalili-McFadden book: I’m sure you can post a better summary than I ever could.
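To make that signature concrete, here is a toy calculation. It is purely illustrative: the prefactor, activation energy, and constant tunneling rate below are numbers I made up, not anything from the book. The point is just that a classical Arrhenius rate collapses as the temperature falls, while a temperature-independent tunneling contribution sets a floor the total rate never drops below:

```python
# Sketch: classical (thermally activated) Arrhenius rate vs. a
# temperature-independent tunneling contribution. All numbers are
# illustrative assumptions, not values from any real reaction.
import math

K_B = 8.617333e-5  # Boltzmann constant, eV/K

def arrhenius(T, A=1e12, Ea=0.5):
    """Classical over-the-barrier rate in s^-1 (Ea in eV)."""
    return A * math.exp(-Ea / (K_B * T))

TUNNEL_RATE = 1e3  # assumed constant tunneling rate, s^-1

for T in (300, 150, 75, 30):
    k_classical = arrhenius(T)
    k_total = k_classical + TUNNEL_RATE
    print(f"T={T:4d} K  classical={k_classical:9.3e}  total={k_total:9.3e}")
```

With these made-up numbers, the classical rate is already negligible by 150 K, while the total rate flattens out at the tunneling floor, which is the kind of temperature independence being described.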
Simple: Classical Chemistry is all the stuff Pauling used as the basis to figure out The Nature of the Chemical Bond. Quantum-based Chemistry is all the stuff that Pauling’s work explained. Or is it the other way around?
Chemical reactions that depend on atoms being non-localized.
‘Known to be’ is, I think, putting it rather strongly. There is some promising evidence, but the matter is far from settled. The question of how to decide whether a system profits from quantum effects is surprisingly subtle: just because you haven’t come up with a classical mechanism for something doesn’t necessarily mean it must be quantum—you could also just not have had the right idea yet.
That’s why Penrose’s model is considered by many a leap too far: first, his justification for requiring new physics—that the human mind can ‘see’ the truth of certain undecidable statements, and that machines fail to do so—is generally thought to simply be erroneous. Second, his proposed modification of quantum mechanics in order to accommodate such ‘new physics’ simply leaps beyond anything that evidence, up to this point, could justify. Third, whether there are coherent structures in the brain taking advantage of any quantum effects is just wildly unclear at this point.
So his model is essentially wild speculation—although admittedly brilliant speculation—in order to account for an issue that practically everybody thinks doesn’t exist. If it were basically anybody except Penrose proposing this, you’d never have heard of it, because it would lie ignored in some dark academic basement.
The Al-Khalili-McFadden book is very convincing. Have I been gulled again? :eek:
Here are some of the arguments the book makes for quantum tunneling effects:
- The substitution of deuterium for hydrogen has a big effect on certain biochemical reactions. The chemical behavior of ²H is very similar to ¹H, but a deuterium nucleus has much more difficulty “tunneling” than a proton does. (See the sketch after this list.)
- The extremely high efficiency of the initial phase of photosynthesis doesn’t conform to classical models: most of the excitons should dissipate thermally.
- Thermal energy is necessary to many chemical reactions, yet some of the quantum suspects proceed even at very low temperatures.
- Luca Turin speculated that the rotten-egg smell of several sulphur compounds is due to the 76-terahertz vibration of an H-S bond and an effect called inelastic electron tunneling. He theorized that borane’s H-B bond, with a 78-terahertz vibration, might smell similar. But borane is exotic and hard to come by. When borane’s smell was finally determined? Rotten egg.
- The magnetic field detected by birds et al. is so minuscule that its detection seems almost magical. Some attribute the sensitivity to an entanglement effect I won’t try to summarize.
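On the first point, a rough WKB estimate shows why doubling the nuclear mass matters so much. This is strictly a back-of-the-envelope sketch; the barrier height and width are illustrative values I picked, not numbers from the book:

```python
# Back-of-the-envelope WKB estimate of why deuterium tunnels far less
# readily than hydrogen. The barrier height (0.5 eV) and width (0.5 Å)
# are assumed, illustrative values.
import math

HBAR = 1.054571817e-34  # J*s
EV = 1.602176634e-19    # J per eV
M_H = 1.6735575e-27     # kg, hydrogen (~proton) mass
M_D = 3.3443587e-27     # kg, deuterium nucleus mass

def transmission(mass, barrier_ev=0.5, width_m=0.5e-10):
    """WKB tunneling probability through a rectangular barrier."""
    kappa = math.sqrt(2 * mass * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

t_h, t_d = transmission(M_H), transmission(M_D)
print(f"T(H) = {t_h:.3e}  T(D) = {t_d:.3e}  ratio = {t_h / t_d:.0f}")
```

Because the tunneling exponent scales with the square root of the mass, even these modest made-up parameters give the proton a several-hundred-fold advantage over the deuteron, which is the flavor of isotope effect the book points to.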
Even if there are quantum effects instead of classical chemistry (whatever that is supposed to mean), what reason is there to believe this functionality cannot be simulated by a machine? What magical properties would there be that can’t be simulated, and why would this be a necessary component of consciousness, self-awareness, intelligence, or any other aspect of the human mind we are discussing?
As I said, there’s some promising evidence, but to consider the matter settled as of yet would be very premature. In particular, look at things like your items two and five: essentially, these are statements that certain processes are very hard to explain using classical mechanics. But for both of those, there’s also no precise quantum mechanism known (to the best of my knowledge, anyway), although there are some proposals, and furthermore, just that nobody’s come up with a classical model doesn’t mean there isn’t one. I don’t know if you’ve followed the back-and-forth on D-Wave’s alleged quantum computer, but there, the pattern for a couple of years has been that they produce some paper claiming to outperform classical algorithms, and then some other group produces a classical algorithm equaling their performance.
However, I think there’s much more of a case for significant quantum advantages in biology than in the D-Wave machine; it’s just that actually verifying such a claim is very hard, and as of now, the science isn’t in yet—in particular, I would like to see some unambiguous signature of quantumness—coherence, the witnessing of entanglement, whatever—in those actual biological systems as they are in the animal (not, for instance, cooled down far below zero), together with a precise mechanism regarding how that quantumness is exploited, before I’m fully convinced.
Well, Penrose’s model would entail non-computational effects; thus (although I have to say I don’t know if Penrose makes this point), if our capacity for explanation, for model-building and so on is essentially algorithmic in nature, then we should not be surprised to find features about our conscious experience we can’t explain—an ‘explanatory gap’, so to speak. Then things like the famous ‘Hard Problem’ of phenomenal experience would not require us to revisit our commitment to naturalism, and we could do without certain exotica, such as dual aspects, panpsychism, or even outright dualism. (I believe, however, that the same benefit—if you think it is one; the position has been derisively labeled ‘new mysterianism’—can also be had much more cheaply, without revisions to the fundamental quantum formalism that there seems to be little call for.)
So in the end, quantum (or post-quantum) effects aren’t supposed to supply the ‘magic fairy dust’ of consciousness, but, at best, help explain why the whole thing seems so magical in the first place, while essentially being a perfectly ordinary part of the natural world.
This is an intriguing answer. Not having read Penrose or the book mentioned above, it doesn’t seem to me that quantum effects are necessary to explain the weirdness of subjectivity, nor are they sufficient to do so. But I do think that’s the right approach: trying to explain the *appearance* of weirdness in what is certainly a mundane physical process.
For anyone who doesn’t see weirdness (at least epistemic weirdness) in subjectivity, I don’t know what to say. Either you’re missing something obvious, or the rest of us are.
Yes, that’s the classical functionalist example.
But this is a bit of a misrepresentation.
Let’s talk neural networks for a second here. We “train” the network by having a “mini-brain” or some independent observer tell the network “you’re doing good, keep doing good!” or “you’re doing bad, don’t do that anymore!”
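Here is a minimal sketch of that feedback loop: a single artificial neuron learning the AND function. The task, data, and learning rate are arbitrary toy choices on my part; the point is that the “independent observer” is just an error signal computed against known right answers:

```python
# A tiny "neural network" (one neuron) learning AND from external
# feedback. The training labels play the role of the independent
# observer saying "good" or "bad" after each guess. Illustrative only.
import random

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [random.uniform(-1, 1) for _ in range(2)]
bias = 0.0
LEARNING_RATE = 0.1

for epoch in range(50):
    for (x1, x2), target in data:
        guess = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        error = target - guess  # the "observer's" verdict: 0 means "good"
        w[0] += LEARNING_RATE * error * x1
        w[1] += LEARNING_RATE * error * x2
        bias += LEARNING_RATE * error

# After training, the neuron reproduces AND on all four inputs.
print([(x, 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0)
       for x, _ in data])
```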
The actions of a neural network have some parallels in how the human brain works… with one major exception: the independent observer, the idea that someone is there telling your brain when it programmed itself correctly and when it didn’t. In some philosophical arguments you could term this “consciousness,” but others aren’t quite convinced that’s all there is.
See, the claim that the neural network within your own brain that was trained to hit the baseball and process all the incoming sensory information is the totality of consciousness is just one example of how far we’ve fallen, scientifically… as a society.
Of course, that SAME evidence could be destroyed in 30 seconds by a scientifically aware nation…
Instead, because we’re not scientifically aware, we see “oh look, an fMRI shows how parts of the brain light up and it precedes other parts… I’m too stupid to know what that really means, but I’ll blindly trust the researchers.”
I speak of neural networks again because the REASON connectionists REALLY created them was to better understand the human mind. We had decades of research leading up to these models, and more research linking how these models function with how humans actually learn.
So a study that so obviously depends on knowing these models of the mind exist, and then exploits the inevitable outcome that once you have been trained in an action you do not ‘process’ the individual aspects of said action… to conclude that consciousness is an after-story…
Yeah, I’ve read so much nonsense like that, where experiments are shaped with the intention of finding a known conclusion, without informing the reader what prior research shaped the experiment to reach said conclusion.
:smack: :eek:
Just to push the challenge further, take a look at any of the slime mold studies and the whole “chemical intelligence” fiasco. Because it defies all the claims of humans being, like, super cool with a hundred thousand genes (oh, that was disproved) and with the biggest brains (also disproved) and somehow superior to all other forms of life… the idea that an organism completely lacking these “human characteristics” could possibly be intelligent has sent numerous uproars through the scientific community.
This is something that actually needs more discussion than people give it.
The so-called “Turing Test” really is an argument that if something can match the function of something entirely different, then that something IS that other something. I do think you should re-read the opening of Turing’s paper and learn exactly what “the imitation game” really is. After all, it’s highly applicable today, if you read the actual text.
Still, the implication is that you UNDERSTAND what the function of consciousness is.
In the “Turing test” the notion of ‘conversation’ is thought of as an aspect of consciousness… but this again brings us to issues.
A most simple program could be written to reverse the statements of what someone is saying in the form of a question, or an invitation to say more.
“I had a horrible day”
“Tell me about your horrible day.”
“I got a call from the doctor’s”
“What did the doctor say?”
“I have to go in for more testing, but the results were inconclusive”
“When are you going back in?”
Making a program that spits these replies back out is relatively easy…
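Something in the spirit of the old ELIZA program, in fact. Here is a minimal sketch; the keyword patterns and canned responses are my own toy choices, not any standard chatbot’s rules:

```python
# A toy ELIZA-style reflector: match a keyword pattern, mirror part of
# the statement back as a question. Patterns are arbitrary examples.
import re

RULES = [
    (re.compile(r"i had (a|an) (.+)", re.I), "Tell me about your {1}."),
    (re.compile(r"i got (.+)", re.I), "What came of that?"),
    (re.compile(r"i have to (.+)", re.I), "When are you going to {0}?"),
]

def reply(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # default invitation to say more

for line in ["I had a horrible day",
             "I got a call from the doctor's",
             "I have to go in for more testing"]:
    print(f"> {line}\n{reply(line)}")
```

A few dozen such rules, plus pronoun swapping, gets you surprisingly far, which is exactly the point: the replies require no understanding at all.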
But there are times when you might find yourself interacting with a human who gives the exact same replies!
People invoke the fallacy of “moving the goalposts” to avoid acknowledging that we’re utterly inconsistent with our definitions to begin with… but what it really all goes back to is: what IS the function of consciousness?
Here’s my favorite Turing Test question (provided the computer in question hasn’t simply been fed a canned answer to this exact question):
“If you were to administer the Turing Test to something else, to determine if it were intelligent or not, what questions might you ask it?”
Actually, Turing’s original paper had nothing to do with consciousness, but with intelligence. And there’s a reasonable argument to be made that intelligence is, in fact, a functional property—things that behave intelligently are intelligent; there’s nothing but behaving intelligently to being intelligent.
Indeed, Turing, in his original paper, explicitly brackets the question of consciousness, claiming that he does not think that the mysteries of consciousness need to be solved in order to solve the question of machine intelligence (or rather, of whether a machine can successfully imitate intelligent human behavior).
Now, it’s true that there’s a position, known as functionalism, that equates consciousness with a functional property; and thus, that holds that anything that instantiates the right sort of function is conscious. But the Turing test stands quite apart from whether that sort of idea is right.
A few misconceptions in the OP, I think.
Whether you can make a machine that is conscious is not really in dispute (outside religious circles anyway) as the brain is essentially a machine of some kind. Few would dispute you could change brain substrate and still have a consciousness.
Whether you can make a computer that is conscious is the thing that some philosophers like Searle debate, and is commonly known as “Strong AI”.
Neither of these things would typically be called the problem of consciousness. That is more about how an inner subjective experience can exist at all (to briefly state it, but there’s a lot more to explain than that).
How can a heap of atoms feel pain?
Supervenience
Low Order supervenience = conscious (i.e. probably a large number of Earth species and maybe close to our current state of AI…I dunno, I haven’t kept up for a while)
High Order supervenience = self conscious (i.e. a smaller subset of Earth species and more distantly in the future High AI)
I’m by no means an expert in this field, but that’s how I understand things.
Actually, it was something of a rhetorical question, as how matter feels pain is indisputably not known at this time.
I’m (loosely) familiar with the concept of supervenience, but in this case it seems it’s just restating the problem. Yes, pain is a phenomenon that somehow comes about from the neural structure of our brains. How?