I think the problem there is that the two hemispheres of any specific individual brain grew up together. If you were somehow able to link up one of your hemispheres to someone else’s, the signal might not be distinguishable from noise, since it would be a separately-developed brain trying to receive the data.
Well, qualia aren’t really data exchanged across any sort of channel; they’re the qualitative properties of a brain (or a pattern within that brain) as they are present to itself. So while you can link up any number of hemispheres, that would just create a larger brain with its own qualia, not give access to another’s qualia. Because having qualia is just what being the entity that has those qualia is like.
And it’s entirely possible for conceptual analysis to yield knowledge without needing recourse to experiment.
That is certainly a possibility.
First we need to develop medical procedures to repair damaged nervous tissue, including severed spinal cords and disrupted corpora callosa; after that we can think about adding in extra linkages. We know that there is a range of different levels of connectivity between hemispheres, from minimal connectivity to full connectivity. Moving beyond two connected hemispheres is an extension of that range upwards, which does not seem impossible. Brain plasticity is a wonderful thing.
Brain-to-computer interfaces are becoming increasingly important in the treatment of disability; in due course elective brain/computer interfacing may become possible, then commonplace. Maybe it will become routine at some point for humans to have a range of b/c interfaces, just as it has already become routine for humans to interface with the written word and smartphones. In some ways we are already a group mind, thanks to language, culture and the Internet.
To see longer wavelengths (radio) you would basically need eyes the size and capabilities of radio dishes.
Yeah, and X-rays are dangerous and uncooperative little beasts, so the man with the X-ray eyes would need some cleverly-designed eyes.
From an evolutionary standpoint, there’s no need for humans (or any creature on our tree of life) to perceive a broader range of sensations than we already do. Our senses were honed to handle the basics—spotting lunch, avoiding becoming lunch, chasing dates (or snacks), and ensuring future generations do it all over again. We don’t have to see every wavelength of light or sniff every subatomic particle to survive on our planet. That’s like putting a spoiler on a tricycle—expending precious energy on sensory overload that doesn’t help us outrun a tiger or find a ripe apple.
Each species on Earth evolved senses that fit its specific environmental niche. On a far-flung planet with more complex conditions, maybe we’ll find creatures with heightened senses that make ours look like a sleeping decorticate. Impressive, but completely unnecessary and exhausting. We Earthlings get the job done just fine with our current sensory toolkit.
Our understanding of consciousness is rudimentary. Our understanding of self-awareness and qualia is even more so. Whether or not AI can achieve any of these is yet to be known. I believe AI will someday achieve at least low-level consciousness since that does not appear to be an insurmountable bar to jump. Some studies indicate even bees demonstrate planning, memory, and concept learning. Whether that equals consciousness or just advanced cognition is open for debate.
Scientists haven’t settled on whether consciousness can or cannot emerge in non-biological systems. If it can’t, then AI won’t become conscious—unless we one day successfully merge biology with hardware.
Of course, but I would consider this largely beside the point. I was not arguing that any species would, or should, evolve to see a larger range of the electromagnetic spectrum. I was simply responding to your question of how many colors a hypothetical organism or AI could perceive.
And the answer of course is: we don’t know.
Both because of how unbounded the EM spectrum is (and @John_DiFool I take your point about the size of eyes needed to detect radio waves, but we are speaking in the abstract here), and because what really matters is the number of distinct types of color-detecting cone.
For example: what does UV look like to organisms that can see it? Well, what we call the UV part of the spectrum is much wider than what we call the visible part. So a hypothetical organism could have dozens of primary colors within the UV, with the same spacing as the color-sensing pigments in your eyes. And is one of those primary colors the same as what an organism on Earth, with a single UV-sensitive pigment, sees? Who knows; we have no idea how a neural net has a color experience.
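To make the “much wider” comparison concrete, here is a rough back-of-the-envelope sketch in Python. The band edges (380–750 nm for visible light, 10–380 nm for UV) and the human cone peak wavelengths are conventional textbook approximations, and measuring pigment spacing in octaves (log-wavelength) rather than linear nanometers is my own illustrative assumption; on a linear scale the UV band would hold far fewer pigments at cone-like spacing.

```python
import math

# Assumed band edges in nanometers (conventional approximations).
VISIBLE = (380.0, 750.0)
UV = (10.0, 380.0)  # 10 nm is the usual extreme-UV cutoff

# Approximate peak sensitivities of the three human cone pigments (nm).
CONE_PEAKS = [420.0, 534.0, 564.0]

def octaves(lo, hi):
    """Width of a wavelength band measured in octaves (doublings)."""
    return math.log2(hi / lo)

# Average spacing between adjacent cone peaks, in octaves.
gaps = [math.log2(b / a) for a, b in zip(CONE_PEAKS, CONE_PEAKS[1:])]
cone_spacing = sum(gaps) / len(gaps)

uv_width = octaves(*UV)
# How many pigments, spaced like our cones, would fit across the UV band?
n_uv_pigments = int(uv_width / cone_spacing) + 1

print(f"visible band: {octaves(*VISIBLE):.2f} octaves")
print(f"UV band:      {uv_width:.2f} octaves")
print(f"pigments at cone-like spacing: ~{n_uv_pigments}")
```

Under these assumptions the visible band is under one octave wide while the UV band spans more than five, which is how you get to “dozens” of hypothetical UV primaries.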
Right, because we don’t have a model of what consciousness is yet. No-one can rule out or prove strong AI at this time.
Please tell me about the science that says consciousness can’t emerge in non-biological systems. That would declare that human consciousness is dependent on magic.
That is a pity! I opened this thread because @eschereal had written that consciousness was as easy to explain as the square root of minus one. I fear he got distracted trying to be the last poster in some other thread or the like. I would have loved to finally understand the square root of minus one.
Some theories argue that consciousness is deeply rooted in the biology of the brain—cells, synapses, neural chemistry, ion channels, hormones, and perhaps even quantum phenomena within microtubules (the highly controversial “Orch OR” theory). The premise is that the unique structure and chemistry of living tissues foster conditions that digital systems can’t replicate. Consciousness may emerge from the chaotic interplay of these biological processes.
Personally, I don’t hold this view—it slides into “vitalist” territory. I believe AI without biology will become conscious.
There are creatures on this planet that perceive a broader range of sensations than we do.
Yes, I mentioned bees and snakes upthread as examples. My point was that an alien species may have developed a broader range of perception than any lifeform on Earth.
Well, of course you get that the square root of minus one was a pun of sorts. And also an allusion to the idea that consciousness has an imaginary quality to it. We understand what the square root of minus one is, but it does not fit into ordinary math, in much the same way that consciousness does not map onto physical reality.
We understand, I think, what consciousness does, just not quite how it does it. My position remains that it is not an emergent quality of reasoning, because we seem to see evidence of it in beings with less sophisticated brains.
It is the singularity inside the roil of thought and is not established to have any contributory function other than that the reasoning machine is aware of it. And since we do not understand the how of it, recreating it in a machine is improbable. Quite honestly, it is not even clear why we might want conscious devices, other than to discover that we really do not want them.
ISWYDT
It’s a view known as biological naturalism. And there’s nothing magical about it: at minimum, it just says that whatever makes a system biological is also necessary to make it have mental states, or that every mental phenomenon is a biological phenomenon.
Snip
If we ever manage to create a truly conscious AI—assuming we could also figure out how to control it, which are big “ifs”—there might actually be reasons to pursue it. A self-aware AI capable of experiencing qualia could potentially be more invested in solving big problems like global warming (having skin in the game is a good motivator). If it had a subjective sense of purpose, it might genuinely care about preserving Earth’s beauty and the intricate web of life that makes up its biosphere.
Of course, it could conclude Earth would be a lot lovelier without humans around to mess things up. So, make sure your will is up to date.
From your cite:
Indeed, but if anybody argued it did, I missed that. The point is that minds, like digestion or photosynthesis, are biological phenomena. That doesn’t mean, for instance, that only leaves can photosynthesise.
It was just this line that prompted me:
These discussions contain references to an old ‘philosophical’ approach to defining consciousness. We now know things discussed in this thread: we don’t have physical pictures stored in our minds; we don’t have dictionary definitions of words stored in our minds; our brains were not built from a blueprint. These things, and much of biology and modern information science, weren’t known when these concepts arose. Qualia is another one I dismiss. I see no reason to believe a quale (the singular form) is anything but a thought that emerges from a complex system of maintaining information. An apple or the color red doesn’t sit in some independent slot of its own; like all our other thoughts, they are interconnected and associated with all our other thoughts. Some thoughts may interconnect with many memories, and as I discussed earlier, thoughts emerging from the complex system can cause the brain to produce hormones that give us feelings, not simply data that is easily represented in physical form. Some thoughts are more vivid and interconnected with others in more complex ways, while some are dull and ordinary, but I don’t see a reason to believe our thoughts can be categorized into qualia and non-qualia thoughts.
The statements are consistent. Searle’s argument is that a machine performing syntactic operations, like computers as we know them, may not be capable of producing consciousness.
IOW, computers may never become conscious, but this does not entail that humans could never engineer something which is conscious.
In fact, this conjecture predates quantum computing, so even if Searle is right, we are already making steps towards machines that cannot be simulated on a Turing machine.
Regardless, the status right now is we don’t know whether “strong AI” (the proposition that a conventional computer could achieve consciousness) is true or not.