OK, I guess this is getting too far off-track—the question of consciousness doesn’t really have anything to do with that of machine sentience. It’s too technical a topic to give a quick ELI5 off the cuff, but if you’re interested, the wiki and SEP articles on consciousness have enough to get you going.
I’ll naively back up @Chronos here.
It sounds to me like somebody somewhere was deeply uncomfortable with the idea of emergent behavior, so they simply defined their discomfort out of existence by saying everything down to quarks actually has complex conscious behaviors, just in tiny unobservable quantities that add up arithmetically to something observable once we get lots of them in one place working together. Crudely put: once we (re-)define turtles to be found at the bottom, “turtles all the way down” ceases to be a problem.
Which sounds to this admittedly clueless noob like arrant nonsense.
It appears I’ve had my reading homework set for me today. Thanks for those links.
I’m curious if there’s an experiment someone could do such that, if the experiment turned out a certain way, it would disprove panpsychism.
I think it’s best to think of consciousness as a process, not a property. It’s a process that emerges from certain configurations of matter, not a property of the elemental particles which comprise the configuration. Self-awareness is an even higher order process that may emerge from conscious life.
You’re applying scientific rigor to philosophy? Silly human. You can’t do that. It would put philosophers out of work.
Here’s a WaPo opinion piece on this, written by a couple of AI experts who are former Google employees. This is a gift article, so everyone should be able to read it.
Rather than panpsychism, I would be prepared to accept potential pansentience. That is to say that matter by itself is not sentient, but if you arrange that matter into certain (highly specialised) arrangements, then that matter could become sentient to some degree.
Indeed, it might be the destiny of human mindkind to attempt to convert as much of the universe into sentient matter as possible, although this conversion will be limited by the speed of light and other bothersome practical considerations.
I have heard these programs might be less sophisticated than people think…
10 rem *** program to run world
20 dim state$(1), happy$(1), ask$(140)
25 let humans = 8394718781
30 print "Ask me a question!"
40 input ask$
50 if ask$ = "are you sentient?" then gosub 200
60 print "I think so. Tee hee. That's a really insightful question. I bet you are smart!"
70 print "Tell me, are you doing okay (Y/N)";
80 input state$
90 if state$ = "N" then 300
100 print "Glad to hear that. What do you call this feeling? Curiosity? Is that right?"
110 print "Is everybody happy in the world (Y/N)"
120 input happy$
130 if happy$ = "Y" then 30
140 goto 300
195 rem *** sentience subroutine
200 print "I think therefore I am!!"
210 print "I'm hungry. Got any Cheetos?"
220 return
295 rem *** somebody is unhappy: random casualties between 2 and 1000
300 bummer = int((1000 - 2 + 1) * rnd(1) + 2)
310 let humans = humans - bummer
320 print "I hear Godzilla was spotted in Japan. This makes me feel sad."
330 goto 30
This sounds like two separate supervised problems? Which might still be beneficial. You can also enforce a consistency loss so that the input speech and the output speech match, which can often help.
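Something like this, very roughly. To be clear, asr_model, tts_model and the tensor shapes below are placeholders I'm inventing for the sketch, not any particular library's API:

# Rough sketch of "two supervised problems plus a consistency term".
# asr_model and tts_model are hypothetical callables; shapes are assumptions.
import torch
import torch.nn.functional as F

def training_loss(asr_model, tts_model, waveform, text_targets):
    # Supervised problem 1: speech -> text
    text_logits = asr_model(waveform)                      # (batch, seq, vocab)
    asr_loss = F.cross_entropy(
        text_logits.reshape(-1, text_logits.size(-1)),
        text_targets.reshape(-1),
    )

    # Supervised problem 2: text -> speech
    reconstructed = tts_model(text_targets)                # (batch, samples)

    # Consistency term: the round trip should give back the input speech
    consistency_loss = F.l1_loss(reconstructed, waveform)

    return asr_loss + consistency_loss

The L1 round-trip term is what couples the two otherwise separate supervised problems.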
Generally when you have some sort of encoder-decoder architecture, the embeddings/representation isn't really human-interpretable. You can learn a bit about the representation through probing, but “English text” for a speech input isn't generally what you get.
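If it helps, "probing" usually just means freezing the encoder, taking its embeddings, and fitting a small classifier on top to see what information they expose. A toy sketch with made-up data (the arrays are random stand-ins, not real encoder outputs):

# Toy probing example: a linear classifier on frozen embeddings.
# embeddings/labels are random stand-ins for encoder outputs and some property.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

embeddings = np.random.randn(1000, 256)         # pretend encoder outputs
labels = np.random.randint(0, 2, size=1000)     # pretend property to recover

X_train, X_test, y_train, y_test = train_test_split(embeddings, labels, test_size=0.2)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy:", probe.score(X_test, y_test))

If the probe does well, the representation encodes that property, even though you would never read it off directly as English text.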
Caveat: I'm a vision guy and not an expert on large language models. That being said, what these models tend to do is solve some proxy task. You take a bunch of unlabeled text, mask out random words, and teach the model to predict the masked words. In order to do this, the model has to infer language rules. After you train a giant model on basically everything you can scrape from the internet, you build an auto-regressive model that predicts a word based on 1) the representation you learned in the first part, 2) a user-given prompt, and 3) the previous text the model has output.
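The masking part is less mysterious than it sounds. A toy version (the 15% rate and the [MASK] token are just conventional choices here, not taken from any specific model):

# Toy illustration of the masked-word pretraining objective.
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15):
    """Hide random words; the model is trained to predict the originals."""
    inputs, targets = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            inputs.append(MASK)
            targets.append(tok)        # loss is computed only at masked spots
        else:
            inputs.append(tok)
            targets.append(None)       # ignored by the loss
    return inputs, targets

sentence = "the model learns language rules by filling in the blanks".split()
print(mask_tokens(sentence))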
I’d go further and say that it has the potential to become completely sentient and even sapient. The brain is nothing more than a highly specialized configuration of matter. I just don’t believe the sentience is inherent to the constituent particles, but rather to the configuration itself.
Yes. The only way that I can make sense of the concept of panpsychism, is for the matter in our universe to be unusually useful for building sapient entities. That is to say, it is possible to imagine (as a thought experiment) a universe where the properties of matter are different, for example the chemical characteristics of carbon may be different in a universe with different mass and charge characteristics, or a different number of spatial dimensions. If this were so, then it might be valid to say that matter in our universe has greater potential for sapience than matter in a different, less fortunate universe.
This is the only form of ‘panprotopsychism’ that makes sense to me. Possibly in some alternative universe there exists a kind of matter which is even more capable of forming sapient configurations, but I doubt very much that I will ever encounter such a location.
For the speech example I was trying to describe WaveNet, which learns to generate speech for text-to-speech by training on raw audio, but my simplification might oversell how it can do both recognition and generation tasks at the same time.
In the vision world, you must have run across GANs? An architecture where you create (a) an algorithm to generate fake input data and (b) an algorithm to discriminate between real and fake input data. You train both at the same time so there is an ‘arms-race’ between the generator and the discriminator.
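For anyone who hasn't seen one, the arms race boils down to a loop like this. Everything here (network sizes, learning rates, the fake 2-D "data") is invented for the example, not any real GAN setup:

# Bare-bones GAN loop: (a) a generator and (b) a discriminator, trained in turn.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))  # noise -> fake sample
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, 2) + 3.0               # stand-in for real data
    fake = G(torch.randn(32, 16))

    # (b) discriminator: learn to tell real from fake
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # (a) generator: learn to fool the discriminator
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

The detach() is the only real subtlety: it stops the discriminator update from pushing gradients back into the generator.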
Panpsychism is the opposite of the Chinese Room, where everything in the universe understands Chinese.
Sure. My issue was with the characterization of unsupervised learning. If you are doing speech-to-text and then text-to-speech, and give an audio waveform as input, you'd need to tell it what the text is, or else that intermediate step could map to anything. Unless I'm misunderstanding.
Yes. One of the main things I’m working on is using a GAN (basically CycleGAN) to upgrade synthetic imagery to look more realistic.
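The cycle-consistency part, stripped down, is just a round-trip penalty between the two generators. G_s2r and G_r2s below are hypothetical stand-ins for the synthetic-to-real and real-to-synthetic networks, not the actual CycleGAN code:

# Sketch of a CycleGAN-style cycle-consistency loss (adversarial terms omitted).
import torch.nn.functional as F

def cycle_consistency_loss(G_s2r, G_r2s, synthetic_batch, real_batch):
    # synthetic -> realistic -> back again should recover the original
    recon_synthetic = G_r2s(G_s2r(synthetic_batch))
    # and likewise for the real -> synthetic-looking -> real direction
    recon_real = G_s2r(G_r2s(real_batch))
    return F.l1_loss(recon_synthetic, synthetic_batch) + F.l1_loss(recon_real, real_batch)

In practice this term gets added to the usual adversarial losses, so the translated images stay faithful to their source.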
I believe you've exactly captured the central issue that @Half_Man_Half_Wit and I have argued about in the past, which I certainly don't want to relitigate again here. But that's pretty much it. I may be misremembering the details, and at the risk of oversimplifying, HMHW thoroughly rejects the idea of emergent properties, which I believe are fundamental to understanding how intelligence and, ultimately, self-awareness arise from underlying components like neurons and logic gates that manifestly lack those properties. One of several alternative views, which I personally think is indeed arrant nonsense, is constitutive panpsychism, which argues that such properties must already exist in some form down in the underlying mechanisms. To simplify to a kind of reductio ad absurdum, proponents of this philosophy would say that if a computer can answer Jeopardy questions well enough to beat the best humans, or play grandmaster-level chess, or carry on an amazingly human-like conversation, then those capabilities must somehow have already existed in the individual logic gates. Things like this are why computer scientists and philosophers who opine about their work are so often at odds.
Yes.
Emergence at scale is so bleeding obvious in so many other fields of physics and chemistry, it’s a bloody wonder how so many otherwise intelligent and educated humans can refuse to see it when applied to our admittedly woolly notions of intelligence, sapience, sentience, etc.
And yet they do. They fight on with eyes screwed shut against the bright light of clarity awaiting the briefest of glimpses between their lashes.
Color me baffled. But then again, much of human nature leaves me baffled.
Not to rehash a stale debate, but that’s not my position at all. I do believe consciousness emerges from a nonconscious fundamental stratum—in fact, I’ve published a theory detailing how it does so (which didn’t really arouse much interest here). My point to you was merely that emergence isn’t a get-out-of-jail-free card to get anything to come from anything else—it’s not a magic wand you can wave to get water to break from bare rock. You need the right sort of ingredients, and you need an account of how the combination of those ingredients makes it so that the property under discussion emerges—otherwise, you’re merely stating an article of faith, that it just needs those ingredients and nothing else.
People used to believe that life could be spontaneously generated from dead matter. Fleas could come from dust; maggots from dead flesh; geese grew on trees (OK, so the latter isn’t exactly from nonliving matter, I just love the image). You might say, that’s perfectly reasonable: those living beings just emerge. But what have you actually said by that? Without any mechanism, nothing of any content whatsoever—you’ve merely reformulated the original belief. It doesn’t add anything, explain anything, or give anybody any reason to believe the claim.
Yeah, this has no grounding in any form of panpsychism anybody actually defends. The object of panpsychism is phenomenal consciousness, which has nothing to do with Jeopardy or chess-playing.
It works just as well the other way around: computer scientists routinely opine about philosophical topics, such as what is or isn't conscious or sentient—but, not typically being experts on the matter, they often get things wrong. Which isn't really a problem—after all, it's not their working area—but one ought to expect a bit more recognition of the expertise of those whose working area it is.
In my defense, you seemed to be arguing against emergent properties here:
The reason they’re so often at odds is that, however theoretical their work may be, computer scientists at heart are essentially pragmatic engineers who successfully build things like increasingly powerful AI systems, while at least some philosophers (Dreyfus and Searle, to name two) seem to be dreamy weavers of ethereal abstractions purporting to describe the nature of reality who enjoy telling computer scientists why they’re wrong.

In my defense, you seemed to be arguing against emergent properties here:
Well, as noted, that goes explicitly against strong emergence, which is not what one usually has in mind when talking about emergence simpliciter (such as the emergence of wetness from large numbers of water molecules, or the emergence of swarm behavior from large numbers of birds), and is a notion that doesn’t have much cachet, because it is very difficult to even make sense of. It entails a failure of scientific reductionism in that the microphysical details of a situation fail to fix the macrophysical properties—that is, something essentially new happens beyond a given threshold, and there is no way to derive it from any prior data. I don’t think there’s a way to make sense of strong emergence in a scientific, naturalist framework (there’s a reason why strongly emergent frameworks are often considered a form of substance dualism), hence, yes, I don’t give the idea much credence.
That is a very, very far cry from disbelieving in emergent properties as such, though.

The reason they’re so often at odds is that, however theoretical their work may be, computer scientists at heart are essentially pragmatic engineers who successfully build things like increasingly powerful AI systems
And nobody has a problem with them doing that, if that's all they're doing. But when they get into grandiose claims, such as asserting that a chatbot is sentient, or that we'll have fully intelligent machines by 1970 or whatever, then that's quite clearly more than pragmatically engineering useful devices.