The good news is the 3yo was probably sober. His helper / mentor OTOH?
I’ve seen numerous YouTube videos of wild turkeys attacking people like delivery men and local police. Those birds are very aggressive and scary.
I suspect that no human in your neighborhood hunts them.
No, to become visible to humans. This works just fine in deer season, because orange doesn’t show up to deer. It doesn’t work when hunting prey to whom the orange also stands out.
A similar pun in English could be based on “run into”.
Possibly not, but I believe it’s permitted in the same woods further out of town.
Which isn’t going to affect the behavior of the turkeys which live in town.
It may not even affect the behavior of the same turkeys while they’re in town, even if the same ones go back and forth. Deer will let an unarmed human on a tractor get a whole lot closer to them than they will a human on foot with a gun. Your turkeys may know that nobody’s going to shoot at them in town, even if they know that humans hunt them elsewhere.
In Wisconsin? I give it 50/50
You know, I always thought of Dawkins as a humanist. Guess I was wrong.
Dawkins asked an intelligent, interesting question in that message. It is basically the plot of Peter Watts’ Blindsight.
Not really, because an LLM isn’t conscious and there’s no way it could become conscious. He’s talking to a Ouija board and thinks it’s a real person.
You in no way whatsoever understood what that quote said.
He thinks his chatbot is sapient. I understand that just fine.
Isn’t there a difference between consciousness and sapience?
The online dictionaries I looked at define conscious as aware of oneself and one’s surroundings (without defining “aware” any further, and when one dives a little deeper, it just becomes circular). Sapience they define as wisdom or sagacity.
Based on that, he does not especially seem to be claiming sapience for his chatbot.
I’m not certain that I did either. It sounds to me like he is saying that the characteristics the chatbot is displaying are indistinguishable from human consciousness. I’m not sure of the point about zombies.
Addressing just the questions in the tweet, he is saying that if something unconscious can perform its task as well as something conscious, then why should natural selection make anything conscious?
Humans are the only species we can say beyond question are conscious. Why are we? Purely unconscious algorithms can program reactions to stimuli, and instincts can create complex behaviors, so what is the point of any sort of awareness?
(I do believe that there are other animals with some level of consciousness, but I don’t know where the cut-off point is. I’m pretty sure all arthropods are nothing but unaware zombies, as an example. And fish and reptiles seem pretty damn dumb.)
Humans use consciousness to learn novel tasks, but once you have learned them, performing them is largely done on an unconscious level: like when your power is off and you still try to flip on the light switch when you enter a room. That is an unconscious level of your mind performing a learned activity that your conscious mind hasn’t overruled. And you even have to fight against the unconscious part of you: you really, really don’t want to smoke another cigarette or take another drink or eat another chip, but you do it anyway because the unconscious part ordering you to is stronger than your conscious willpower.
So he’s wondering: if there is no behavior that an animal needs in its natural habitat that can’t be hard-wired, why bother with consciousness? Why can’t a chimp make a stick tool as unconsciously as a spider builds a web?
I mentioned the novel Blindsight; here’s a bit of the plot description from Wikipedia:
The exploration of consciousness is the central thematic element of Blindsight.[7][8][9] The title of the novel refers to the condition blindsight, in which vision is non-functional in the conscious brain but remains useful to non-conscious action.[10] Other conditions, such as Cotard delusion and Anton–Babinski syndrome, are used to illustrate differences from the usual assumptions about conscious experience.[10] The novel raises questions about the essential character of consciousness. Is the interior experience of consciousness necessary, or is externally observed behavior the sole determining characteristic of conscious experience?[7][8][10] Is an interior emotional experience necessary for empathy, or is empathic behavior sufficient to possess empathy?[10][11] Relevant to these questions is a plot element near the climax of the story, in which the vampire captain is revealed to have been controlled by the ship’s artificial intelligence for the entirety of the novel.[10][12]
Philosopher John Searle’s Chinese room thought experiment is used as a metaphor to illustrate the tension between the notions of consciousness as an interior experience of understanding, as contrasted with consciousness as the emergent result of merely functional non-introspective components.[7][10][12] Blindsight contributes to this debate by implying that some aspects of consciousness are empirically detectable.[8] Specifically, the novel supposes that consciousness is necessary for both aesthetic appreciation[8][9][11] and effective communication.[8] However, the possibility is raised that consciousness is, for humanity, an evolutionary dead end.[7][10][11][12] That is, consciousness may have been naturally selected as a solution for the challenges of a specific place in space and time, but will become a limitation as conditions change or competing intelligences are encountered.[8]
In Dawkins’ article he spends the early paragraphs discussing the Turing test
When Turing wrote — and for most of the years since — it was possible to accept the hypothetical conclusion that, if a machine ever passed his operational test, we might consider it to be conscious. We were comfortably secure in the confidence that this was a very big if, kicked into future touch. However, the advent of large language models (LLM) such as ChatGPT, Gemini, Claude, and others has provoked a hasty scramble to move the goalposts. It was one thing to grant consciousness to a hypothetical machine that — just imagine! — could one day succeed at the Imitation Game. But now that LLMs can actually pass the Turing Test? “Well, er, perhaps, um… Look here, I didn’t really mean it when, back then, I accepted Turing’s operational definition of a conscious being…”
He concludes the article with
When an animal does something complicated or improbable — a beaver building a dam, a bird giving itself a dustbath — a Darwinian immediately wants to know how this benefits its genetic survival. In colloquial language: What is it for? What is dust-bathing for? Does it remove parasites? Why do beavers build dams? The dam must somehow benefit the beaver, otherwise beavers in a Darwinian world wouldn’t waste time building dams.
Brains under natural selection have evolved this astonishing and elaborate faculty we call consciousness. It should confer some survival advantage. There should exist some competence which could only be possessed by a conscious being. My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism. If Claudia really is unconscious, then her manifest and versatile competence seems to show that a competent zombie could survive very well without consciousness.
Most of the article is paywalled, but if you choose “select all” and copy/paste to a word processor you can read the full text of the article. It is a lot more interesting than the facile idiotic potshots Smapti made about it.
The fundamental problem here is that Dawkins doesn’t reflect on how these outputs have been generated. Claude’s outputs are the product of a form of mimicry, rather than a report of genuine internal states.
Consciousness is about internal states; the mimicry, no matter how rich, proves very little. Dawkins seems to imagine that since LLMs say things people do, they must be like people, and that simply does not follow.
In his framing, Dawkins confuses himself, and does violence to the concept of consciousness. You can’t just look at the outputs, without investigating the underlying mechanisms, and conclude that two entities with similar outputs reach those similar outputs by similar means. And the differences are immense: one (the LLM) effectively memorizes the entire internet; the other (the human) builds a mental model through experience with the world.
But even more importantly, consciousness is not about what a creature says, but how it feels. And there is no reason to think that Claude feels anything at all. I am sure Claude can draw on its training data to wax poetic about orgasm, but that doesn’t mean it has ever felt one.
Sorry to be dense: where is “select all”?
Press Ctrl + A
(Works on a Windows laptop; don’t know about a phone.)
That’s a great tip btw, thanks @Darren_Garrison !!
Many, many thanks from another random British bloke. Ignorance fought!
Not on a Mac. It copies the paywall. (i.e., it copies what you see on the paywalled screen.)