I personally roll my eyes at most AI dystopian fears, but boy. That this technology will reach a point where it can be abused by humans is in my mind a given, but the nightmare visions of our very thoughts harvested by a controlling AI are seeming less like crazy sci fi now …
The technology is currently aimed at helping those who lack the motor control to speak, using (at this point) tiny implanted electrodes to read the intent to speak and express it for them. It is also able, to some degree, to read thoughts not intended to be shared, private “inner speech” …
There was also some discussion of how people vary on “inner speech” and what amount of thought it represents … but my thoughts did jump directly to the place @Elmer_J.Fudd’s did: the abuse potential of this by 1984ish powers that be, and quickly from there to: what if the technology becomes less invasive to implement and is perceived as just another convenience? Think “Assistant, do x, y, and z” without having to even say it out loud with other humans hearing it, but the AI hears more than we intend it to …
It seems not so unrealistic that surface electrodes could be trained to read these thoughts, and that people would not only consent but pay for it. Hell. Once it can read a thought and an intent, the ability to implant a thought and an intent, by stimulating the same pattern it reads, is not so very far-fetched.
A technology that can be abused almost certainly will be.
Said of the first guy who picked up a stone axe and swung it at another guy because he wanted his woman, his bear-skin cloak, or his really nice collection of seashells.
Even without the invasive nature of this device, learning models are being developed to evaluate user input and predict emotional responses, to optimize for engagement and ultimately manipulate people, on a very personal level, in ways that completely subvert rational thought and normal restraint, and that make the current algorithms driving social media platforms to promote inflammatory ideas and memes look crude by comparison. You may well find that, inside of a decade, your ‘smart’ devices are constantly monitoring your behaviors and physiological cues to develop a nearly perfect profile of your emotional state, to be used to predict and/or direct every important decision you make with subtle and subliminal stimuli that you cannot outthink or protect yourself from.
Probably a rule for technological application only eclipsed by the absolute certainty that it will be used for porn. Sure, fertility statues, that’s the ticket.
Actually reading my private inner speech is a leap in proof of concept to me, though.
In another dystopian surveillance tool, read about WhoFi, which can identify you based on your interference pattern in wifi and other EM background.
Researchers in Italy have developed a way to create a biometric identifier for people based on the way the human body interferes with Wi-Fi signal propagation.
The scientists claim this identifier, a pattern derived from Wi-Fi Channel State Information, can re-identify a person in other locations most of the time when a Wi-Fi signal can be measured. Observers could therefore track a person as they pass through signals sent by different Wi-Fi networks – even if they’re not carrying a phone.
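The re-identification idea in that quote can be sketched in a few lines, assuming the hard part (turning raw Wi-Fi Channel State Information into a stable per-person feature vector) is already done. Everything below is my own toy illustration, not the researchers’ actual pipeline: random vectors stand in for CSI signatures, and a simple cosine-similarity nearest-neighbor match stands in for their model.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two CSI feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def reidentify(probe, gallery):
    # Return the gallery label whose stored CSI signature best matches the probe.
    return max(gallery, key=lambda label: cosine_sim(probe, gallery[label]))

rng = np.random.default_rng(0)

# Hypothetical "signatures": one 64-dim CSI amplitude vector per person,
# captured on some network where their identity was known.
gallery = {name: rng.normal(size=64) for name in ("alice", "bob", "carol")}

# A probe measured on a *different* network: bob's signature plus noise.
probe = gallery["bob"] + 0.1 * rng.normal(size=64)

print(reidentify(probe, gallery))  # prints "bob"
```

The unsettling part is that the probe measurement requires nothing from the person being tracked, no phone, no tag, just a body passing through a signal someone can observe.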
I’m a little less alarmed. I believe we use language for specific kinds of thinking: mental arithmetic, for example. If you’ve learned multiplication tables, you learned them in some language or other. When reading, at least the majority of people hear the words in their head, so I would think that might get picked up. If you’re trying to remember a phone number, you will likely be reciting it over and over to yourself. Or, if you’re trying to compose some kind of writing in your head, you’ll be thinking in words. But general thoughts? I think how much of them is verbal and clearly formed is probably overestimated.
If some government agency arrested you and wanted to get specific information, they’d first have to make you verbalise it to yourself. Could you avoid doing so?
If criminals wanted to get your PIN, they could probably do it by recording your brain while you were typing it, but it’s unlikely to pop into your head randomly.
This is a more plausible fear, and the sort of thing AI doomers worry about.
We use language for most forms of thought: it’s the software to the brain’s hardware. Without language the mind simply lacks the tools needed to internally represent most forms of thinking. I recall how Helen Keller described the period when she had no language as “the time of no thought”, for example. Without words, she couldn’t think.
But she clearly did think. Preverbal children still think. Animals that we do not even suspect have language think. And many people claim at least that they have no “inner voice”.
There is a fair amount of evidence that much of our conscious experience of decision making is an epiphenomenon, occurring after the brain has already made its choice under the hood. Inner speech for those of us who experience it may be that. Not key to or required for the thought process at all.
But even if it is not key, or even something we are all consciously aware of, it could conceivably be read.
I count myself as an inner-monologue person, FWIW, but really not everyone is.
Recent threads about it:
No need to cover it again in detail here. I’m just open to the possibility that, even without the conscious experience of the monologue, the firing might still be going on below awareness, even if not required for thought, and still be readable.
I don’t think we have any way to measure sapience per se, and this can quickly spiral into another definitional philosophical waste of time. To my understanding, “thought” does not require “sapience”; I am not even sure that “thought” requires conscious experience, let alone a sense of self. If you want to argue that a dog or a cat or a pig, or a crow for that matter, does not “think”, then feel free. I cannot prove they do, just as I cannot prove any human other than myself does, and both arguments are equally silly to me.
But I am pretty sure that humans without language still think, and humans without an inner monologue also think. As someone with a definite inner monologue, I am also aware of nonverbal thought processes, visual manipulation and otherwise. But that’s not the point of this thread. Maybe reopen one of the past ones?
I am among those who generally have no ‘inner voice’. When I am reading, I certainly do not sound out words in my mind. And most of the time when I think about things in general, I do not ‘talk to myself’ about them.
There are exceptions… such as when I am trying to recall the words to a song (or write lyrics to a new one).
On the other hand the remarkable success of LLMs does suggest that a significant amount of human knowledge is actually encoded in language.
In some way perhaps I am using language as ‘software’, but not at a conscious level…