Science Fiction Debate: One-Way Brain to Radio Interface


How might the world change after the invention of a practical, non-invasive, one-way brain-to-radio interface? Below I describe one such fictional invention. What are the implications, short-term and long-term, for technology, social interactions, privacy, ethics, law, diplomacy, and other aspects of life as we know it?

Key details in boldface. I recommend skipping everything else unless you find the bold text interesting.

(Science Fiction) Scenario:

In 2021, Alphabet, Inc. subsidiary Verily files patents related to brain wave monitoring. In the wake of various geopolitical scandals and disease outbreaks, the story is relegated to health and medical technology blogs, past the first page of major news aggregators.

It is Spring 2023 at the Google I/O convention and CEO Sundar Pichai is on the stage. He says,

Many people are still using their phones instead of Assistant (“okay, Google”) because they find voice commands embarrassing. Why, just the other day I was jogging and opted to pull out my phone to switch songs rather than disturb nearby pedestrians with “okay Google, play Wannabe by the Spice Girls.” Or maybe I’m in a loud environment, so I have to pull out the phone and type.

Introducing Google Telepath. [He holds up a stylish if slightly bulky wraparound wireless ear headphone set. It looks somewhat like a bone conduction headset mixed with normal wireless earpieces.]

You don’t have to say “okay Google” any more: just think it, and Telepath will send the command straight to your smart device. Just think “okay Google” and you’ll hear a little bell to let you know you’ve activated Telepath and it’s listening for the command. You’ll hear an end bell when it’s done listening. Let me demonstrate. Let’s say you are listening to a speech and you want to text a colleague something discreetly.

[His phone is displayed on the screen. He puts the headset on and stands still. A bell plays through the loudspeakers, and Google Assistant pops up on the screen with the prompt “Hi, how can I help?” The command appears below: “Send message to astro”. Another bell plays over the loudspeakers. Astro’s picture appears on the screen, then Google Assistant is heard and seen asking, “Okay, what’s the message?” He doesn’t say anything, and the command appears on the screen: “Just announced google telepath”. Google Assistant is heard and seen saying, “Sending your message: Just announced google telepath”.]

It’s as easy as that. Telepath also supports touch commands if that’s your fancy, and you can customize what each gesture does… These are also top of the line headphones in the conventional sense… lightweight… comfortable… hours of usage…

Google Telepath will be available Q4 2023 in a variety of colors for the low price of $399.

We will be announcing a public API for developers to respond to commands from Google Telepath this summer… it will require Android 14… other speakers will give more details on that…

Google plans to announce the following for developers in the summer:

Developer Info
  • The raw-brain-wave-data-to-plain-language algorithms will not be published.
  • Google Telepath will be a closed platform.
    • The software interface between Google Telepath (the device) and the Google Telepath App (installed on a paired smart device) will not be published.
    • Google will not provide support for any third-party developer hoping to interface directly with the Google Telepath device rather than through the App’s API.
    • No third-party apps will be permitted on the device itself. The device itself will not feature Google Play Store.
    • Google will not support any form of hardware modification.
  • The Google Assistant API will remain unchanged to the maximum extent possible.
    • Developers will not see a difference on their side if an App Action or media app command was initiated verbally or mentally.
  • For developers, two new Android permissions are being added:
    • One for access to brain wave data,
      • Developers will be subject to a strict usage agreement similar to other body metrics
    • Reminder: the Android system will not facilitate direct access to Google Telepath devices
    • One to request mental input.
      • This will be treated similarly to the Microphone permission for voice input.
  • Developers who request the new permissions are required to justify their requests to the user and report usage, as with other “dangerous” permissions.
  • Users will see one new permission group for MMI (Mind-Machine Interface), similar to other input devices such as Camera or Microphone. Apps that request access to brain wave data will show the request under the existing Body Metrics permission group; however, already-installed apps that require the new brain wave data permission will be forced to obtain explicit user permission before updating.
  • The Google Telepath API will give developers access to raw biometric data
    • Raw biometric data is only accessible in real time, up to 30 seconds per request
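
To make the access rules concrete, here is a minimal Python sketch of how the fictional API’s two stated constraints (real-time only, 30 seconds per request, permission required) might be enforced. Everything here is hypothetical: the class, method, and permission names are my own inventions, since no such API exists.

```python
import time

# Hypothetical sketch of the fictional Telepath raw-data access rules.
# All names are invented for illustration; this is not a real Google API.

MAX_WINDOW_SECONDS = 30  # the scenario's per-request cap on raw data

class TelepathRawDataSession:
    """Models the fictional API's per-request access window."""

    def __init__(self, has_brainwave_permission: bool):
        # Stand-in for the new Android brain wave data permission.
        self.has_brainwave_permission = has_brainwave_permission

    def request_raw_stream(self, duration_seconds: float) -> dict:
        if not self.has_brainwave_permission:
            raise PermissionError("brain wave data permission not granted")
        if duration_seconds > MAX_WINDOW_SECONDS:
            raise ValueError(f"raw data capped at {MAX_WINDOW_SECONDS} s per request")
        # Real-time only: the window starts now; no stored history is available.
        return {"starts_at": time.time(), "duration": duration_seconds}
```

The point of the sketch is that the cap is enforced per request, so an app wanting more than 30 seconds of raw data would have to keep re-requesting, with each request visible to the permission system.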

Mechanism of Operation

Your opinions may well rest on the mechanism of operation, so I’ll do my best to think one up.

The underlying scientific and technological advancements in this hypothetical are,

  1. Advancements in brain wave monitoring with conveniently placed small-scale non-invasive electrodes on the temples and behind the ears (patented by Google);
  2. The discovery of a correlation between one’s inner voice (for those who have one) and brain wave activity (a.k.a. entrainment) (research published in peer-reviewed journals);
  3. A set of computationally cheap algorithms to quickly correlate brain wave activity with the mental command “Okay Google”, with a low false negative rate (trade secret, Google); and
  4. An algorithm to quickly and reliably translate brain wave activity into text commands (trade secret, Google).

The device operates via non-invasive electrodes over the temples and behind the ears. A paired smart device and related app are required for processing power, due to the weight and bulk constraints inherent in light headwear. The device itself has four modes,

  • command: maximum power is directed to detection and preprocessing of brain wave commands, while normal Bluetooth headphone capabilities take a back seat
  • listen: minimum power is directed to detection and preprocessing of brain waves, optimized for detecting “okay Google”, in addition to normal Bluetooth headphone capabilities.
  • phone: acts like normal Bluetooth headphones; brain waves are not monitored at all.
  • off
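
Taken together, the four modes form a small state machine. Here is a Python sketch; the enum and function names are my own labeling of the behavior described in this post, not anything from a real product.

```python
from enum import Enum, auto

# Hypothetical model of the fictional Telepath device's four modes.
class TelepathMode(Enum):
    COMMAND = auto()  # full-power brain wave detection; audio takes a back seat
    LISTEN = auto()   # low-power monitoring tuned to "okay Google" + normal audio
    PHONE = auto()    # ordinary Bluetooth headphones; no brain wave monitoring
    OFF = auto()

# The mode transitions the scenario describes, and what triggers them.
TRANSITIONS = {
    (TelepathMode.LISTEN, TelepathMode.COMMAND): "wake phrase detected (bell plays)",
    (TelepathMode.COMMAND, TelepathMode.LISTEN): "command done, or 40 s soft limit",
    (TelepathMode.LISTEN, TelepathMode.PHONE): "user touch command",
    (TelepathMode.PHONE, TelepathMode.LISTEN): "user touch command",
}

def monitors_brain_waves(mode: TelepathMode) -> bool:
    """Only listen and command mode broadcast brain wave data."""
    return mode in (TelepathMode.LISTEN, TelepathMode.COMMAND)
```

Note that phone mode is the only powered-on state in which no brain wave data leaves the headset, which is why privacy-minded users would bind a touch gesture to it.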

In normal usage, the device will automatically change between listen and command mode (the user will hear notification bells signifying the mode switch). Privacy-minded users can customize touch commands to manually enter and leave phone mode. By default, the device constantly monitors brain waves, which are translated into a digital stream, encrypted, and continuously broadcast over ultra high frequencies to the paired smart device. More neurofrequencies are monitored and processed at higher fidelity when the device is in command mode; however, for practical reasons the device monitors and transmits as little as possible while still enabling downstream devices to discern the conscious inner voice.

While the Telepath device is in listen mode, the smart device processes the data and can determine whether the user has thought “okay Google”. It does this via the complex heuristic algorithm described above (point #3). This heuristic algorithm is custom-tailored to the user based on data collected during initial device setup, namely neurological idiosyncrasies, some of which probably correlate with demographic factors like age and dialect, and some of which probably don’t. The heuristic algorithm must have a low rate of false negatives; data identified as containing a mental command is always sent to Google’s servers to rule out false positives, provide translation into text commands, and train Google’s mental-command-recognition artificial intelligence.

If the Telepath device is already in command mode, the smart device knows it has a mental command coming over the radio, so it skips the heuristic algorithm and forwards the data straight to Google’s servers (decrypted and re-encrypted, of course).

Either way, once the smart device has identified a mental command, it forwards the re-encrypted brain wave data over the internet to Google’s servers for processing. This could be over wifi, cellular, satellite, DSL, fiber, or some combination.

Google’s servers utilize the more expensive algorithm (point #4 above) to translate the data into text commands, and a signal is propagated back to the smart device with the translated API command. Finally, the smart device signals the Telepath device to exit command mode as appropriate, though there is a soft limit of forty seconds of continuous data collection in command mode before the Telepath device switches back to listen mode.
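
The flow in the last few paragraphs can be summarized as pseudocode. This is a simulation under my own assumptions: the crypto and network steps are stubs, and none of these function names come from a real API.

```python
# Hypothetical sketch of the Telepath processing pipeline described above.
# Every name is invented for illustration; crypto/network calls are stubs.

COMMAND_MODE_SOFT_LIMIT_S = 40.0   # soft limit on continuous command-mode data
uploaded = []                      # stand-in for data forwarded to Google's servers

def decrypt(frame): return frame           # stub: decrypt the UHF radio stream
def reencrypt(data): return data           # stub: re-encrypt before upload
def send_to_google(data): uploaded.append(data)
def heuristic_wake_check(data):            # point #3: cheap, low false negatives
    return data.get("wake", False)

def handle_frame(frame, mode, elapsed_s):
    """Process one brain wave frame; return (next_mode, elapsed_in_command)."""
    data = decrypt(frame)
    if mode == "listen":
        if heuristic_wake_check(data):
            send_to_google(reencrypt(data))  # server rules out false positives
            return "command", 0.0
        return "listen", 0.0
    if mode == "command":
        send_to_google(reencrypt(data))      # expensive point #4 runs server-side
        elapsed_s += data["duration"]
        if elapsed_s >= COMMAND_MODE_SOFT_LIMIT_S:
            return "listen", 0.0             # forty-second soft limit reached
        return "command", elapsed_s
    return mode, 0.0                         # phone/off: no brain wave processing
```

The asymmetry is the point: listen mode only ever uploads frames the cheap heuristic flags, while command mode uploads everything until the command ends or the soft limit trips.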

Google’s Influence

I included so much information about Google’s particular hypothetical product because, in this scenario, Google will have exclusive control of the technology for at least a few years (patents plus trade secrets). The quirks of the initial product may well set lasting precedents for the entire technology, if not ripple effects across the whole of society. Here are a few of the potential precedents mentioned above, essentially a FAQ about the device:

  • published research and patents on brain wave monitoring, hardware
  • proprietary algorithms for brain wave interpretation (trade secret, not even patented)
  • wireless mind-machine interface over ultra high frequencies
  • dependence on paired smart device for basic functionality
  • dependence on corporate servers for real-time processing
  • default always-listening, always-broadcasting functionality
  • needs new Android version (incompatible with old smartphones/tablets)
  • price point ($400)
  • form factor (wraparound headset, lightweight)
  • name (Google Telepath)

To these I’ll add that the user experience won’t be perfect.

Google will inform some significant number of people during initial device setup that they are incompatible. Most of these people will simply be using the device incorrectly, so the notification will point to a troubleshooting page. Some people will genuinely not match the data Google’s systems trained on, whether due to an unusual dialect or any number of other factors (including mental and speech disorders).

If it is true that some people have no inner voice at all, they will never be able to use this technology. (In this hypothetical, at least most middle-class people in the first world are presumed to possess an inner voice when focused on it, with minimal to no training.) The company probably won’t even agree, internally, on whether it is possible to completely lack an inner voice or whether the algorithm is flawed. Google’s technical support will not suggest that an individual lacks an inner voice. They will simply note that the technology is new, intricately tied to ongoing research, and not compatible with all people at this time.

It will take most users a couple of weeks of regular use to get comfortable with Google Telepath. It takes focus, not intense focus, but some is required. Even at its best, the underlying command recognition system will occasionally mess up. There will still be “autocorrect” errors. The system still relies on an internet connection as a middleman, so for the time being that can become a performance liability. It won’t work as well for people under the influence of certain drugs, potentially including prescription drugs, and its performance may vary throughout the day. Children and teenagers will have more trouble with Telepath the longer they use the product, simply because their brains are developing more drastically at those ages.


If such a scenario came to pass, what would be the implications, short-term and long-term, for technology, social interactions, privacy, ethics, law, diplomacy, and other aspects of life as we know it?


This probably doesn’t matter as far as discussion of your hypothetical scenario goes, but just noting that you seem to be describing a brain-computer interface, and rudimentary versions of this already exist.

It would indeed be a brain-computer interface, and I’ve read a little about developments in that field. You could plausibly say we almost have the technology. It could drop in the near future, probably within my lifetime; in the scenario, I chose this spring. An unrealistic timetable, but one that’s easier to discuss.


Place your bets as to who will hack it first.

It will be a 1,000,000,000-way tie.

It depends on what is being hacked.

In the short term, the most valuable secrets are the algorithms. Everybody and their mother will want the algorithm(s) that translate brain wave data into readable text.

There are two sets of algorithms. One is distributed to everyone’s phone via an app, and is therefore easily hackable; but this algorithm is only good at recognizing the thoughts corresponding to “okay Google”. I would expect it to be extracted in a matter of hours, if not minutes, though it would take much longer to reverse engineer.

The more valuable algorithm is kept under lock and key in Google’s central servers. This is the one that translates thoughts into text. Nobody knows how it actually works - it is built and maintained by artificial intelligence using a vast database of user data.

The government will want this algorithm for military purposes. The military-industrial complex will want this algorithm to sell to the government. Nation-states will want the algorithm to jumpstart mind-reading research. Competing tech firms will want the algorithm to exploit the new technology, especially Amazon (Alexa) and Apple (Siri).