Just an idle thought: it occurred to me that anywhere in a modern city there are probably several WiFi networks within range. Several cell phone (mobile, for UK readers) networks. Then you could pick up quite a few AM or FM radio stations, probably a few TV channels (if they haven’t all gone cable). Plus a lot of other stuff: shortwave, Police and Ham radio channels etc etc.
So approximately what does the total information density amount to? Or to put it in practical terms: if you had a battery of receivers for all the bands in use, how many bits per second could you extract “from thin air”?
Related question, what’s the total energy density? By the way, I’m NOT trying to lead up to any 5G conspiracy nonsense; every claim of “electromagnetic sensitivity” that has been tested has been thoroughly debunked, as far as I know!
RF can include lightning, atmospheric noise (solar activity), and man-made noise (electric motors, some solar systems, even remote controls). Whether it carries any ‘information’ depends on how it’s modulated, I suppose.
Bandwidth increases with frequency; power decreases with frequency (and with distance, at something like the square of the distance), depending on the transmitter. Things at microwave frequencies would “heat up” rather quickly if outputting high power. A receiver’s sensitivity is usually measured in microvolts. So there is potentially ‘information’ from DC up to terahertz, but not always being transmitted, and going around at very low power.
ETA: I (and many others) made a music transmitter/receiver from a laser pointer module and an Arduino that worked great. Milliwatts at something like 600 nm (hundreds of terahertz).
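If it helps to quantify the falloff I was half-remembering above, here’s a minimal Python sketch of the free-space (inverse-square) path loss, assuming isotropic antennas; the transmit power, frequency, and distances are just illustrative, not from anything in this thread.

```python
# Received power under the free-space (inverse-square) model:
# P_rx = P_tx * (lambda / (4*pi*d))^2 for isotropic antennas (Friis, unity gains).
import math

def friis_rx_power_dbm(p_tx_w, freq_hz, dist_m):
    """Received power in dBm at a given distance, isotropic antennas assumed."""
    wavelength = 3e8 / freq_hz
    p_rx_w = p_tx_w * (wavelength / (4 * math.pi * dist_m)) ** 2
    return 10 * math.log10(p_rx_w * 1000)  # watts -> dBm

# e.g. a 100 mW transmitter at 2.4 GHz, heard from 10 m and 100 m away
for d in (10, 100):
    print(f"{d:>4} m: {friis_rx_power_dbm(0.1, 2.4e9, d):.1f} dBm")
```

Going from 10 m to 100 m costs you 20 dB, which is the inverse-square law in action.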
I’m an engineer and an amateur radio op and think about this stuff a lot. It becomes something of an angels on a pin or ‘capacity for human intelligence’ type question. You can find as much or as little as you like. Do you count lights/optics? Pixels? Bars in a barcode? Why or why not?
Can I offer a thought experiment?
When ‘geared up’ for a typical day of errands (phone, keys, wallet, purse), how many RF dependent devices are on your person?
Keyring: car key has antitheft transponder ‘chip’ and a separate active system for the buttons (lock, panic, trunk). Most cars with keyless lock/start have a third transponder for that, too. My own keys have a contactless entry access fob for my workplace.
That’s four on my keys, plus a flashlight. Your phone has many more: bluetooth, gps, nfc, wireless charging, 3g, 4g, wifi, cameras galore, possible IR tx/rx, broadcast receivers. A lot of our credit cards and IDs have contactless functions built in.
Right, if someone is beaming a high-power laser at you, that makes for a high-speed link. Nevertheless, standing at a random spot in the street we can probably rule that out, plus as you say there is a definite limit to the power and distance of urban microwave links, so it seems like it should be possible to work out a (semi-)factual answer.
The title line says “RF”, so let’s assume they just mean the radio portion of the spectrum, so excluding infrared and above. Microwave frequencies top out at around 300 GHz, depending on whom you ask, and as a rough ballpark estimate, information content is equal to bandwidth, so you could get 300 gigabits per second from that section of the spectrum.
It’s not that simple: it depends on the type of modulation, and that affects bandwidth. Voice is not the same as data, but both could be considered information. A transmission at 300 GHz wouldn’t use up the entire RF spectrum; other “information” could be transmitted on other frequencies at the same time, and would add to the “density”.
It’s a difficult question to quantify, over the entire RF spectrum. Power, frequency, and modulation type are all factors.
I read somewhere a while ago that data rates over fiber will eventually be limited by quantum effects. I can’t remember the article or find it in a brief search.
Yup, definitely not that simple, which is why I labeled it as a “rough ballpark estimate”. Most notably, I glossed over the signal-to-noise ratio. But it does include this:
Bandwidth is literally the width of the spectral band you’re using, as measured in frequency. If you’re transmitting at only exactly 300.000000000 GHz, then you don’t have any bandwidth, and can’t transmit any information. If you’re transmitting in the band from 290 GHz to 300 GHz, then you have 10 GHz of bandwidth. And if you’re transmitting in the band from 0 to 300 GHz, then you have 300 GHz of bandwidth. That figure is already assuming that you’re using all of the spectrum up to that point.
E.g. the Shannon–Hartley channel capacity of a single noisy channel is C = B·log2(1 + S/N), where B is the bandwidth and S/N is the signal-to-noise ratio. But what kind of S/N can we expect for people talking on their phones in average conditions? 10 dB? Certainly you get something from that logarithmic factor, but it can’t realistically be that big.
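To put a rough number on it, here’s a quick Python sketch of that formula; the 20 MHz channel width and the SNR values are just illustrative guesses, not measurements of anything.

```python
# Back-of-envelope Shannon-Hartley capacity, C = B * log2(1 + S/N).
import math

def capacity_bps(bandwidth_hz, snr_db):
    """Channel capacity in bit/s for a given bandwidth and SNR in dB."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 20 MHz channel at a middling 10 dB SNR vs. a clean 30 dB SNR
for snr in (10, 30):
    print(f"20 MHz @ {snr} dB SNR: {capacity_bps(20e6, snr)/1e6:.0f} Mbit/s")
```

So the logarithmic factor buys you roughly 3x going from 10 dB to 30 dB, not orders of magnitude.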
Most of our modern data is compressed, so assuming pretty efficient compression codecs, we are getting closer to the Shannon limit. That is certainly the goal of every communications engineer.
Not that long ago we made terrible use of bandwidth. But there are serious economic advantages to getting as close to the boundaries theory gives us as we can.
So in the limit we look to the noise floor across the bands we use, and then look at the power we can receive across the bands, and integrate the difference.
Power is remarkably low for much of what we use. But the bandwidths are getting huge.
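If anyone wants to play with that, here’s a toy version of “integrate the difference”: sum the Shannon capacity over a handful of assumed occupied bands. Every bandwidth and SNR below is a placeholder guess, not a measurement.

```python
# Crude aggregate: sum B * log2(1 + SNR) over a few made-up occupied bands,
# each with its own assumed SNR at the receiver.
import math

# (label, bandwidth in Hz, assumed SNR in dB) -- all numbers are placeholders
bands = [
    ("AM broadcast",   1.2e6, 30),
    ("FM broadcast",  20.0e6, 30),
    ("Cellular",     200.0e6, 10),
    ("Wi-Fi 2.4/5",  600.0e6, 15),
]

total = sum(b * math.log2(1 + 10 ** (snr / 10)) for _, b, snr in bands)
print(f"Aggregate capacity of these bands: {total/1e9:.1f} Gbit/s")
```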
There are really two related questions here.
One, theoretical maximum bandwidth, and
two, amount of data actually being conveyed on average.
Things like WiFi and cell phone networks normally carry only a fraction of their potential bandwidth, I guess. Though there is presumably a ‘baseline’ load of administrative stuff… things like ARP packets etc.
The other issue is: what is meaningful data? Do you need the exact voice transmission? Or just the text of the dialog? How much can you degrade the voice channel before you lose all speech context: inflection, intonation, accent? The old phone system was about 8 kHz sampling (roughly 3.5 kHz of audio bandwidth), while some modern transmissions can carry audio near CD range in stereo. Similarly, talking heads on a news broadcast barely move, and the compression algorithm still resends the (rarely changing) background over and over. Those key fobs etc. probably only transmit irregularly, so data varies. Every phone or WiFi device is transmitting a small regular “hello, I’m still here” (maybe once a second), but the data rate jumps bigly when downloading a file or web page.
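For reference, the raw bit-rate gap between those two audio qualities is easy to put numbers on (uncompressed, before any codec gets involved):

```python
# Quick arithmetic: old-style telephone audio (8 kHz sampling, 8-bit samples)
# vs. CD-quality stereo (44.1 kHz sampling, 16-bit samples, 2 channels).
phone_bps = 8_000 * 8          # 64 kbit/s
cd_bps    = 44_100 * 16 * 2    # ~1.41 Mbit/s
print(f"Telephone voice: {phone_bps/1e3:.0f} kbit/s")
print(f"CD stereo:       {cd_bps/1e6:.2f} Mbit/s")
print(f"Ratio:           {cd_bps/phone_bps:.0f}x")
```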
You would have to make assumptions: say 5 cell towers, 1,000 phones in range, 100 phone calls and 100 active browsers, say 25 to 50 WiFi networks in a dense urban area, roughly 50 active users on those WiFi, 300 cars within range. Let’s say 30 FM radio stations, 20 AM, 20 digital TV broadcasts…
Then you have to figure that a lot of geosynchronous satellites up there are beaming a complete panoply of TV and such across the continent, plus assorted telecom systems (are there still microwave network towers?), not to mention shortwave and police band and ambulance and other private networks, etc.
Maybe the simplest calculation is to consider the entire bandwidth, from 550 kHz to 300 MHz, as fully occupied and then make a rough estimate of how dense the usage is…
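In code, that “simplest calculation” would look something like this; the occupancy fraction and the average spectral efficiency are pure guesses on my part.

```python
# Treat the whole 550 kHz - 300 MHz span as one band, then apply a guessed
# occupancy fraction and a guessed average spectral efficiency.
band_hz      = 300e6 - 550e3   # ~299.45 MHz span
occupancy    = 0.5             # guess: half the span actually carrying signals
bits_per_hz  = 2.0             # guess: ~2 bit/s/Hz average spectral efficiency
estimate_bps = band_hz * occupancy * bits_per_hz
print(f"Rough aggregate: {estimate_bps/1e6:.0f} Mbit/s")
```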
Any half-decent video compression algorithm will handle a static background behind a mostly-stationary talking head a lot more efficiently than just resending the whole image every frame. How precisely it’s done will vary from one algorithm to another, but in any algorithm, a nearly-static image will be a close to ideal case.
That is rather what I was thinking originally.
Which is why I suggested a more real-world thought experiment of assuming you have a receiver for all RF bands in typical use.
Obviously not all bands are fully used at all times. And there is some ambiguity about what constitutes ‘real’ information… does the admin and location data for packet-switching networks count, for example?
Something like H.264 will use B-frames (bi-directionally predicted frames), which encode only the differences relative to reference frames both before and after them in time, as opposed to I-frames (intra-frames), which are standalone complete pictures. I-frames may be a couple of seconds apart, except at scene changes. So a background scene might only be sent in full every few seconds, and the I-frames are themselves compressed. A talking head can compress pretty well. Some news shows are clearly very heavily compressed, with noticeable artefacts if anything moves too fast.
Of course one could do much better. One could just send the static background once, along with a 3-D model of the presenter, and then send only animation commands to make it look like they are speaking. Or just send the transcript and have the whole thing animated, along with speech synthesis, in the viewer.
5G goes up to 52 GHz. Even the low-frequency bands go up to 6 GHz. Starlink (satellite) will eventually go up to ~90 GHz.
Also, the basic Shannon limit assumes a single receiver, but in fact you can do better with multiple antennas (called MIMO, for multiple input, multiple output). In some sense, this is the recognition that radio is not simply point-to-point, but area-to-area. So you have to consider the size of the receiver, perhaps modeled as a ball of some finite size that receives anything passing through its surface.
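As a crude illustration of the MIMO point (this hand-waves past how transmit power gets split between spatial streams, so treat it as a high-SNR approximation with made-up numbers):

```python
# Rough MIMO scaling: at high SNR, capacity grows roughly with
# min(tx_antennas, rx_antennas) independent spatial streams.
import math

def mimo_capacity_bps(bandwidth_hz, snr_db, n_tx, n_rx):
    """Approximate capacity ignoring power splitting across streams."""
    streams = min(n_tx, n_rx)
    return streams * bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

# An 80 MHz channel at 20 dB SNR: single antenna vs. 4x4 MIMO
for n in (1, 4):
    print(f"{n}x{n}: {mimo_capacity_bps(80e6, 20, n, n)/1e6:.0f} Mbit/s")
```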
True. Talking about the information density ‘at a point’ is really just a figure of speech.
To pose the question more accurately one would indeed need to specify the size of the receiver.