At least, if it’s cable, it’s all just “data”: your internet feed occupies a few “channels” in the cable signal, voice another couple of “channels,” and the rest are used for video.
It’s still ultimately voltage fluctuations though, I think.
I know that in digital design, a certain voltage represents a 1, and the lack of it represents a 0, typically. So your flip-flops and NAND gates and all the rest all work at that level as your basic building blocks of digital circuits.
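Just to make that “building blocks” level concrete, here’s a toy sketch in Python of a NAND gate and an SR latch built from two of them. It’s an illustration of the logic only, not how real hardware is described or designed; real gates are continuous voltages, and the 1/0 values here just stand in for high/low.

```python
# Minimal sketch: NAND logic and an SR latch built from two NAND gates.
# Real hardware is continuous voltages; 1/0 here stand in for high/low.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def sr_latch(s_n: int, r_n: int, q: int, q_n: int) -> tuple[int, int]:
    """One settling step of a cross-coupled NAND SR latch (active-low inputs)."""
    for _ in range(4):                 # iterate until the feedback loop settles
        q, q_n = nand(s_n, q_n), nand(r_n, q)
    return q, q_n

q, q_n = 0, 1
q, q_n = sr_latch(0, 1, q, q_n)        # pulse "set" low -> Q becomes 1
print(q, q_n)                          # 1 0
q, q_n = sr_latch(1, 1, q, q_n)        # both inputs high -> latch holds its state
print(q, q_n)                          # 1 0
```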
Are you familiar with data packets? It doesn’t actually send three signals at once; it sends different packets to different devices and handles the transfer so quickly that it isn’t noticeable. Landline phones on a triple-play package are almost all VoIP now, so the phone is just another internet device as far as the network is concerned.
Not just AM (Amplitude Modulated), but modulated nonetheless.
I don’t know the state of the art, but it appears that the digital modulation schemes in use encode sequences of bits in shifts of RF signal phase and amplitude. All that’s being pushed down the wire is bits, in the form of these shifty RF signals; the consumer-end gear is converting the bits to the equivalent expected analog signals, such as telephone voice or cable TV video signals. (Or, just introducing the bits to the “WAN” side of a router device for cable internet).
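For a concrete, if toy, picture of “bits encoded in shifts of phase and amplitude,” here’s a sketch of a 16-QAM mapping, the general family cable systems use. The mapping below is purely illustrative; real cable standards use bigger constellations (64/256-QAM) with details like Gray coding and error correction that are omitted here.

```python
# Toy 16-QAM: every 4 bits pick one of 16 (amplitude, phase) combinations,
# represented as a point in the complex plane (I + jQ).
import cmath

LEVELS = [-3, -1, 1, 3]                      # 4 levels per axis -> 16 points

def bits_to_symbol(b3, b2, b1, b0):
    """Map 4 bits to one complex constellation point."""
    i = LEVELS[(b3 << 1) | b2]               # in-phase component
    q = LEVELS[(b1 << 1) | b0]               # quadrature component
    return complex(i, q)

sym = bits_to_symbol(1, 0, 1, 1)
amp, phase = abs(sym), cmath.phase(sym)
print(sym, amp, phase)   # the modulator shifts the carrier to this amplitude/phase
```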
Tutorial. Can’t vouch for it, but it looks to my untrained eye to not contain any overt lies or egregiously out-of-date info. (I found it googling “cable network modulation scheme”).
Ethernet uses baseband communication. So the ones and zeros are (more or less) directly put on the wire as different voltages. The different flavors of Ethernet do this in different ways. For instance, the old 10 Mbps Ethernet uses Manchester encoding, where a 0 bit is encoded as the voltage going from high to low and a 1 bit as the voltage going from low to high.
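Here’s a quick sketch of that Manchester scheme, following the convention just described; the half-bit voltage levels are simply labeled 0/1 for illustration.

```python
# Manchester encoding (802.3 convention): each bit becomes a transition
# in the middle of the bit period. 0 = high->low, 1 = low->high.
LOW, HIGH = 0, 1

def manchester_encode(bits):
    out = []
    for b in bits:
        out += [LOW, HIGH] if b else [HIGH, LOW]
    return out

def manchester_decode(halves):
    return [1 if halves[i] < halves[i + 1] else 0 for i in range(0, len(halves), 2)]

line = manchester_encode([1, 0, 1, 1, 0])
print(line)                      # [0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
print(manchester_decode(line))   # [1, 0, 1, 1, 0]
```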
Your cable internet and digital cable TV are transmitted on relatively narrow “channels,” the same way TV is broadcast over the air. It’s a bit like someone with a really low voice and someone with a really high voice talking at the same time: you can filter out high or low and thus only hear the signal you want to hear.

In radio, there are two main ways to modulate an audio signal onto a carrier wave: by varying the volume (amplitude) of the carrier (amplitude modulation, AM) or by varying the frequency of the carrier (frequency modulation, FM). In digital, there’s also phase modulation, where the phase of the signal is shifted. Amplitude and phase modulation are often combined so that multiple digital bits fit into one analog cycle, which means the MHz can be lower than the Mbps. A 30 Mbps digital TV or cable internet signal can therefore fit into 6 MHz: if it’s modulated on (for instance) a 200 MHz carrier, it occupies 197 to 203 MHz, and 203 to 209 MHz can then be used for something else, and so on.
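To put rough numbers on “the MHz can be lower than the Mbps,” here’s the back-of-envelope arithmetic; the figures below are illustrative, not the exact parameters of any cable standard.

```python
# Rough back-of-envelope: bits per second from symbol rate and bits per symbol.
# Illustrative numbers only; real cable adds coding overhead, guard bands, etc.
channel_width_hz = 6e6          # one 6 MHz cable channel
symbols_per_sec = 5e6           # symbol rate sits a bit below the channel width
bits_per_symbol = 6             # e.g. 64-QAM: 6 bits per amplitude/phase point

print(symbols_per_sec * bits_per_symbol / 1e6, "Mbps")   # 30.0 Mbps in 6 MHz
```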
Lots of good info upstream, but folks are talking about this at several different levels of abstraction & complexity. The OPs question also mixes the levels up a bit.
Try this for an intro:
At the electrical level, there’s just fluctuating voltage.
At the next level up, those fluctuations encode both phase shifts of a carrier and amplitude shifts of the carrier.
At the next level up, those shifts are decodable by a computer into different “channels”, i.e. logically distinct data streams of ones & zeros.
(skip a couple levels here for brevity)
At the next level up, a data stream of ones and zeros is decodable by a computer into a stream of IP packets addressed to specific device(s) sharing the wire.
At the next level up, a particular stream of IP packets addressed to a particular device, e.g. a phone or a TV or an internet device, is treated by that device as a stream of TCP data packets.
At the next level up, the particular device extracts the payload from the TCP packets and converts that payload to audio or video or web pages or email or … according to the logic of the device, protocol, etc.
Armed with this crude outline, maybe we can find out where the OP wants to dig deeper.
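To make the top couple of levels of that outline concrete, here’s a minimal sketch in Python of a device treating the reassembled TCP byte stream as an application payload and turning it into a “web page.” The hostname is just an example, and everything below the socket call is the operating system doing the lower levels for us.

```python
# Minimal sketch of the top of the stack: the OS hands us a reliable byte
# stream (TCP over IP); the application turns that payload into a "web page."
import socket

host = "example.com"                         # example hostname, nothing special
with socket.create_connection((host, 80)) as s:
    s.sendall(f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
    payload = b""
    while chunk := s.recv(4096):             # TCP reassembles the packets for us
        payload += chunk

print(payload.split(b"\r\n")[0].decode())    # e.g. "HTTP/1.1 200 OK"
```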
There are three electrical properties you can change on a wire such that a change at one end can be detected at the other end.
These are amplitude, frequency and phase shift. A copper wire will pick up signals by itself from radio sources (noise), but we can also inject a signal and control these three properties.
A reliable scheme for generating these changes and detecting them at the other end is a modulation scheme. There are lots of them…
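As a sketch of what “varying amplitude, frequency and phase” means mathematically, here are the three variations applied to one carrier. It’s a toy illustration with arbitrary numbers, not any particular modulation standard.

```python
# Toy illustration: the same carrier varied in amplitude, frequency, or phase.
import math

f_c = 1000.0                                  # carrier frequency, Hz (arbitrary)

def am(t, m):   # amplitude modulation: m scales the carrier
    return (1 + 0.5 * m) * math.cos(2 * math.pi * f_c * t)

def fm(t, m):   # frequency modulation: m nudges the instantaneous frequency
    return math.cos(2 * math.pi * (f_c + 50 * m) * t)   # exact for constant m

def pm(t, m):   # phase modulation: m shifts the carrier's phase
    return math.cos(2 * math.pi * f_c * t + math.pi * m)

t = 0.00025
for name, mod in (("AM", am), ("FM", fm), ("PM", pm)):
    print(name, round(mod(t, 1.0), 3))
```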
The earliest examples were analog, continuously varying streams of data like voice and later video. They were superseded by digital modulations, where the stream of data is turned into numbers, the numbers are sent (as a variation in a combination of amplitude, frequency and phase shift), picked up at the other side, and turned back into data. Digital has the advantage of being able to include data that detects and corrects any errors.

Signal processing chips get better and better at using complicated modulations to extract as much carrying capacity from a wire as possible. My telephone wire used to support one voice call, and I could use it for fax or a modem. These days the same wire supports megabits of broadband. Same wire, but my broadband router and the equipment at the ISP use much cleverer modulation and encoding schemes to extract as much data carrying capacity as possible out of the wire. But it is still the fundamental electrical characteristics of amplitude, frequency and phase shift being varied.
You can see the signals on a wire using an oscilloscope or spectrum analyser and you can spot modulations being used. It is much the same with radio and satellite communication, except different modulations are used.
The first modulations were just switching the circuit on and off for different durations of time. Add an encoding scheme like Morse and that made the telegraph system. The same was done with radio by making a big spark at the transmitter that could be detected by a receiver, a bit like when lightning strikes and you hear it as a short burst of static on an FM radio. Very simple, and messages would interfere with each other on radio, but it worked.
We have since had about 100 years of research and development creating these modulation and encoding schemes, and there are a zillion patents.
The digital signal processor (DSP) chips that are in every Ethernet adapter, TV, radio, broadband router and satellite receiver are one of the great inventions of the late 20th century.
TCP is only one option, and not one used for stuff like telephony or television. Mainly, it’s used to move files like web pages around the Internet in a reliable fashion; there are a lot of instances where TCP would be exactly the wrong choice, and a lot of instances where it would be more trouble than it’s worth even if it would make sense in theory.
Also, cable TV doesn’t use IP packets. TV has its own needs and, therefore, its own protocols.
So you’re saying sensible things, but you’re using specific examples too broadly. It’s like saying that traffic across a bridge consists of Ford F-150 pickup trucks, as opposed to vehicles in general: Cable triple-play isn’t Texas. There are other options.
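For contrast with the TCP streams mentioned above, here’s roughly what the “fire-and-forget” sending that real-time traffic prefers looks like. The address and payload below are made up for illustration; real telephony would wrap its audio in a protocol like RTP rather than a bare datagram.

```python
# Fire-and-forget UDP: no connection, no retransmission, no ordering.
# Real-time audio/video would rather drop a late packet than wait for a resend.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
dest = ("192.0.2.10", 5004)                        # example address/port (made up)
sock.sendto(b"\x80" + b"fake audio frame", dest)   # toy payload, not real RTP
sock.close()
```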
However… re-reading the OP question, it sounds almost like he may have a LAN cable coming out of his wall (a-la AT&T U-Verse) where it’s all digital.
There isn’t really any multiplexing going on in that case, just ethernet packets with different destinations and information within.
Further down the line, it gets translated from cabled ethernet into VDSL by the DSL modem, and probably to some sort of fiber at the DSLAM (where the actual multiplexing is done).
Alternatively, we can say that there’s only one property: amplitude. A 100 MHz carrier wave, for instance, has an amplitude which is just a sine wave at that frequency.
A typical coaxial cable might carry signals up to around 1 GHz. It is possible, in principle, for a receiver to sample the amplitude at a couple of gigasamples/second and extract all the information in the signal.
However, receivers typically can’t handle that high of a sampling rate. So in practice they generally use filters to select a portion of the frequency range (a “channel”), and downconvert it so that the signal is now centered at a lower frequency (which is easier to deal with). The phase information comes along for the ride.
To be clear, these are just two ways of looking at the same thing. Amplitude-only is arguably the “deeper” way of looking at it, but in practice it is generally more useful to consider frequency and phase as well.
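Here’s a sketch of that “filter a channel and downconvert it” step in the simplest possible form: mix with a local oscillator at the channel’s center frequency, then apply a crude low-pass filter. The numbers are arbitrary and the filter is deliberately primitive.

```python
# Toy downconversion: multiply the wideband signal by a local oscillator at the
# channel's center frequency, then low-pass filter; the channel is now near DC.
import cmath, math

fs = 10_000.0                   # sample rate, Hz (arbitrary toy numbers)
f_channel = 2_000.0             # center of the "channel" we want
f_tone = 2_050.0                # a signal inside that channel

n = 2000
x = [math.cos(2 * math.pi * f_tone * i / fs) for i in range(n)]          # "the wire"
lo = [cmath.exp(-2j * math.pi * f_channel * i / fs) for i in range(n)]   # local oscillator
mixed = [a * b for a, b in zip(x, lo)]

# Crude low-pass: a moving average knocks down everything far from DC.
w = 50
baseband = [sum(mixed[i:i + w]) / w for i in range(n - w)]

# The surviving component spins at f_tone - f_channel = 50 Hz.
print(round(abs(baseband[0]), 2), round(abs(baseband[len(baseband) // 2]), 2))
```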
Note that there are at most two clock periods of the same value in a row. It’s pretty easy for a receiver to distinguish between one and two clocks between transitions, and hence Manchester encoding is pretty straightforward.
With other encodings it becomes more problematic. How do you know the difference between 1,000,000 one-bits vs. 1,000,001 one-bits? One solution is to pass a clock on a separate line, though that’s also wasteful.
Modern self-clocking signals use coding schemes which are guaranteed to provide a certain maximum number of clocks between bit transitions. One common type is 8b/10b, which encodes 8 bits into 10 and guarantees no more than a few bits in a row with the same value. PCI Express 2.0 used this scheme.
Another, similar, type is 128b/130b coding, which as you might guess encodes 128 bits into 130. It requires a receiver which can detect the difference between 127 vs. 128 bits in a row. This is still not a challenge for a modern receiver; it means your clock error needs to be perhaps 0.1%. PCI Express 3.0 uses this scheme.
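Not the real 8b/10b code tables (those are nontrivial lookups), but here’s a sketch of the property these run-limited codes buy you: a bound on how many identical bits can appear in a row, which is what lets the receiver keep its clock locked.

```python
# The property run-limited codes guarantee: a bounded run of identical bits,
# so the receiver always sees a transition often enough to stay in sync.
# (Not the actual 8b/10b code tables, just an illustration of the idea.)

def longest_run(bits):
    best = run = 1
    for prev, cur in zip(bits, bits[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

raw = [1] * 1_000_000                 # plain NRZ: a million ones, zero transitions
print(longest_run(raw))               # 1000000 -- hopeless for clock recovery

# 8b/10b guarantees a run of at most 5; Manchester (above) guarantees at most 2.
manchester = [x for b in [1, 0] * 10 for x in ([0, 1] if b else [1, 0])]
print(longest_run(manchester))        # 2
```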
You can plug an inexpensive USB software-defined radio dongle into your computer, run some special software and listen to all the signals it hears, up to its limit of 2 GHz or so. These gadgets don’t try to tune into particular frequencies; they just listen to whatever comes in on the antenna and turn the signal amplitude into numbers as fast as they can. Then the software analyses the data to decode the frequencies, phase shifts, modulations and encodings, and maybe even the messages contained therein. An absorbing hobby.
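A taste of what that “analyse the data to decode the frequencies” step looks like, using synthetic samples instead of a real dongle, and a naive DFT just to show the idea:

```python
# What the SDR software does, in miniature: take raw amplitude samples and
# find which frequencies are present. Synthetic samples here; with a real
# dongle you'd read them from the hardware instead.
import cmath, math

fs = 8000.0                          # sample rate, Hz (toy value)
n = 800
samples = [math.cos(2 * math.pi * 440 * i / fs) +
           0.5 * math.cos(2 * math.pi * 1200 * i / fs) for i in range(n)]

def dft_bin(x, k):                   # naive DFT: energy at bin k (k * fs / n Hz)
    return abs(sum(v * cmath.exp(-2j * math.pi * k * i / len(x))
                   for i, v in enumerate(x)))

peaks = sorted(range(n // 2), key=lambda k: dft_bin(samples, k), reverse=True)[:2]
print(sorted(k * fs / n for k in peaks))   # ~[440.0, 1200.0]
```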