Is there a limit for the speed of internet?

I think we are reaching a certain limit on internet speed – most servers cannot send data as fast as your ISP can deliver it. That is not the point, though, as they could always build better servers. The same goes for your computer: your hard drive can only write data up to a certain speed. What I am wondering about is probably more related to ping. Is data transmission speed limited by the speed of light? If you are sending a humongous amount of data (terabytes) from, say, China to the USA, all that data has to travel a long way.
I am aware that this already affects space travel – it takes 1.3 seconds for light and radio signals to reach us from the Moon. Could that become an issue in the near future, when we have faster computers and internet providers?

The speed of light is already an issue in stock trading.

Not correct – the server is mostly limited by the speed of its connection to the internet, your local exchange, and how many people are trying to access that server at the same time.

A local hard disk drive inside a modern home computer can transfer data at up to 6 Gbit/s (the SATA III interface limit), and your internet connection to your ISP is nowhere near this speed.
Currently the limiting factor is still the ISP – and the ISP’s links to the server farms.

The current problem is that the internet is not fast enough for everyone to use high-bandwidth applications, like streaming video on demand, at the same time. Internet connections are currently being upgraded in most places to fibre optics, which should make things better, until we all want even more data pumped through the exchanges.
Here is the current speed record

There are two different measures of internet speed, relevant for different purposes. The speed of light matters for latency: the time delay between a single piece of data leaving its origin and reaching its destination. The amount of data doesn’t change that. The speed of light is not, however, relevant for throughput: how much data you can transmit at once.
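To put a rough number on the latency floor: light in optical fibre travels at about two thirds of its vacuum speed (the glass has a refractive index of roughly 1.5), so distance alone sets a hard lower bound no matter how much data follows. A sketch, assuming a rough great-circle distance of 11,000 km between eastern China and the western USA:

```python
# Lower bound on one-way latency imposed by the speed of light in fibre.
# The 11,000 km distance is an assumed rough great-circle figure.
C_VACUUM = 299_792_458          # speed of light in vacuum, m/s
C_FIBRE = C_VACUUM / 1.5        # ~2e8 m/s: light slows down in glass (n ~ 1.5)

distance_m = 11_000e3           # assumed China -> USA path length, metres
one_way_s = distance_m / C_FIBRE
round_trip_ms = 2 * one_way_s * 1000

print(f"one-way: {one_way_s * 1000:.0f} ms, round trip: {round_trip_ms:.0f} ms")
# one-way: 55 ms, round trip: 110 ms
```

Real routes are longer than great circles and add router delays, so actual pings over that distance will be worse than this floor, never better.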

Latency is mostly important for computer games. If you’re gaming with someone in China, and you see him poking his head out from behind cover and you shoot him, it could be very relevant if there was an extra half-second delay in between those events, because maybe in that time he shot you first. But throughput isn’t a very big deal for games, because it doesn’t actually take all that much data to transmit what you’re doing.

Throughput mostly matters for video. If you’re watching a movie online, it probably doesn’t matter at all to you if your entire movie is delayed by a couple of seconds relative to the server that’s sending it out. But if you can’t push data down the pipe as fast as it’s being consumed, you’ll need to wait for it to buffer.

If you’re talking about “terabytes of data going a long way”, it’s not clear which measure you’re interested in.
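For a bulk transfer, the two measures separate cleanly: latency is paid once, while throughput is paid for every byte, so for terabytes the latency term is negligible. A sketch with assumed round numbers (1 TB over a 1 Gbit/s path with 100 ms one-way latency):

```python
# Rough transfer-time estimate: latency adds a fixed delay, throughput
# determines the bulk of the time. All figures are assumed round
# numbers for illustration.
data_bits = 1e12 * 8            # 1 TB expressed in bits
throughput_bps = 1e9            # assumed 1 Gbit/s end-to-end path
latency_s = 0.100               # assumed one-way delay, e.g. China -> USA

transfer_s = data_bits / throughput_bps   # time dominated by throughput
total_s = latency_s + transfer_s

print(f"throughput term: {transfer_s:.0f} s, latency term: {latency_s} s")
# throughput term: 8000 s, latency term: 0.1 s
```

So for “terabytes going a long way”, throughput is what you’d notice; the speed-of-light delay is lost in the noise.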

Additionally, some protocols (the agreed-upon rules computers use to talk to each other) wait for a receipt acknowledgment before sending more data. Computer A goes blahblahblahblahblah–wait, did you get all that? Computer B goes yep, then Computer A keeps talking. In those protocols, latency matters even more than usual, because Computer A won’t keep talking until it hears back from B, so you’re affected by the speed of light and other latency-inducing factors in both directions. TCP works like this, and is used in situations where the accuracy and completeness of your data matter (a web page, for example).
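In the simplest send-then-wait-for-an-ack scheme (stop-and-wait), the sender can move at most one burst of data per round trip, so latency directly caps throughput. A sketch with assumed figures (64 KiB per burst, 200 ms intercontinental round trip):

```python
# Stop-and-wait throughput ceiling: one burst per round trip.
# Burst size and RTT are assumed illustrative figures.
window_bytes = 64 * 1024        # assumed data sent before waiting for an ack
rtt_s = 0.200                   # assumed round-trip time (intercontinental)

max_throughput_bps = window_bytes * 8 / rtt_s
print(f"ceiling: {max_throughput_bps / 1e6:.2f} Mbit/s")
# ceiling: 2.62 Mbit/s
```

Note the link’s raw capacity never appears in the formula: with this scheme, a 10 Gbit/s pipe to Australia is no faster than a 3 Mbit/s one.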

By contrast, some other computer protocols don’t wait for acknowledgments. Computer A just goes blahblahblahblahblahblahblahblahblahblahblehblehblehblahblahblah and B just silently listens, going “yep, got that, got that, got that, oh, missed that but I’ll just ignore it, got it, got it, oops, missed another one, oh well, got it, got it…”. This is fine for things like fast-action computer games, where maybe a player’s movement will be off by a meter or two but will be corrected in the next second or so anyway; or alternatively, a streaming HD movie where a missed frame or two wouldn’t be noticed anyway. The upside is slightly reduced latency, but only where data accuracy isn’t a paramount concern. UDP works like this.
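The fire-and-forget model is easy to see in code. A minimal sketch using Python’s standard `socket` module on the loopback interface: the sender transmits a datagram and moves on without ever waiting for an acknowledgment, and the receiver simply takes whatever arrives.

```python
import socket

# Minimal UDP sketch: sender never waits for an ack, receiver just
# listens. Runs entirely on the local machine.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))     # let the OS pick a free port
recv.settimeout(2.0)            # don't block forever if a datagram is lost
port = recv.getsockname()[1]

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"frame-1", ("127.0.0.1", port))  # fire and forget: no ack

data, addr = recv.recvfrom(1024)
print(data)                     # b'frame-1'
recv.close()
send.close()
```

On loopback nothing gets lost, but over a real network a dropped datagram would simply never arrive, and neither side would be told – which is exactly the trade-off described above.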

More info: http://www.cyberciti.biz/faq/key-differences-between-tcp-and-udp-protocols/

Most places in the US still use aging coax and DSL lines for internet, with no upgrade to fiber even under consideration as far as we know.

And here is a link that further discusses the original question, with numbers:


http://rescomp.stanford.edu/~cheshire/rants/Latency.html

There’s definitely a ping limit, but for now it’s usually our equipment and not the speed of light that causes it.

As an aside, here is an interesting old discussion from another forum about whether quantum entanglement can allow faster-than-light communication. Wikipedia also discusses it (1, 2) in less understandable terms.

Yes but no. TCP has a wide window range, so once things get going the sender can send a lot of data and keep the pipeline full without waiting for an ack after each bunch. Latency matters mainly at first (due to “slow start”, which avoids congestion), and matters most when you use lots of short TCP connections rather than keeping one open and running it at full tilt. That is, comparing a connection to next door with one to Australia, the Australian one will take longer to get up to full speed, but the full speed can be the same.
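Keeping the pipeline full requires the window to cover the bandwidth-delay product: the amount of data “in flight” on the path at any moment. A sketch with assumed figures (100 Mbit/s path, 250 ms round trip to Australia):

```python
# Bandwidth-delay product: the window needed to keep a path full.
# Link speed and RTT are assumed illustrative figures.
bandwidth_bps = 100e6           # assumed 100 Mbit/s path
rtt_s = 0.250                   # assumed round trip to Australia

bdp_bytes = bandwidth_bps * rtt_s / 8
print(f"window needed: {bdp_bytes / 1024:.0f} KiB")
# window needed: 3052 KiB
```

As long as the window grows to that size and stays there, the sender never stalls waiting for acks, and the Australia connection runs just as fast as the local one.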

That depends on how far the server is. Under 1000 miles, you’re right. Over that, the speed of light starts to dominate.

Your reference says

Was it written in the ’90s? I get ping times of 20 ms and less.

Guess where I live?


 rtp-ads-174 17:11% ping -c 2 duke.edu
PING duke.edu (152.3.72.104) 56(84) bytes of data.
64 bytes from duke-web-fitz.oit.duke.edu (152.3.72.104): icmp_seq=0 ttl=239 time=4.40 ms
64 bytes from duke-web-fitz.oit.duke.edu (152.3.72.104): icmp_seq=1 ttl=239 time=3.93 ms

--- duke.edu ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1010ms
rtt min/avg/max/mdev = 3.930/4.168/4.406/0.238 ms, pipe 2
 rtp-ads-174 17:11% ping -c 2 udel.edu
PING udel.edu (128.175.13.92) 56(84) bytes of data.
64 bytes from copland.udel.edu (128.175.13.92): icmp_seq=0 ttl=237 time=19.5 ms
64 bytes from copland.udel.edu (128.175.13.92): icmp_seq=1 ttl=237 time=19.3 ms

--- udel.edu ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1016ms
rtt min/avg/max/mdev = 19.316/19.417/19.519/0.172 ms, pipe 2
 rtp-ads-174 17:11% ping -c 2 ucsd.edu
PING ucsd.edu (132.239.180.101) 56(84) bytes of data.
64 bytes from ucsd.edu (132.239.180.101): icmp_seq=0 ttl=44 time=86.7 ms
64 bytes from ucsd.edu (132.239.180.101): icmp_seq=1 ttl=44 time=84.7 ms

--- ucsd.edu ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1008ms
rtt min/avg/max/mdev = 84.788/85.745/86.702/0.957 ms, pipe 2
 rtp-ads-174 17:11% ping -c 2 ucla.edu
PING ucla.edu (128.97.27.37) 56(84) bytes of data.
64 bytes from www.ucla.edu (128.97.27.37): icmp_seq=0 ttl=45 time=76.2 ms
64 bytes from www.ucla.edu (128.97.27.37): icmp_seq=1 ttl=45 time=76.1 ms

--- ucla.edu ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1014ms
rtt min/avg/max/mdev = 76.113/76.161/76.210/0.280 ms, pipe 2


See a correlation there? Most of the latency isn’t in the routers once you get past 1,000 miles or so. Also, the highest latency per router is closest to the user, especially on residential links.

Oops, I see I misread the ping reference above: they meant that when you ping Paris, it isn’t nearly as fast as you’d expect given the speed of light, which is correct.
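The correlation with distance can be checked against the fibre light-speed floor. A sketch, assuming the pings above originated near Research Triangle Park, North Carolina, with rough great-circle distances to each campus (both the origin and the distances are my assumptions):

```python
# Compare measured RTTs with the minimum RTT the speed of light in
# fibre allows (~2e8 m/s). Origin and distances are assumptions.
C_FIBRE = 2e8                   # m/s, light in glass (n ~ 1.5)

# assumed great-circle distances from RTP, NC (km) and measured RTTs (ms)
hosts = {
    "duke.edu": (30, 4.2),
    "udel.edu": (500, 19.4),
    "ucsd.edu": (3600, 85.7),
    "ucla.edu": (3600, 76.2),
}
for host, (km, measured_ms) in hosts.items():
    floor_ms = 2 * km * 1000 / C_FIBRE * 1000
    print(f"{host}: light-speed floor {floor_ms:.1f} ms, measured {measured_ms} ms")
```

For the short hops the measured time is many times the floor (equipment dominates), while for the coast-to-coast hops the floor is a substantial fraction of the measurement: the speed of light is starting to show.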

The big problem right now is having enough bandwidth for wireless, and even that has a lot more to do with ownership rights than physical laws.

Still, it is the reason why it’s so hard to get true unlimited wireless Internet in the U.S.

Perhaps these answers will be obsolete by the time you read them.