I’ve got a Roku. It lets me set the quality of the stream, and I’ve set it to the lowest possible setting, 0.3 Mbps. My internet connection, confirmed at speedtest.net, is 3.0 Mbps.
Yet I experience huge lag: pings of around 1000 ms.
There is no difference, as far as I can tell, whether I put the Roku on 0.3 Mbps or 3.0 Mbps; the lag is the same. (The setting is definitely changing something, since the stream quality changes visibly.)
Why isn’t changing the stream quality changing my lag? And is there some way to reduce it? I don’t think getting a connection with more bandwidth will help, since I’ve already seen that changing the bandwidth setting on the Roku itself makes no apparent difference to the amount of lag.
OK, so bandwidth refers to the amount of data communicated per unit of time, e.g., bits per second (bps), kilobytes per second (kBps), and so on. The term is a bit slippery in casual discussion, though; in this case it’s better to think of bandwidth as the maximum amount of data that can be sent in a given time.
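To make the units concrete, here’s a quick arithmetic sketch (the 3.0 Mbps figure is just your speed-test number):

```python
# Bits vs. bytes trip people up: 3.0 Mbps is megaBITS per second,
# while file sizes are usually quoted in bytes (8 bits each).
link_mbps = 3.0                                # advertised link speed, megabits/s
link_kBps = link_mbps * 1_000_000 / 8 / 1000   # same speed in kilobytes/s
print(f"{link_mbps} Mbps = {link_kBps:.0f} kB/s")  # -> 375 kB/s
```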
Latency, essentially, measures communication delay: the amount of time it takes to send/receive a given chunk (amount) of data. If you ping a server, the data chunk is one ICMP packet; the result is the time (usually in milliseconds) it takes for the packet to be sent by you, travel to the server, have the server respond, have the response travel back to you, and finally have the response processed.
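If you want to measure this yourself without raw ICMP sockets (a real ping needs root privileges), a rough stand-in is to time a TCP handshake, which also costs one round trip. A minimal sketch, with example.com as a placeholder host:

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int = 80, timeout: float = 2.0) -> float:
    """Approximate round-trip time by timing a TCP handshake."""
    start = time.perf_counter()
    # Opening the connection completes one SYN/SYN-ACK round trip.
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    print(f"RTT ~ {tcp_rtt_ms('example.com'):.1f} ms")
```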
So, bandwidth is measured as data per time, while latency is just time. If the data chunk used to measure latency is small, the two are effectively independent: the available bandwidth easily accommodates the data, and the constraint is the travel time itself. However, if the chunk is large, bandwidth does affect latency, for instance when trying to send 640x480 frames over a channel that can only move 320x240 frames in the given time. So, no, the advice given to you wasn’t necessarily bad; however, lowering the stream quality will, at some point, stop having any effect, and with today’s standard hardware that point is pretty high (I’d expect).
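To see why the quality setting barely moves the needle, here’s a toy model (my own simplification, not how the Roku actually buffers): total delivery time is a fixed round-trip delay plus the transfer time, i.e., chunk size divided by bandwidth.

```python
def delivery_time_ms(chunk_bits: float, bandwidth_bps: float, rtt_ms: float) -> float:
    """Toy model: fixed network delay plus time to push the bits through."""
    transfer_ms = chunk_bits / bandwidth_bps * 1000.0
    return rtt_ms + transfer_ms

# One second of video at both of your quality settings, over a
# 3.0 Mbps link with your observed ~1000 ms ping:
for stream_bps in (0.3e6, 3.0e6):
    t = delivery_time_ms(chunk_bits=stream_bps, bandwidth_bps=3.0e6, rtt_ms=1000.0)
    print(f"{stream_bps / 1e6:.1f} Mbps stream: ~{t:.0f} ms per second of video")
```

Lowering the quality shrinks only the transfer term; the ~1000 ms delay term, which is what ping measures, is untouched, which matches what you’re seeing.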
It could be many different things: the quality of the channel (e.g., a noisy line), the communication software (e.g., the implementation of the TCP/IP stack), the server load (e.g., a DDoS attack), or even the network infrastructure (e.g., cable users sharing total bandwidth, or a path with many hops). Each of these obviously has a different solution; unfortunately, I don’t have the experience to suggest a particular course of action.