What is the fundamental limit on game streaming?

CleoTheWizard@beehaw.org to Asklemmy@lemmy.ml – 29 points –

Let me clarify: We have a certain amount of latency when streaming games from both local and internet servers. In either case, how do we improve that latency and what limits will we run into as the technology progresses?


Theoretically, the latency between the streamer and viewers could be zero or near zero.

For playing games online, the minimum possible latency is the speed of light delay. We’re pretty much already at the limit for that one, and we’re even using a lot of pretty clever techniques to mitigate latency such as lag compensation.

Ooh, we're not at the speed of light as a limit yet, are we? Do you mean "point A to point B" on fibre, or do you actually mean full-on "routed-over-the-internet"? Even with fibre (which is slower than the speed of light), you're never going in a straight line. And, at least where I live, you're often back-tracking across the continent before your traffic makes it to the end destination, with ISPs caring more about saving money than routing traffic quickly.

For most of us, there is no difference though; you get what you get.

I live in a nice neighborhood but I won’t ever get fiber… we have underground utilities and this area is served by coaxial cable. There’s no way in hell they are digging up miles of streets to lay fiber; you get what you get.

My ISP latency is like 16-20ms but when sim racing it just depends on where the race server is (and where my competitors are). As someone on the US west coast, if I’m matched with folks in EU and some others in AUS/NZ, the server will likely be in EU and my ping will be > 200. My Aussie competitors will be dealing with 300-400.

It’s not impossible to share a track at those latencies, but for close racing or a competitive shooter… errrr that just doesn’t work.

The fact that I’m always at around 200ms for EU servers might be improved if we could run a single strand of fiber from my house to the EU server (37ms!) but there would still be switching delays, etc. so yeah the speed of light is the limit, but to your point, there’s a lot of other stuff that adds overhead.
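The 37ms figure above checks out as a back-of-envelope calculation. Here's a minimal sketch, assuming a refractive index of ~1.47 for silica fiber and a hypothetical 7,500 km straight run from the US west coast to an EU server (both numbers are illustrative, not measurements):

```python
# Theoretical minimum one-way latency over a straight fiber run.
# Assumptions: refractive index of silica fiber ~1.47, hypothetical
# 7,500 km great-circle distance (US west coast to an EU server).
C_VACUUM_KM_S = 299_792  # speed of light in vacuum, km/s
REFRACTIVE_INDEX = 1.47  # typical for single-mode silica fiber

def fiber_latency_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds through fiber."""
    speed_in_fiber = C_VACUUM_KM_S / REFRACTIVE_INDEX  # ~204,000 km/s
    return distance_km / speed_in_fiber * 1000

one_way = fiber_latency_ms(7500)
print(f"one-way: {one_way:.1f} ms, round trip: {2 * one_way:.1f} ms")
```

That lands at roughly 37 ms one way, before any switching, queuing, or serialization delay is added on top.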

Theoretically it doesn’t really matter whether your connection is fiber or copper. Electricity moves through copper roughly at the same speed as light moves through fiber. The advantages that fiber has over copper is that it can be run longer distances without needing boosting, and that you can run an absolute fuckton more end-to-end connections in the same diameter of cable. More connections means less contention - at least at one end of the pipe. The problem then moves to the ISP’s routers :)
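To put some rough numbers on "roughly the same speed": here's a sketch comparing per-km propagation delay across media. The velocity factors are typical published ballpark figures, not specs for any particular cable:

```python
# Rough per-km propagation delay for different media, showing copper
# and fiber are in the same ballpark. Velocity factors are typical
# published values, not specs for any particular cable.
C_M_S = 299_792_458  # speed of light in vacuum, m/s

velocity_factor = {
    "single-mode fiber": 0.68,  # ~1/1.47 refractive index of silica
    "Cat6 twisted pair": 0.65,  # typical for UTP
    "coaxial cable": 0.80,      # foam-dielectric coax
}

delays = {m: 1e9 / (C_M_S * vf) for m, vf in velocity_factor.items()}
for medium, delay_us_per_km in delays.items():
    print(f"{medium}: {delay_us_per_km:.2f} µs/km")
```

Everything comes out around 4–5 µs per km, so over any realistic consumer distance the medium's propagation speed is a rounding error compared to routing and contention.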

I’d say that the chances are actually quite good that you’ll get fiber internet within the next 10 years. Whether or not it improves your internet connection is another question entirely!

Right on man, thanks for the additional context/info. Much appreciated!

It needs less boosting, but fiber still needs repeaters over sufficiently long spans.

Really the biggest advantage to fiber from a consumer perspective is that it's not subject to signal deformation and interference. You don't have nearly as many issues with fiber Internet as a result.

Sorry, what I wrote here was unclear, I wrote it needs less boosting in another comment, but re-reading this one, it does sound like I’m claiming it needs no boosting over any distance - that’s not what I meant though! I just meant that you can run an equivalent link without any boosting further than you could with copper.

Interference isn’t actually that big of a deal for Ethernet over copper, unless the installer does something silly like run UTP alongside high power electrical lines, or next to a diesel generator, or something. Between shielding, the use of balanced signals, and the design of twisted pair, most interference is eliminated.

Interference isn’t actually that big of a deal for Ethernet over copper, unless the installer does something silly like run UTP alongside high power electrical lines, or next to a diesel generator, or something. Between shielding, the use of balanced signals, and the design of twisted pair, most interference is eliminated.

This should be true, but in practice ... there are a lot more environmental factors that can and do impact copper cables (which can result in some really wacky situations to diagnose: "this only happens on hot days when XYZ part of the line to your house expands"), and more installation errors (e.g., not grounding the wire). That doesn't matter much for TCP applications/protocols, but for UDP applications/protocols it can all add up to something that's observable in the real world.

You get a lot closer to "all" interference being removed with fiber ... and for most gamers at least, that's probably the most noticeable improvement on fiber vs "cable" service (other than perhaps a download/upload speed bump). Pings are in my experience roughly the same, though the fiber networks tend to fare a bit better (probably just from newer hardware backing the network).

It's becoming more of an issue too (in Ohio at least) because more and more ISPs are locking folks out of their modem's diagnostics, so they can't actually see that the modem is detecting signal quality issues coming into the house... I almost always recommend folks just go with fiber all the way into their house if they have the option, unless they just use the web and watch videos (in which case who cares, TCP will make it so you don't care unless it's really bad, and the really bad cases are typically fixed the first time the tech is out).

It's one of those things where there's not much of a benefit for consumers on paper (theoretically -- as you say -- you could have copper service that's just as good and fast as fiber) ... but in practice, fiber just saves a lot of headaches for all parties because of its resistance to interference and simpler installation.

Even with fibre (which is slower than the speed of light)

This makes no sense. Are you referring to the speed of light in a vacuum? Fiber transmits data using photons which travel at the speed of light. While, yes, there is often some slowing of signals depending on whether the fiber is single-mode or multi-mode and whether the fiber has intentionally been doped, it’s close enough to the theoretical maximum speed that it’s not really worth splitting hairs (heh) over.

There are additionally some delays added during signal processing (modulation and demodulation from the carrier to layer 3) but again this is so fast at this point it’s not really conceivably going to get much faster.

The bottleneck really is contention vs. throughput, rather than the media or modulation/demodulation slash encoding/decoding.

At least to the best of my knowledge!

you’re often back-tracking across the continent before your traffic makes it to the end destination, with ISPs caring more about saving money than routing traffic quickly

That’s generally not how routing works - your packets might take different routes depending on different conditions. Just like how you might take a different road home if you know that there’s roadworks or if the schools are on holiday, it can be genuinely much faster for your packets to take a diversion to avoid, say, a router that’s having a bad day.

Routing protocols are very advanced and capable, taking many metrics into consideration for how traffic is routed. Under ideal conditions, yes, they’d take the physically shortest route possible, but in most cases, because electricity moves so fast, it’s better to take a route that’s hundreds of miles longer to avoid some router that got hacked and is currently participating in some DDoS attack.
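The "a longer route can be faster" point can be made concrete with a toy link-state example. This is a minimal sketch (node names and link costs are invented), using Dijkstra's algorithm the way link-state protocols like OSPF conceptually do, where link cost reflects congestion or health rather than just distance:

```python
import heapq

# Toy link-state routing: costs model congestion/health, not just
# physical distance, so the "longer" path can win. The direct A-C
# link is expensive (e.g., a router having a bad day).
links = {
    "A": {"B": 1, "C": 10},
    "B": {"A": 1, "D": 1},
    "C": {"A": 10, "D": 1},
    "D": {"B": 1, "C": 1},
}

def shortest_path(src: str, dst: str):
    """Dijkstra's algorithm; returns (path, total cost)."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, cost in links[node].items():
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[dst]

print(shortest_path("A", "C"))  # the detour via B and D beats the direct link
```

Here the two-extra-hop detour costs 3 versus 10 for the direct link, which is exactly the "take a diversion to avoid a bad router" case.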

That’s generally not how routing works

It is how it works ... mostly because what they're talking about is the fact that the Internet (at least in the US) is not really set up like a mesh at the ISP level. It's somewhere between "mesh" and "hub and spoke", where lots of parties that could talk directly to each other don't (because nobody ever put down the lines and set up the routing equipment to connect two smaller ISPs or customers directly).

https://www.smithsonianmag.com/smart-news/first-detailed-public-map-us-internet-infrastructure-180956701/

There’s absolutely nothing wrong with that topology - the fact that you seem to think that the design is a bad thing really demonstrates your lack of understanding here.

For example, have you never wondered why we don’t just connect every device in a network all together like a big daisy chain? Or why we don’t use a mesh network? There is a large number of reasons why we don’t really use those topologies anymore.

I don’t want to get into the specifics, but in general, the more networks a router is connected to, the less efficient it is overall.

The propagation delay is pretty insignificant for most routers. Carrier grade routers like those at the core of the internet can handle up to 43 billion packets per second, another hop is absolutely nothing in terms of delay.

For example, have you never wondered why we don’t just connect every device in a network all together like a big daisy chain? Or why we don’t use a mesh network? There is a large number of reasons why we don’t really use those topologies anymore.

Well daisy chaining would be outright insanity ... I'm not even sure why you'd jump to something that insane ... my internet connection doesn't need to depend on the guy down the street.

Making an optimally dense mesh network (and to be clear, I mean a partially connected mesh topology with more density than the current situation ... which at a high level is already a partially connected mesh topology) would not be optimally cost effective ... that's it.

the more networks a router is connected to, the less efficient it is overall. another hop is absolutely nothing in terms of delay.

Do you not see how these are contradictory statements?

Yeah, you'd need more routers, you have more lines. But you could route more directly between various points. e.g., there could be at least one major transmission line between each state and its adjacent states to minimize the distance a packet has to physically travel and increase redundancy. It's just more expensive and there's typically not a need.

This stuff happens in more population-dense areas because there's more data and more people, so direct connections make more sense. It's just money; it's not that having fewer lines through the Great Plains somehow makes the internet faster... Your argument and your attitude are something else. I suspect we're just talking past each other, but w/e.

I’m becoming more and more convinced that you don’t really know what you’re talking about. Are you a professional network engineer or are you just a hobbyist?

I wear a lot of hats professionally; mostly programming. I don't do networking on a day-to-day basis though if that's what you're asking.

If you've got something actually substantive to back up your claim that (if money was no object) the current topology is totally optimal for traffic from an arbitrary point A <-> B on that map though... have at it.

This all started with:

you’re often back-tracking across the continent before your traffic makes it to the end destination, with ISPs caring more about saving money than routing traffic quickly

And that's absolutely true ... depending on your location, you will travel an unnecessary distance to get to your destination ... because there just aren't wires connecting A <-> B. Just like a GPS will take you on a non-direct path to your destination because there's not a road directly to it.

A very simple example where the current topology results in routing all the way out to Seattle only to backtrack: https://geotraceroute.com/?node=0&host=umt.edu#

The problem that I’m having (and why I asked that) is because I was assuming that you would have some knowledge which you don’t seem to have with a lot of my comments. I’m really not trying to be rude, but it makes it a lot more difficult to explain the flaws in your reasoning when you’re talking about topics that are beyond your knowledge as if you know them well.

I have explained the realities of the situation to you, if you don’t want to accept them, that’s fine, but you’re basically arguing with an expert about something you don’t really understand very well. I’m happy to explain stuff but you should just ask rather than assume you know better because it makes it much more difficult for me to understand the gaps in your understanding/knowledge.

So ultimately, for routers, we have a number of limited resources. Firstly, yes, interfaces, but also the usual stuff - CPU, RAM, etc.

Now, I mentioned before that routing protocols are very complex - they have many metrics which are taken into account to determine what path is ultimately best for each packet. This is a process which can be quite intensive on CPU and RAM - because the router needs to “remember” all of the possible routes/destinations a packet can travel, as well as all of the metrics for each destination - distance, delays, administrative distance, TTL, dropped packets, etc. and then make a decision about processing it. And it needs to make these decisions billions of times a second. Slowing it down, even a tiny bit, can hugely impact the total throughput of the router.

When you add another connection to a router, you’re not just increasing the load for that one router, but for the routers which connect to the routers which connect to those routers which route to the routers that route to that router… you get the idea. It increases the number of options available, and so it places additional burden on memory and processing. When the ultimate difference in distance is even an extra 100 miles, that’s less than a millisecond of travelling time. It’s not worth the added complexity.

That’s what I meant when I said that an extra hop isn’t worth worrying about, but adding additional connections is inefficient.
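The "every route a router learns is another thing to remember and check" idea can be sketched with a toy longest-prefix-match lookup. Real routers use tries or TCAM rather than a linear scan, and these prefixes and next-hop names are invented, but the per-route memory/lookup burden is the point:

```python
import ipaddress

# Toy forwarding table: longest-prefix match picks the most specific
# route. Every additional route learned is more state to store and
# more candidates to consider. Prefixes/next-hops are made up.
routes = {
    ipaddress.ip_network("10.0.0.0/8"): "hop-A",
    ipaddress.ip_network("10.1.0.0/16"): "hop-B",
    ipaddress.ip_network("10.1.2.0/24"): "hop-C",
}

def next_hop(addr: str) -> str:
    """Return the next hop for the most specific matching prefix."""
    ip = ipaddress.ip_address(addr)
    matches = [net for net in routes if ip in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("10.1.2.3"))  # most specific /24 wins -> hop-C
print(next_hop("10.9.9.9"))  # only the /8 matches -> hop-A
```

Scale that table up to the roughly million prefixes in the global BGP table and it's clear why table growth, not wire speed, is the pressure point.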

but you’re basically arguing with an expert about something you don’t really understand very well. I’m happy to explain stuff but you should just ask rather than assume you know better because it makes it much more difficult for me to understand the gaps in your understanding/knowledge.

Okay, I'll apologize... For context though, in general, it's the internet and it's hard to take "expert" at its word (and even outside of an online context, "expert" is a title I'm often skeptical of ... even when it's assigned to me :) ). I've argued with plenty of people (more so on Reddit) that are CS students... It's just the price of being on the internet I guess, ha

I'm still not sure I agree with your conclusions, but that's mostly healthy skepticism... because your argument isn't tracking with ... well ... physics or distributed computing... more direct "routes" and taking load off "routes" that aren't the optimal route typically is a great way to speed up a system. It's definitely true that doing that adds overhead vs just having a few "better" systems do the work (at least from some perspectives), but it's hard for me to imagine that with sufficient funds it truly makes it worse to give routing algorithms more direct options and/or cut out unnecessary hops entirely.

Reducing "hops" and travel time is kind of the bread and butter of performance work when it comes to all kinds of optimizations in software engineering..

If you want me to ask a question ... what's your explanation for why there are so many more connections in the north east and west coast if more connections slows the whole system down? Why not just have a handful of routes?

You can’t really compare small-scale clusters of highly available services with the scale of the entire Internet, it’s just an entirely different ballgame. Though even in small scale setups, there is always a sweet spot between too many paths and not enough paths - VRRP (which is the protocol usually used for high availability) actually has quite a big overhead, you can’t have too many connections on the same network or it causes lots of problems.

Internet scale routing usually uses BGP, which also has quite a heavy overhead.

I guess all you need to understand is that routing isn’t free, and the more routes, the more overhead. So there’s always going to be a point where adding more routes just makes things slower rather than faster. And BGP… is just a bit of a mess, right now, honestly. The BGP table has grown so big that a lot of older devices can’t keep it in fast memory anymore, so they either have to be replaced with newer hardware or use slow memory (and therefore slow processing of packets). So it’s not really in everyone’s best interests to just keep adding more routes. It’s harder and harder to justify.

why there are so many more connections in the north east and west coast if more connections slows the whole system down

I’m not from the US, so at best it would be an educated guess.

Firstly, it’s not as simple as just “more connections is more slow”, it means there’s a greater overhead. If the improvement from adding another line is greater than the overhead, then it can be worthwhile. For example, imagine a simple network with three routers, A, B and C, where A is connected only to B, and C is connected only to B, meaning that B is connected to both A and C. If there is a large amount of traffic between A and C, it may be worth adding a direct connection between them. If there isn’t, then it’s probably not worth doing.
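A back-of-envelope version of that A-B-C trade-off, with entirely illustrative numbers (per-hop overhead, distances, and fiber speed are assumptions, not measurements):

```python
# Is a hypothetical direct A-C link worth it? Illustrative numbers only.
PER_HOP_OVERHEAD_MS = 0.05   # assumed processing/queuing per router hop
VIA_B_DISTANCE_KM = 1200     # A -> B -> C total cable distance
DIRECT_DISTANCE_KM = 1000    # hypothetical new A -> C line
KM_PER_MS = 204              # light in fiber travels ~204 km per ms

via_b = VIA_B_DISTANCE_KM / KM_PER_MS + 2 * PER_HOP_OVERHEAD_MS
direct = DIRECT_DISTANCE_KM / KM_PER_MS + 1 * PER_HOP_OVERHEAD_MS
print(f"via B: {via_b:.2f} ms, direct: {direct:.2f} ms")
```

The direct line saves around a millisecond here; whether that justifies laying cable depends entirely on how much A-C traffic there is, which is the point of the example.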

I guess it’s a bit like adding a new road between two existing roads. Is it worth adding a junction and a set of traffic lights to some existing roads, or would that slow down traffic enough not to be worth doing?

Maybe, since you work with software more, it would make sense to put it this way: why don’t you create an index for every single possible column and table in SQL?

Or just look at it like premature optimisation. There’s a saying about premature optimisation in software engineering! ;-)

Another thing to keep in mind though is that there’s definitely still quite a few bad decisions still kicking around from when the internet was new. It takes time and effort to get rid of the legacy junk, same as in programming.