Latency Monitor

  • Based on a discussion on Discord, I've created a latency monitor for Roblox. It really shouldn't be necessary for most games, but it can be useful in situations where you want to validate which user actually performed something first. It may also be useful when you want to better synchronize various events across all players.

    It's not perfect, since Roblox seems to update network data only every other frame. This means that the lowest latency you're likely to see is about 1/30th of a second (roughly 33.33 milliseconds). Because of this, actual latency may very well be lower than reported (more likely 2-15 milliseconds). However, even if it is lower, Roblox doesn't let you take advantage of it anyway.

    I've uploaded the module script files and preconfigured place file to GitHub.
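    For reference, the round-trip approach can be sketched in a few lines. This is a minimal illustration, not the actual module on GitHub; the RemoteFunction name "Ping" and the once-per-second loop are assumptions:

    ```lua
    -- Server script: minimal round-trip latency sketch (names are hypothetical).
    -- Assumes a RemoteFunction named "Ping" in ReplicatedStorage whose
    -- client-side OnClientInvoke handler simply returns immediately.
    local Players = game:GetService("Players")
    local ReplicatedStorage = game:GetService("ReplicatedStorage")
    local ping = ReplicatedStorage:WaitForChild("Ping")

    local function measureLatency(player)
        local start = os.clock()
        ping:InvokeClient(player) -- yields until the client responds
        return os.clock() - start -- round-trip time in seconds
    end

    while true do
        for _, player in ipairs(Players:GetPlayers()) do
            local rtt = measureLatency(player)
            print(player.Name, "RTT:", math.floor(rtt * 1000 + 0.5), "ms")
        end
        task.wait(1)
    end
    ```

    Note that InvokeClient will yield indefinitely if a client never responds, so a real module should guard against that (e.g. with a timeout).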

  • How will this help the game by adding more requests?

    It would be best to check the latency when a request is made.

  • TL;DR - One latency request per second accounts for less than 0.05% of CPU per second, which will in no way harm game performance.

    While it is true that it would be "optimal" to check latency when a "more purposeful" request is made, adding one periodic request with a payload of 8 bytes (Lua typically represents both integer and floating point values as 64-bit floats) at a rate of far less than once per frame will in no way harm game performance. The one exception is if the server or clients are already running at their maximum processing capacity and therefore cannot afford the CPU cycles needed to buffer the incoming network data into Roblox-accessible RAM.

    If that is the case, then the game already has a critical design issue and needs further optimization, as no game should ever run at 100% processing capacity. If for some reason the game cannot be optimized further to reduce requirements, then that simply pushes the system requirements of the game higher.

    As a rule of thumb, it takes about 1 Hz of CPU capability per bit of data transferred at the hardware level. So we have an 8 byte payload coupled with a typical 52 byte UDP overhead, giving us a 60 byte packet (60 bytes x 8 bits per byte = 480 bit packet). Even if we make the RIDICULOUSLY aggressive assumption that the machine's network driver, OS process/thread scheduling, and Roblox combine to introduce an additional 100,000% of overhead per packet (actual overhead would be MUCH MUCH lower than this), we're still only at 480 cycles + 100,000% = 480,480 CPU cycles (480,480 Hz or 480.48 KHz). If the latency request were sent once per second on a 1 GHz machine (the lowest-spec Apple device currently supported by Roblox; the minimum desktop spec is 1.6 GHz), that's 480,480 cycles out of the 1,000,000,000 available every second (or 0.048048% of available CPU).

    If we're dealing in TCP packets, that's 8 bytes + 58 byte overhead = 66 byte packet. 66 bytes x 8 bits per byte = 528 bits = 528 CPU cycles. 528 CPU cycles + 100,000% CPU overhead = 528,528 cycles. 528,528 cycles / 1,000,000,000 available cycles per second = 0.0528528% of available CPU.
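    The arithmetic above can be double-checked in a few lines of plain Lua. The 1-cycle-per-bit figure and the 100,000% overhead are the same assumptions as above, not measured values:

    ```lua
    -- Reproduce the back-of-envelope CPU estimates above.
    -- Assumptions carried over: 1 CPU cycle per bit at the hardware level,
    -- plus an assumed 100,000% overhead (i.e. x1001 total), on a 1 GHz machine.
    local function cpuPercent(payloadBytes, headerBytes)
        local bits = (payloadBytes + headerBytes) * 8 -- packet size in bits
        local cycles = bits * 1001                    -- base cycles + 100,000% overhead
        return cycles / 1e9 * 100                     -- % of 1 GHz, once per second
    end

    print(cpuPercent(8, 52)) -- UDP: 0.048048
    print(cpuPercent(8, 58)) -- TCP: 0.0528528
    ```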

    How exactly is this even remotely harming the game? If someone's game hinges on 0.0528528% of processing power, then they are doing something VERY wrong. Mind you, the actual processing requirements of this latency check are MUCH MUCH lower. I'm almost tempted to profile it, but I feel I've spent enough time explaining the insignificance of a single packet transmission. Actually, go ahead and double it to account for both the latency request sent and the reply received... It still doesn't matter.

    Now, if Roblox is adding an additional 1KB of overhead per RemoteFunction invocation then we may have something to talk about but then that would only suggest some very bad network stack handling on the part of Roblox's developers.

  • I first want to add the Roblox-uses-UDP information here, and note that in most cases hardware is not the main cause of latency. I also do not know how remote events and functions are managed internally within Roblox, meaning that you cannot simply assume how this is done.

    Roblox gives a target of 50kbps for both upload and download (see the in-game stats), and most games will easily use this amount of bandwidth during normal gameplay. It is always best to limit the number of requests, which is why I said it would be best to check the latency as a RemoteFunction is invoked.

  • TL;DR - One latency request per second accounts for less than 0.05% of CPU per second and 0.117% of bandwidth, which will in no way harm game performance.

    As I stated, I agree with you that it is "'optimal' to check latency when a 'more purposeful' request is made".

    You asked though, "How will this help the game by adding more requests?"

    My first post addressed a couple of useful scenarios, while my second addressed how it doesn't hurt the game in terms of processing performance. I admit I did not address it in terms of bandwidth, though. Nowhere did you or I mention the "cause of latency". That's not what this is about, unless you are suggesting that adding a check for latency adds additional latency. Is that what you're saying?

    In terms of the target network usage, to be accurate, it's 50 KBps (180 MB/hr), not 50 kbps (22.5 MB/hr).

    So I went ahead and did a light "visual" profile of the network stats within Studio's View tab. In an empty place with a single player, without the Latency Monitor, the Roblox server averages 0.07 KB/s out and 0.07 KB/s in. In the same empty place with the Latency Monitor running, the averages are 0.09 KB/s out and 0.11 KB/s in. This suggests roughly a 60 byte overall payload to send RemoteFunction:InvokeClient() with no parameters and receive a single number value back. I could go further and use a network analyzer such as Wireshark to inspect the actual protocol-level packets being sent, but it already doesn't seem worth the effort. 60 bytes / 50 KB (50 x 1024 bytes) = 0.117% of target bandwidth.
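    For completeness, that bandwidth figure works out the same way (the 60 byte estimate comes from the Studio measurement described above):

    ```lua
    -- Observed extra traffic vs. the 50 KB/s target.
    local observedBytes = 60       -- approximate payload per request/reply pair
    local targetBytes = 50 * 1024  -- 50 KB/s target
    print(observedBytes / targetBytes * 100) -- ~0.117% of target bandwidth
    ```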

    I won't argue the average bandwidth of Roblox games as I don't know that figure, however, I will say that there is MUCH that can and should be done to keep bandwidth usage low.

    Realistically speaking, the primary thing that concerns me about Roblox and its network usage (with respect to FilteringEnabled being on) is Instance and respective physics replication from the workspace. This design model, building out the game world in the server's workspace and having it replicated to players automatically, is a primary reason why development on Roblox is so "easy". However, it's far from optimal from a networking or large/complex-game perspective.

    This is NOT the way typical PC/console/mobile online games work. Typically, the vast majority of assets and the physics engine exist almost entirely on the client. The server is responsible for replicating position, orientation, transformation, physics input data, and events for whole assets between clients, not for each granular primitive that makes up the asset. This can be done in Roblox too, but it is a more complex approach using local parts/models and handling the replication yourself. Unfortunately, even with FE, the client will still attempt to send physics updates to the server, although the server won't accept the packets according to my tests.
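    To make the custom-replication idea concrete, here is a rough client-side sketch of the approach described above. The RemoteEvent name and payload shape are hypothetical, and a real implementation would also need interpolation and input handling:

    ```lua
    -- Client script: manual replication with local assets (all names hypothetical).
    -- The server broadcasts one id + CFrame per asset; each client builds the
    -- granular parts locally, so the primitives themselves are never replicated.
    local ReplicatedStorage = game:GetService("ReplicatedStorage")
    local assetState = ReplicatedStorage:WaitForChild("AssetState") -- RemoteEvent

    local localAssets = {} -- asset id -> locally-built Model

    assetState.OnClientEvent:Connect(function(id, cframe)
        local model = localAssets[id]
        if model then
            model:PivotTo(cframe) -- move the whole asset with a single update
        end
    end)
    ```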

    That said, very few games (even games like WoW, Battlefield, Halo, FIFA, Minecraft, etc) reach 50 KB/s of bandwidth.

    Ultimately though, speculative arguments won't get anyone anywhere. Actual profiling is where the truth lies. And so far, my point still stands... One latency request per second accounts for less than 0.05% of CPU per second and 0.117% of bandwidth which will in no way harm game performance.
