Add a _network_process() method to complement _process() and _physics_process() (#2020)
Comments
Is it maybe possible to define your own process-like functions that update at a custom rate? That way we could implement our own loops. I mean, you could do it with a timer, but maybe doing it this way is more efficient.
Updating a GUI in …
This would be very useful for multiplayer games, especially server-side. I will take a stab at this.
Usually, however, just a custom … There are many ways to support this feature in … Rather, I think a … would be more useful.
Isn't that a Timer node? 🙂 The only difference is that the time is specified in seconds rather than Hz, but it's only a different way to specify the time unit.
Yes, exactly, something like the Timer node. Then, you can simply make that a singleton; the nodes that need it can just do:

```gdscript
func _ready():
    MyNetworking.connect("process", self, "_network_process")

func _network_process(delta):
    # Make sure it's ticking at 60 Hz.
    assert((1 / delta) == 60)
```
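As a rough sketch (assuming GDScript 3.x; the property names are illustrative, and the tick is driven from _physics_process() with an accumulator), the MyNetworking autoload itself could look something like this:

```gdscript
# MyNetworking.gd -- registered as an autoload (singleton) in the project settings.
extends Node

signal process(delta)

var ticks_per_second = 60
var _accumulator = 0.0

func _physics_process(delta):
    _accumulator += delta
    var step = 1.0 / ticks_per_second
    # Emit the networking tick at a fixed rate, independent of the physics rate.
    while _accumulator >= step:
        _accumulator -= step
        emit_signal("process", step)
```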
@AndreaCatania I don't entirely agree with your assessment here. In many networked games, such as MMOs that use basic client-side prediction and server reconciliation, emission rates from the client are irrelevant. The only rate that really matters is the tick rate of the server, as this is what determines the amount of egress data used. For other clients, basic entity interpolation is enough, because seeing them x ms behind is fine. In some other more advanced approaches, sure, this might not be enough, but I wouldn't go as far as to call it useless. The question is, do we want to try to kill two birds with one stone here, or treat them as separate problems? Edit: reading @AndreaCatania's response again, perhaps I am misunderstanding. Are you saying the flaw with this approach is not being able to adjust the tick rate on the fly?
What I'm trying to say is that I feel it's too restrictive for a networking tick to be really useful.
The node, as proposed here #2020 (comment), is much easier to implement and doesn't suffer from the above issues, making it more useful. So, putting on the scales the …
Hmm... I'll try to reply in the order of your concerns.
In your singleton idea, how would the signal emission be controlled? That is, when would the singleton emit the signal? I think the goal here is to provide a simple, out-of-the-box mechanism that solves this common problem (one novice multiplayer devs usually don't even know they have) in a convenient way for the 80% case. As complicated as network code is, I don't think we'll ever get something that works for 100% of use cases, and in those outlier cases, people can do something more custom.
If I got it right, what you are trying to achieve with this proposal is an out-of-the-box mechanism for a simple problem: call … Your intention is to just use …
To me, this doesn't solve the 80% of issues that you have to solve when you deal with a networked game, which usually uses more complex algorithms than just sending inputs at a fixed rate.
One important thing to highlight is that while you can have what …
@AndreaCatania I think there might be some misunderstanding of what this proposal is for. This proposal is intended more for the server side than the client side. It doesn't have much to do with sending RPCs; instead, it provides a way for servers to tick at a constant rate that is different from the physics interval. This could be useful, for example, when processing input messages from a client. You usually only want to process the server-side input buffer and replicate back to clients at something like 15 FPS, for instance to save on network bandwidth. Edit: if this was part of Node instead of a singleton, it could still distinguish between server and client and react accordingly, if for some reason you did want to use it on the client (I still don't know why you would, though). Also, it fits better than a singleton. There are currently no existing out-of-the-box autoloads, and having a node that only works as an autoload would be not only unconventional but also difficult to document. There's also a UX problem to solve there that is non-trivial.
If by "process inputs" you mean read them, pack them in a buffer, and provide them as inputs for the following …: it's not as simple as just syncing (/processing) an input buffer at a fixed rate, and I have the impression that the scope in which this feature will be useful is really small. But I don't really know how you will use it, so I'm probably wrong; can you please explain (using pseudo code) how it will be used?
The engine should not provide support for a workflow that is not well defined as a core feature. Rather, it should provide the tools in the form of …
Ah! I see what you are saying now, @AndreaCatania. I believe you are correct about this, in fact. Hmm, instead of a singleton node, I wonder if we could just expose something in NetworkPeer or MultiplayerAPI that could be hooked into from GDScript.
I think this proposal may actually just lead to more confusion than anything else.
I don't think the above is true. This could be implemented with a global timer, or a controller node with _process(delta) and an accumulator, for example (see the sketch at the end of this comment). Server tick rate (simulation rate) and broadcast (send or sync) rate are often different. These two things should be independent. They can be the same, but often they are not. It's often desirable to have the server tick rate be relatively high: the server can receive input at any time asynchronously, so a higher tick rate means a smaller worst-case additional latency between receiving a client input and processing that input (at 60 Hz, the maximum delay is about 16 ms if an input has unfortunate timing and arrives just slightly after input processing on the server). And the send/sync/broadcast rate is sometimes not desirable to have fixed globally (the same for each client connection). If an individual client has low bandwidth, you may not want to exacerbate a congestion problem by throwing the same amount of data at that client (more than they can handle). In that case, the game server may want some basic congestion avoidance for that particular client connection, which could bring that particular client's send/sync/broadcast rate down.
I have not heard of the above being done. I'm not sure this makes sense to me. What does "network FPS" actually mean in the above context? If it means the actual game "simulation rate" (tick rate), then the benefit of reducing it dynamically is just to lower CPU overhead (at the cost of additional latency introduced, and the complexity of syncing this change in real time among clients and server). If it means "network send rate", then I don't understand how decreasing or increasing it changes "network smoothness" (a term that in and of itself needs a much better definition for discussion to happen) for clients or servers, unless egress bandwidth on your game server is an issue, which it's not likely to be in the suggested scenario. I just think #2020 (comment) basically covers this proposal, no?
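As a rough sketch of the controller node + accumulator approach mentioned above (GDScript; the rate values and the _simulate()/_broadcast() placeholders are purely illustrative, none of this is engine API):

```gdscript
extends Node

const SIMULATION_HZ = 60   # Server tick (simulation) rate.
var send_hz = 20           # Broadcast rate; could be lowered per client for congestion avoidance.

var _sim_accumulator = 0.0
var _send_accumulator = 0.0

func _process(delta):
    _sim_accumulator += delta
    _send_accumulator += delta

    # Fixed-rate simulation ticks, independent of the rendered frame rate.
    var sim_step = 1.0 / SIMULATION_HZ
    while _sim_accumulator >= sim_step:
        _sim_accumulator -= sim_step
        _simulate(sim_step)

    # Lower-rate state broadcast, independent of the simulation rate.
    if _send_accumulator >= 1.0 / send_hz:
        _send_accumulator = 0.0
        _broadcast()

func _simulate(step):
    pass # Advance the authoritative game state here.

func _broadcast():
    pass # Send/sync state to connected clients here.
```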
Here, "Network FPS" refers to both the server physics simulation rate and the network update rate. The main goal is to save on CPU resources, so maybe lowering only the server physics simulation rate would make more sense.
A common use case for us in previous games has been to have an extremely high physics tick rate (i.e. 240 Hz) and a low network broadcast rate (i.e. 30 Hz). I don't think having a separate process function is the right approach here. It is very simple to build your own time accumulator in any given _process. We have done that already and likely would not use network_process, simply because we would have to code-review the whole system to determine whether it has any problems, and if it did, we would then have to engage in the upstreaming process. My opinion is that this is too simple a feature to add value. If a team is sophisticated enough to implement dedicated-server-based multiplayer, they can figure out how to call a function at regular intervals.
I don't want a …

```gdscript
extends Spatial

func _ready():
    Engine.iterations_per_second = 120
    Engine.target_fps = 2
    var peer = NetworkedMultiplayerENet.new()
    if "--server" in OS.get_cmdline_args():
        peer.create_server(50000)
    else:
        peer.create_client("localhost", 50000)
    peer.transfer_mode = NetworkedMultiplayerPeer.TRANSFER_MODE_UNRELIABLE
    get_tree().network_peer = peer

func ping():
    rpc_id(1, "pong", OS.get_system_time_msecs())

remote func pong(time):
    var latency = OS.get_system_time_msecs() - time
    print("one way latency %s ms" % latency)

func _physics_process(delta):
    if get_tree().get_network_unique_id() != 1:
        ping()
```

and then run the server and client some random interval apart (bash …).
@DanielKinsman I don't think there is a way around this. Setting … To reduce CPU usage, you can still reduce …
I'm not actually running at 2 FPS; that was just to highlight the issue by taking it to the extreme. In my testing I am running a 100 Hz monitor and a 60 Hz physics tick. Due to the timing of those windows, on average it means that sending RPCs from the client … In reality, the various game clients and non-dedicated servers will all have their own weird and wonderful and varying refresh rates and GPU capabilities, and setting the target FPS or tinkering with iterations per second is not really an option. In my very naive thinking, I am imagining that there is an RPC "message pump" somewhere in the engine code, and I would like an option to have that pump run in step with the physics process instead of process, or to have it disengaged entirely from either (and let the user deal with their own thread-safety issues).
Running physics on a separate thread might achieve this – there's an option for this in the project settings, but it's not 100% reliable. See also #1288.
For comparison, what round trip time do you get if you set …?
It is basically the same at 60 Hz. I made a quick test project for comparing syncing physics via RPC or UDP at https://github.com/DanielKinsman/godot_latency_test
You can already disable automatic polling and poll the multiplayer API manually in … Doing that in a separate thread (completely unrelated to engine iterations) is much more complex, and quite a corner case, because you almost always want your network code (RPCs/replication) to happen in sync with either the physics or the frame state. For these reasons, by default the connection is polled during process. In this sense, a … As explained above by others, achieving a lower network tick is already possible via a Timer and manual polling. That said, I believe the current implementation of …
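For reference, a minimal sketch of that manual polling, assuming Godot 3.x where SceneTree.multiplayer_poll and MultiplayerAPI.poll() are available; here the polling is simply moved into _physics_process():

```gdscript
extends Node

func _ready():
    # Stop the SceneTree from polling the multiplayer API automatically each frame.
    get_tree().multiplayer_poll = false

func _physics_process(_delta):
    # Process pending RPCs/replication in sync with the physics tick instead.
    get_tree().multiplayer.poll()
```

The same poll() call could instead be driven from a Timer to get a network tick lower than the physics rate.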
Thanks, I wasn't aware of disabling automatic polling, and that is exactly what I needed! When enabling it for both client and server, it gets down to 1 frame. There still seems to be some interaction with the frame rate, but it is acceptable, especially given how much easier it is to use the built-in high-level networking.
physics.mp4
_physics_process is sadly tied to the frame rate (input processing too); it behaves more like substepping than something being called at regular intervals. I wanted to use it to reduce/untie input and network latency from rendering, but sadly it's not designed that way. If you have 10 FPS and a 100 Hz physics rate, the engine will run the physics in a loop 10 times immediately after rendering without any delay, then idle-wait 10 ms until the next frame can be rendered. The physics frames will be spaced out by whatever time it took to calculate the physics, which can be close to 0 ms. It would be nice to have a way to enable evenly spaced physics steps instead of the current behaviour. Or even have some kind of busy-waiting timer (the current timer is also tied to the frame rate/game time if the physics gets ahead by more than 8 frames) or a constantly running node, and let us manually call physics processing + network polling from that.
For input processing, this is being tracked in #1288. Running network processing at a higher rate than rendering is interesting, as it can allow for significantly lower ping on 60 Hz monitors with V-Sync enabled. That said, disabling V-Sync and targeting higher framerates will give you similar benefits, on top of also reducing input lag (at least until #1288 is implemented). This is what most competitive players do anyway 🙂
See #1893. #2821 may also be handy here, as it'll let you simulate as many physics ticks per frame as you'd like.
Closing, as I no longer think the original proposal is that useful now that we have MultiplayerSynchronizer. Also, the workaround is only 2 lines of code.
Describe the project you are working on
The Godot editor 🙂
Describe the problem or limitation you are having in your project
There is no built-in way to send network data at a different rate than the physics update rate. However, you may want to do that to have accurate/low-latency client-side prediction but keep the network bandwidth usage low.
While 60 Hz network updates are a good baseline nowadays, large-scale/MMO games may have to settle for 30 Hz or even 20 Hz updates to keep the bandwidth usage manageable.
Describe the feature / enhancement and how it helps to overcome the problem or limitation
Like _process(delta) and _physics_process(delta), call _network_process(delta) every 1 / network_fps seconds if it's present in a node script.

Exposing the network FPS as a global value also allows easily changing the value to accommodate server load or playing conditions. This is often done in battle royale games to increase network smoothness as the number of alive players decreases.
Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams
Add Engine.network_fps (integer) to complement Engine.physics_fps. Also add the associated project setting, like for the physics FPS.

Note: If we rename Engine.physics_fps to Engine.physics_ticks_per_second, we should name it Engine.network_ticks_per_second instead.

The default value would be 60 (identical to the physics FPS in 3.2.x). Unlike the physics FPS, it will not vary depending on the monitor's refresh rate (if support for that is implemented in 4.0).
If this enhancement will not be used often, can it be worked around with a few lines of script?
Yes, but it's not exactly trivial to do so over a whole codebase. For instance, you could use the following code:
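(A minimal sketch, assuming a fixed physics tick rate; NETWORK_FPS and _network_process() here are illustrative names rather than existing engine API.)

```gdscript
extends Node

const NETWORK_FPS = 30  # Illustrative value.

func _physics_process(delta):
    # Run the network callback once every N physics ticks.
    var ticks_per_network_tick = Engine.iterations_per_second / NETWORK_FPS
    if Engine.get_physics_frames() % ticks_per_network_tick == 0:
        _network_process(delta * ticks_per_network_tick)

func _network_process(delta):
    pass # Send input/replication data here.
```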
On top of that, the above code snippet will break if variable physics update rate is implemented in 4.0.
Is there a reason why this should be core and not an add-on in the asset library?
This is core engine functionality that can't be implemented at low-level by an add-on.