
client function in ReplicatedClients is O(n) #326

Open
phisn opened this issue Sep 12, 2024 · 7 comments

Comments

@phisn

phisn commented Sep 12, 2024

Is there any reason why a vector instead of a hash map is used here?

pub struct ReplicatedClients {
clients: Vec<ReplicatedClient>,
policy: VisibilityPolicy,
replicate_after_connect: bool,
}
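
For context, a Vec-backed lookup presumably amounts to a linear scan over the clients, roughly like the following sketch (not the actual implementation; client.id() is assumed here):

pub fn client(&self, client_id: ClientId) -> &ReplicatedClient {
    // O(n): scan the Vec until the matching id is found.
    self.clients
        .iter()
        .find(|client| client.id() == client_id)
        .expect("client should be connected")
}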

I have the following use-case: entities & players have a transform and move around. When entities get near players, they should become visible. Given that I have a lot of entities moving around, it seems more realistic for an entity to notify the player when it gets near enough. Therefore I would like to look up ClientVisibility efficiently.

The required function is change_visibility(entity, client_id, state). We could implement it in one of the following ways:

  • Just call replicated_clients.client(client_id).visibility_mut().set_visibility(entity, state).
  • Call replicated_client.visibility_mut().set_visibility(entity, state) while iterating over all clients using replicated_clients.iter_mut(). Since the number of roaming entities is large, it is not viable to do this without caching proximity changes while iterating the entities. We could use a HashMap<ClientId, Vec<(Entity, bool)>> (sketched below), which seems unnecessarily difficult considering the underlying structure of ReplicatedClients.
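
A minimal sketch of that caching workaround; proximity_changes and apply_proximity_changes are illustrative names for whatever system detects the proximity changes:

use std::collections::HashMap;

fn apply_proximity_changes(
    proximity_changes: Vec<(Entity, ClientId, bool)>,
    replicated_clients: &mut ReplicatedClients,
) {
    // Cache the detected changes per client first...
    let mut changes: HashMap<ClientId, Vec<(Entity, bool)>> = HashMap::new();
    for (entity, client_id, visible) in proximity_changes {
        changes.entry(client_id).or_default().push((entity, visible));
    }
    // ...then apply them in a single pass over the clients.
    for client in replicated_clients.iter_mut() {
        if let Some(entries) = changes.remove(&client.id()) {
            for (entity, visible) in entries {
                client.visibility_mut().set_visibility(entity, visible);
            }
        }
    }
}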

I suggest making clients a HashMap to enable a broader range of use-cases. I am open to fixing this myself if there is no real reason against the suggested approach.

@Shatur
Contributor

Shatur commented Sep 12, 2024

Hi! I used a Vec since I heavily iterate over the clients inside the replication system, so a HashMap would affect replication performance.

But I also see your problem. I need to think about it. Maybe you could iterate over clients first and cache the necessary data on your side?

@UkoeHB
Collaborator

UkoeHB commented Sep 12, 2024

bevy_replicon_attributes might be a good solution to your use-case. You'd probably want to do perf tests comparing it with a raw approach. And also perf test both with few and many clients.

How to implement:

  1. On every entity add a Sector component that contains an id for the section of the map it currently belongs to. You could also do Sectors if your entity can occupy multiple sections.
  2. When an entity's Sector changes, update its visibility condition: vis![Sector(current_sector)]. This will trigger the backend to update its visibility for all clients (those who lost and those who gained visibility).
  3. When a player moves, add and remove Sector visibility attributes based on which sectors it can see that changed from the previous tick (ClientAttributes::add and ClientAttributes::remove).

Since each player and entity will change sectors at most a few times per second, you avoid the worst-case of updating visibility of everything every tick. You also decouple entities from clients, allowing more complex visibility calculations for players (e.g. what if a big rock occludes entities?).
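
A minimal sketch of the sector bookkeeping, assuming square sectors of fixed size; SECTOR_SIZE, Sector, sector_of, and update_sectors are illustrative names, not part of bevy_replicon or bevy_replicon_attributes:

use bevy::prelude::*;

const SECTOR_SIZE: f32 = 64.0;

// Id of the map section an entity currently belongs to (step 1).
#[derive(Component, Clone, Copy, PartialEq, Eq)]
struct Sector(IVec2);

fn sector_of(translation: Vec3) -> Sector {
    Sector((translation.truncate() / SECTOR_SIZE).floor().as_ivec2())
}

fn update_sectors(mut entities: Query<(&Transform, &mut Sector)>) {
    for (transform, mut sector) in &mut entities {
        let new_sector = sector_of(transform.translation);
        if *sector != new_sector {
            *sector = new_sector;
            // Entity changed sectors: update its visibility condition here
            // (step 2); for players, update their attributes instead (step 3).
        }
    }
}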

@phisn
Author

phisn commented Sep 13, 2024

bevy_replicon_attributes might be a good solution to your use-case. You'd probably want to do perf tests comparing it with a raw approach. And also perf test both with few and many clients.


What about players moving frequently across sector borders? In my intended approach I would make entities inside a 3x3 sector range visible and entities outside a 5x5 range invisible. This is possible with your approach if the player keeps ownership over all sectors in a 3x3 range (keep in mind that all these 9 sectors need to look out for a 3x3 range of their own). Since I consider this a hot path, it would be good to have more control over the implementation.
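
For reference, a sketch of that hysteresis in terms of sector coordinates (illustrative, assuming a grid of sector ids like the one above): entities become visible inside the 3x3 neighborhood and only become invisible again once they leave the 5x5 neighborhood, so crossing a single border never toggles visibility back and forth.

use bevy::math::IVec2;

/// Returns Some(new_visibility) if it should change, None otherwise.
fn visibility_change(
    player_sector: IVec2,
    entity_sector: IVec2,
    currently_visible: bool,
) -> Option<bool> {
    let delta = (entity_sector - player_sector).abs();
    let distance = delta.x.max(delta.y); // Chebyshev distance in sectors
    match (currently_visible, distance) {
        (false, d) if d <= 1 => Some(true), // entered the 3x3 range
        (true, d) if d > 2 => Some(false),  // left the 5x5 range
        _ => None,
    }
}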

Hi! I used a Vec since I heavily iterate over the clients inside the replication system, so a HashMap would affect replication performance.
But I also see your problem. I need to think about it. Maybe you could iterate over clients first and cache the necessary data on your side?

I think O(1) access is very useful, since lookups by id are also used inside replicon itself. I would suggest one of the following approaches.

  1. Treat the performance benefit of using a Vec over a HashMap as negligible. When we consider that networking & physics are part of the frame as well, this might be true.
  2. We can store a HashMap alongside the Vec in ReplicatedClients, mapping client id -> index in the Vec. We assume that connecting and disconnecting are relatively rare. Adding is trivial, and removal already has enough overhead that keeping the HashMap up to date is negligible.
  3. We can use the indexmap crate. It provides a hash map with Vec-like iteration performance.
  4. We can keep the clients sorted, allowing lookups via binary search.

I think the least intrusive and easiest to implement is probably 4: we keep the data structure intact and only change the methods add, remove & get. What do you think?

@UkoeHB
Collaborator

UkoeHB commented Sep 13, 2024

What about players moving frequently between sector borders.

It's hard to know unless you test it. The various options are easy to implement, I recommend testing to find what works best. You should also test with a fork of bevy_replicon that provides O(1) lookup for clients to see if it makes a difference.

@Shatur
Contributor

Shatur commented Sep 13, 2024

We can store a HashMap alongside the Vec in ReplicatedClients, mapping client id -> index in the Vec. We assume that connecting and disconnecting are relatively rare. Adding is trivial, and removal already has enough overhead that keeping the HashMap up to date is negligible.

I would go with this one. It's as simple as option 4, but more performant for lookups, and it doesn't require a separate crate.
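
A minimal sketch of that approach, with illustrative field names (client.id() is assumed here): the Vec keeps fast iteration for the replication system while the map adds O(1) lookup by id, and swap_remove keeps removal cheap at the cost of one index fix-up.

use std::collections::HashMap;

pub struct ReplicatedClients {
    clients: Vec<ReplicatedClient>,
    indices: HashMap<ClientId, usize>,
}

impl ReplicatedClients {
    pub fn add(&mut self, client: ReplicatedClient) {
        self.indices.insert(client.id(), self.clients.len());
        self.clients.push(client);
    }

    pub fn remove(&mut self, client_id: ClientId) -> ReplicatedClient {
        let index = self.indices.remove(&client_id).expect("client should exist");
        let removed = self.clients.swap_remove(index);
        // The last client was swapped into `index`; fix up its entry.
        if let Some(swapped) = self.clients.get(index) {
            self.indices.insert(swapped.id(), index);
        }
        removed
    }

    pub fn client(&self, client_id: ClientId) -> &ReplicatedClient {
        &self.clients[self.indices[&client_id]]
    }
}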

@Shatur
Contributor

Shatur commented Sep 13, 2024

@phisn could you submit a PR?

@phisn
Author

phisn commented Sep 17, 2024

I will implement it within the next month.
