# Per Component Visibility #304
Thanks, edited with my recent thoughts I added in Discord :)
Hmm I think the most flexible API would be:
For implementation, we need to be careful about whitelist vs blacklist, and what happens if you do
So you're suggesting a per-component toggle instead of per-group? Like no "Owner" or "Party member" groups, you just set. What do you think about something like this? Two policies:

```rust
client_visibility.visibility_mut(entity) |= GUILD_MEMBER;
client_visibility.visibility_mut(entity) ^= PARTY_MEMBER;
```

And by default we define
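As a minimal sketch of the mask semantics under discussion (the `Visibility` wrapper and its method names are hypothetical; only the `GUILD_MEMBER`/`PARTY_MEMBER` constants come from the snippet above). One subtlety worth noting: `^=` toggles a bit, while `&= !flag` clears it unconditionally:

```rust
const GUILD_MEMBER: u32 = 0b01;
const PARTY_MEMBER: u32 = 0b10;

/// Hypothetical per-entity visibility mask, as discussed above.
struct Visibility(u32);

impl Visibility {
    /// Default: visible to everyone (all bits set).
    fn new() -> Self {
        Visibility(u32::MAX)
    }

    /// Grant a group.
    fn grant(&mut self, group: u32) {
        self.0 |= group;
    }

    /// Clear a group unconditionally (unlike `^=`, which toggles it).
    fn revoke(&mut self, group: u32) {
        self.0 &= !group;
    }

    /// Whether this mask intersects the given group bits.
    fn contains(&self, group: u32) -> bool {
        self.0 & group != 0
    }
}
```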
Honestly I'm confused how this would work. You have a mask on the entity, a mask on the client, and a mask registered per component...?
No, no, in the snippet above you don't set a mask on a client. Only on the component and on the entity. Not entirely sold on this idea, just thinking out loud.
Isn't the goal to have different components visible to different clients? So wouldn't clients need some associated info to do that filtering?
You configure masks for entities inside
Ok that makes sense, basically setting the client's visibility permissions per-entity, and then a component group-based lookup to get permission requirements when replicating an entity's contents. Also, I think it's fine to continue replicating empty entities if a client has visibility permissions for an entity that don't intersect with any of the entity's components.
Yes, and the additional lookup should be cheap since it could be index-based.
You are right! Then we should have component and entity visibility separate. Like you suggested in #304 (comment), but with groups.
Let's summarize.

## Short description

Component visibility will be separate from entity visibility and implemented in the form of groups. After registering a component, the user can assign a visibility mask to it like this (via an extension trait for `App`):

```rust
AppVisibilityExt::set_visibility_mask::<C: Component>(mask: u32) { // ... }
```

Usage example:

```rust
const GUILD_MEMBER: u32 = 0b1;
app.set_visibility_mask::<C>(GUILD_MEMBER);
```

This means that the component will be replicated only where the visibility mask includes `GUILD_MEMBER`. By default, all components are visible on all replicated entities (i.e., all entities have a default mask of all 1's). However, the user will be able to override it:

```rust
ClientVisibility::set_component_visibility(entity: Entity, mask: u32); // Set groups for a specific entity
ClientVisibility::set_default_component_visibility(mask: u32); // Override the default all-1's mask
```

Usage example:

```rust
client_visibility.set_component_visibility(entity, GUILD_MEMBER);
```

If an entity is considered visible but all its components are hidden, an empty entity will be replicated. If an entity is hidden, it won't be replicated even if all its components are visible. Entity visibility therefore takes priority.

## Implementation

### Add an extension trait with a resource

To store component groups, we need to introduce a separate resource that will be used by the aforementioned extension trait.
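A rough sketch of how such an extension method could store masks in the proposed resource. The Bevy plumbing is reduced here to a plain `TypeId`-keyed map; everything except the `set_visibility_mask` name and the all-1's default is an assumption:

```rust
use std::any::TypeId;
use std::collections::HashMap;

/// Stand-in for the proposed resource that stores per-component masks.
#[derive(Default)]
struct ComponentVisibilityMasks {
    masks: HashMap<TypeId, u32>,
}

impl ComponentVisibilityMasks {
    /// Mirrors the proposed `set_visibility_mask::<C>(mask)` extension.
    fn set_visibility_mask<C: 'static>(&mut self, mask: u32) {
        self.masks.insert(TypeId::of::<C>(), mask);
    }

    /// Components without an explicit mask default to "visible everywhere"
    /// (all bits set), matching the all-1's default described above.
    fn mask_of<C: 'static>(&self) -> u32 {
        self.masks.get(&TypeId::of::<C>()).copied().unwrap_or(u32::MAX)
    }
}
```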
### Adjust the archetype cache

The caching happens in:

```rust
pub(super) fn update(&mut self, world: &World, rules: &ReplicationRules) {
```

Here is where the caching is done (bevy_replicon/src/server/replicated_archetypes.rs, lines 83 to 87 at cabebab):

```rust
replicated_archetype.components.push(ReplicatedComponent {
    component_id: fns_info.component_id(),
    storage_type,
    fns_id: fns_info.fns_id(),
});
```
The struct that will get the additional `visibility_group` field (bevy_replicon/src/server/replicated_archetypes.rs, lines 124 to 128 at cabebab):

```rust
pub(super) struct ReplicatedComponent {
    pub(super) component_id: ComponentId,
    pub(super) storage_type: StorageType,
    pub(super) fns_id: FnsId,
}
```
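For illustration, a simplified stand-in for this struct with the proposed `visibility_group` field and the per-client check it would enable (the field types here are placeholders, not the real Bevy IDs):

```rust
/// Sketch of the cached per-archetype component entry with the proposed
/// `visibility_group` field added; `usize` stands in for Bevy's
/// `ComponentId` and `FnsId`.
struct ReplicatedComponent {
    component_id: usize,
    fns_id: usize,
    /// Mask looked up once from the component registry when the
    /// archetype is cached, so replication avoids a per-tick lookup.
    visibility_group: u32,
}

impl ReplicatedComponent {
    /// Per-client check during replication: bitwise AND with the mask
    /// the client holds for the entity being serialized.
    fn visible_to(&self, entity_mask: u32) -> bool {
        self.visibility_group & entity_mask != 0
    }
}
```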
### Changes replication

After adding the necessary API to `ClientVisibility`, we will need to add a check that each component is visible, in addition to the existing entity check here (lines 375 to 378 at cabebab):

```rust
let visibility = client.visibility().cached_visibility();
if visibility == Visibility::Hidden {
    continue;
}
```
Then, in the same function, we need to check whether any component newly became visible on an entity here (lines 415 to 419 at cabebab):

```rust
let new_entity = marker_added || visibility == Visibility::Gained;
if new_entity
    || init_message.entity_data_size() != 0
    || entities_with_removals.contains(&entity.id())
{
```
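Detecting that a component newly gained visibility could be done by keeping the previously processed mask next to the current one and diffing them each tick; a hypothetical sketch:

```rust
/// Per-entity visibility state for one client: the mask processed on
/// the previous network tick and the current one (names hypothetical).
struct EntityMasks {
    old: u32,
    new: u32,
}

impl EntityMasks {
    /// Groups visible now that were not visible on the previous tick.
    /// A component whose `visibility_group` intersects this value must
    /// be sent as if newly added, even though the entity itself isn't new.
    fn gained(&self) -> u32 {
        self.new & !self.old
    }

    /// Advance to the next tick, remembering the processed mask.
    fn tick(&mut self) {
        self.old = self.new;
    }
}
```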
### Removals caching

We cache removals because information about a removal may not survive until the next network tick, and we need removals grouped by entity. Handling the groups will be similar to component removals: do a lookup into `ClientVisibility` and store the group in the map (bevy_replicon/src/server/removal_buffer.rs, lines 103 to 106 at dd85822):

```rust
self.removals
    .entry(entity)
    .or_insert_with(|| self.ids_buffer.pop().unwrap_or_default())
    .insert(component_id);
```
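A sketch of how the removal buffer could additionally record each removed component's visibility group, so replication can later filter removals per client (all types here are stand-ins, not bevy_replicon's):

```rust
use std::collections::HashMap;

/// Sketch of a removal buffer that remembers each removed component's
/// visibility group. `u64` stands in for `Entity`, `usize` for
/// `ComponentId`.
#[derive(Default)]
struct RemovalBuffer {
    removals: HashMap<u64, Vec<(usize, u32)>>, // entity -> (component_id, group)
}

impl RemovalBuffer {
    fn insert(&mut self, entity: u64, component_id: usize, group: u32) {
        self.removals.entry(entity).or_default().push((component_id, group));
    }

    /// Removals that a client holding `entity_mask` should receive.
    fn visible_removals(&self, entity: u64, entity_mask: u32) -> Vec<usize> {
        match self.removals.get(&entity) {
            Some(ids) => ids
                .iter()
                .filter(|&&(_, group)| group & entity_mask != 0)
                .map(|&(id, _)| id)
                .collect(),
            None => Vec::new(),
        }
    }
}
```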
### Removals replication

This will require an additional check here to ensure that the entity is visible to the client (lines 508 to 517 at dd85822):

```rust
for (entity, remove_ids) in removal_buffer.iter() {
    for (message, _) in messages.iter_mut() {
        message.start_entity_data(entity);
        for fns_info in remove_ids {
            message.write_fns_id(fns_info.fns_id())?;
        }
        entities_with_removals.insert(entity);
        message.end_entity_data(false)?;
    }
}
```
### Final thoughts

I think it's pretty complicated for a beginner, so it would be best for me to implement it :D I'm quite busy right now, but I will put it on my TODO list.

If anyone wants to try implementing it, I can't stop you, and I will definitely help and answer any questions about the implementation.
LGTM, will require a lot of careful testing as usual.
I came up with a way to support this. Currently, each client can be assigned "attributes", which are just tags. Entities can then be given "visibility conditions" that evaluate against client attributes to determine visibility. To support component-level visibility granularity, we can allow users to assign multiple visibility conditions to an entity for different component masks. Then, to compute the aggregate mask, evaluate all visibility conditions against the client attributes and XOR together the masks assigned to the conditions that evaluate true. For example, an entity might have
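A minimal sketch of that aggregation with illustrative types (attributes as plain strings, conditions as simple attribute checks; none of these names come from an actual crate). Note that with disjoint masks, the XOR described above behaves the same as OR:

```rust
/// Sketch of the attributes-based idea: each (condition, mask) pair
/// contributes its mask when the condition holds for the client.
/// Names and types are illustrative only.
struct VisibilityCondition {
    required_attribute: &'static str,
    mask: u32,
}

/// XOR together the masks of all conditions that evaluate true for
/// this client, as described above (with disjoint masks, identical
/// to OR-ing them).
fn aggregate_mask(conditions: &[VisibilityCondition], client_attributes: &[&str]) -> u32 {
    conditions
        .iter()
        .filter(|c| client_attributes.contains(&c.required_attribute))
        .fold(0, |acc, c| acc ^ c.mask)
}
```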
Sounds great!
I'd like the ability not only to define which entities are visible to specific clients (possible with `bevy_replicon::server::VisibilityPolicy`) but also to define which components are visible for each client.
An example: a client can see the stats of his own troops but not those of his enemies.
@Shatur and I had a discussion on the Bevy Discord in the bevy_replicon ecosystem-crates channel.
Here is a little summary of what Shatur wrote:
The API was originally suggested by @NiseVoid. The idea is to have component access levels via bitmasks, like in a physics engine. Users define their meaning. Some examples:
To achieve this, we assign a mask to each component, like "send this component only to the owner and party members", and assign masks to client entities.
To achieve this, we can turn the HashSet for the whitelist policy into a HashMap, and add an additional HashMap for the blacklist policy. Both maps will map an entity to its mask for a client. We also need to store the last processed value in order to detect changes, so the map will be entity -> (mask, mask).
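A sketch of that whitelist shape, with `u64` standing in for `Entity` and the (last processed, current) pair used for change detection (all names here are hypothetical):

```rust
use std::collections::HashMap;

/// Whitelist storage as described: each visible entity maps to
/// (last processed mask, current mask).
#[derive(Default)]
struct Whitelist {
    masks: HashMap<u64, (u32, u32)>,
}

impl Whitelist {
    /// Set the current mask for an entity, keeping the last processed one.
    fn set(&mut self, entity: u64, mask: u32) {
        self.masks.entry(entity).or_insert((0, 0)).1 = mask;
    }

    /// Visibility changed if the current mask differs from the one
    /// processed on the previous tick.
    fn changed(&self, entity: u64) -> bool {
        self.masks.get(&entity).map_or(false, |(last, current)| last != current)
    }

    /// Mark everything as processed for the next tick.
    fn flush(&mut self) {
        for (last, current) in self.masks.values_mut() {
            *last = *current;
        }
    }
}
```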
Should this API be per component or per component group (i.e. per replication rule)?
@UkoeHB and @NiseVoid what do you think of this proposal?