Uncouple BeaconBlockAndBlobsSidecarByRoot #3139
Conversation
I don't think we should go back to uncoupling and dealing with the race condition to resolve this edge case. The con seems to outweigh the pro. As for the edge case, we could use |
Can you link to the relevant discussion on why this was coupled if it was originally decoupled? If you block until both requests are complete, there should not be race conditions.
https://notes.ethereum.org/RLOGb1hYQ0aWt3hcVgzhgQ?
That's a decent divergence from current behavior (i.e. one request vs two requests), with additional complexities such as peer rotations. Curious what other teams think @realbigsean @tbenr
Discussion for by-range is here: #3087. I think the discussion for by-root was in Discord and, to be honest, it was much more brief. I think we assumed this API was more straightforward.
This is how we would handle it in lighthouse if the requests were decoupled. I think my view here is about the same as with by-range requests. It'd be nice for the APIs to reflect this internal coupling if all clients are planning to do the same. Are you planning to do differently in lodestar? Internally we're actually using |
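To make the "block until both requests are completed" point concrete, here is a minimal sketch of how a client could handle decoupled by-root requests: issue the block and sidecar requests in parallel and join on both before importing, so the importer never sees a half-complete pair. All function names (`request_block_by_root`, `request_blobs_by_root`, `fetch_and_import`) are illustrative stand-ins, not actual client APIs.

```python
import asyncio

# Hypothetical stand-ins for the two decoupled by-root requests.
async def request_block_by_root(root: bytes) -> str:
    return f"block({root.hex()})"

async def request_blobs_by_root(root: bytes) -> str:
    return f"blobs({root.hex()})"

async def fetch_and_import(root: bytes) -> tuple:
    # Issue both requests concurrently; gather() waits for both to
    # complete, so import never observes a block without its sidecar
    # (or vice versa), avoiding the race condition discussed above.
    block, blobs = await asyncio.gather(
        request_block_by_root(root),
        request_blobs_by_root(root),
    )
    return block, blobs

result = asyncio.run(fetch_and_import(b"\x01" * 32))
```

Error handling (one request failing or timing out while the other succeeds) is where the real complexity lives; this sketch only shows the join.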
On the networking side, I'm experimenting with the "Union" blockByRoot. Even if technically possible, the implementation turned out to be more complex than expected; still experimenting. Internally I approached the fully coupled version. I ended up implementing a sort of "Union", but there are several interferences and too many code changes that might be reverted if we decouple again in the future. So I'll also investigate fully decoupled processing.
It sounded on the call like people preferred an error code to resolve the edge cases? This seems reasonable to me. I think we should keep this coupled for devnet v3, but it's possible we'll have to decouple it if we observe big blocks on mainnet leading to significant block latencies, because that would mean we may want to decouple gossip (as much as I wouldn't want to) in order to optimize there. @AgeManning has been looking at how message size and latency relate on mainnet gossip today, and is seeing something like 50kb compressed block sizes having around 200-400ms of delay on gossip, versus a 500b aggregate-and-proof with <50ms (correct me if I'm wrong, Age).
Are these delays per hop?
The delays are measured over roughly a slot. It's a measure of how delayed the duplicates we see on gossipsub are for an individual message. For the block topic, this doesn't measure the delay in the block arriving (I think most clients have this metric), but rather: once we receive a block, how long does it take our other peers to deliver us the same block over the mesh? I guess it's closer to a measure of the variance of message propagation on gossipsub. The other interesting metric, which is highly correlated with this, is the % of messages on a topic that we receive an IHAVE for before we see them on the mesh. This percentage is much higher on the block topic (because of the message size), i.e. if it takes longer than the heartbeat time to send a message to all your mesh peers, you might just send the IHAVE msg-id beforehand.
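The two metrics described above can be sketched as follows, on synthetic data: (1) how long after the first receipt duplicates of the same message keep arriving over the mesh, and (2) the fraction of messages for which an IHAVE was seen before mesh delivery. The data shapes and function names here are assumptions for illustration, not a real gossipsub API.

```python
def duplicate_delays(arrivals):
    """arrivals: {msg_id: sorted receipt times in ms, first entry is the
    initial delivery}. Returns the delay of every duplicate relative to
    the first time we saw that message."""
    delays = []
    for times in arrivals.values():
        first = times[0]
        delays.extend(t - first for t in times[1:])
    return delays

def ihave_first_fraction(ihave_times, mesh_times):
    """Fraction of messages whose IHAVE announcement preceded the full
    mesh delivery; both args map msg_id -> receipt time in ms."""
    common = ihave_times.keys() & mesh_times.keys()
    if not common:
        return 0.0
    early = sum(1 for m in common if ihave_times[m] < mesh_times[m])
    return early / len(common)

# Synthetic arrivals: duplicates of block_a trail the first copy by
# 210ms and 390ms, consistent with the delays quoted for large blocks.
arrivals = {"block_a": [0, 210, 390], "block_b": [0, 250]}
delays = duplicate_delays(arrivals)

# block_a's IHAVE (50ms) beat the mesh delivery (120ms); block_b's did not.
frac = ihave_first_fraction({"block_a": 50, "block_b": 300},
                            {"block_a": 120, "block_b": 260})
```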
This seems reasonable -- anyone want to take a stab at a PR? @realbigsean @dapplion ? |
In general, I think coupling goes against the grain of what the sidecar proposal is about. While coupling might work for the 80% case, we will keep running into issues in the margin where coupling is not in sympathy with the underlying data structure, and this will keep repeating in various contexts: we're serializing something that is inherently parallelizable. It might be the case that the first "rough cut" version of the feature will be implemented mostly-coupled internally, but when we have to process more and more data, we will want to remove inefficiencies, and coupling introduces such an artificial inefficiency that fundamentally does not need to be there. +1 for any uncoupling, at all levels.
Chiming in to agree with @arnetheduck: having a single bundled block and sidecar results in a heavy object to pass around the network, and the assumptions that will inevitably be made around this object will make it much harder to separate block and sidecar in future (e.g. propagating over something other than the existing p2p), which seems totally at odds with the idea of having a sidecar in the first place.
Closing in favor of #3244.
Clients must still not prune unfinalized blobs, else they can't import a block older than MIN_EPOCHS_FOR_BLOBS_SIDECARS_REQUESTS. So it's not unreasonable to couple these two methods, but the coupling still has side effects. Practically, clients can couple them in their implementations, making the two quasi-identical once EIP-4844 is finalized.
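The pruning constraint above can be sketched as a guard: a blob may only be pruned once it is both outside the serving window and finalized, so unfinalized blobs survive even past the window. The helper name and the constant value below are illustrative assumptions, not spec text.

```python
SLOTS_PER_EPOCH = 32
# Illustrative value; the actual constant is defined by the EIP-4844 specs.
MIN_EPOCHS_FOR_BLOBS_SIDECARS_REQUESTS = 4096

def may_prune_blobs(blob_slot: int, current_slot: int, finalized_slot: int) -> bool:
    """Hypothetical prune check: a blob is prunable only if it is both
    older than the by-root/by-range serving window AND finalized."""
    retention_slots = MIN_EPOCHS_FOR_BLOBS_SIDECARS_REQUESTS * SLOTS_PER_EPOCH
    outside_window = blob_slot < current_slot - retention_slots
    finalized = blob_slot <= finalized_slot
    # An unfinalized blob is never pruned, even outside the window,
    # because the block it belongs to may still need to be imported.
    return outside_window and finalized
```

With `current_slot = 200_000` the window cutoff falls at slot `68_928`; a blob at slot `50_000` is prunable only if finality has passed it.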