[mq pallet] Custom next queue selectors #6059
Open
+393
−7
Changes:

Expose a force_set_head function from the MessageQueue pallet via a new trait: ForceSetHead. This can be used to force the MQ pallet to process a given queue next.
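As a rough idea of the mechanism, here is a standalone sketch. Only the names `ForceSetHead` and `force_set_head` come from this PR; the `&mut self` receiver, the `u32` queue ids, and the ring-rotation model below are simplifications for illustration, not the pallet's actual implementation (which tracks a service head over its ready ring of queues and accounts for weight):

```rust
/// Illustrative sketch only: a trait that lets outside code pick which
/// queue the message-queue processor services next.
trait ForceSetHead {
    /// Make `queue` the next queue to be serviced.
    fn force_set_head(&mut self, queue: u32);
}

/// Toy stand-in for the MQ pallet's ready ring of queues.
struct ReadyRing {
    queues: Vec<u32>,
}

impl ForceSetHead for ReadyRing {
    fn force_set_head(&mut self, queue: u32) {
        // Rotate the ring so `queue` becomes the head, i.e. is processed next.
        if let Some(pos) = self.queues.iter().position(|&q| q == queue) {
            self.queues.rotate_left(pos);
        }
    }
}

fn main() {
    let mut ring = ReadyRing { queues: vec![0, 1, 2] }; // e.g. 0 = Relay
    ring.force_set_head(2);
    assert_eq!(ring.queues, vec![2, 0, 1]); // queue 2 is now at the head
    println!("head is now queue {}", ring.queues[0]);
}
```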
Context:

For the Asset Hub Migration (AHM) we need a mechanism to prioritize the inbound upward messages and the inbound downward messages on the Asset Hub. To achieve this, a minimal (and non-breaking) change is made to the MQ pallet in the form of adding the force_set_head function.

An example of how to achieve prioritization is demonstrated in integration_test.rs::AhmPrioritizer. Normally, all queues are scheduled round-robin like this:

| Relay | Para(1) | Para(2) | ... | Relay | ...
The prioritizer listens to changes to its queue and triggers if either:

- its queue changed, or
- n blocks have passed (to prevent starvation if there are too many other queues)

In either situation, it schedules the queue for a streak of three consecutive blocks, such that the schedule becomes:
| Relay | Relay | Relay | Para(1) | Para(2) | ... | Relay | Relay | Relay | ...
This basically transforms the round-robin into an elongated round-robin. Although different strategies can be injected into the pallet at runtime, this one seems to strike a good balance between general service level and prioritization.
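The effect on the schedule can be simulated in a few lines of Rust. The queue names and the streak length of three are taken from the description above; the scheduler itself is a toy model of the elongated round-robin, not the AhmPrioritizer's actual code:

```rust
/// Toy model: whenever the prioritized queue comes up in the round-robin,
/// it is serviced for a `streak` of consecutive blocks before the normal
/// rotation resumes.
fn schedule(queues: &[&str], prioritized: &str, streak: usize, blocks: usize) -> Vec<String> {
    let mut out = Vec::new();
    let mut i = 0;
    while out.len() < blocks {
        let q = queues[i % queues.len()];
        if q == prioritized {
            // The prioritizer keeps forcing this queue to the head,
            // so it stays scheduled for `streak` blocks in a row.
            for _ in 0..streak {
                if out.len() == blocks {
                    break;
                }
                out.push(q.to_string());
            }
        } else {
            out.push(q.to_string());
        }
        i += 1;
    }
    out
}

fn main() {
    let seq = schedule(&["Relay", "Para(1)", "Para(2)"], "Relay", 3, 8);
    // Relay is serviced three blocks in a row before the paras get a turn:
    assert_eq!(
        seq,
        ["Relay", "Relay", "Relay", "Para(1)", "Para(2)", "Relay", "Relay", "Relay"]
    );
    println!("{:?}", seq);
}
```

With `streak = 1` the same function reproduces the plain round-robin shown earlier, which is why this reads as an "elongated" variant of it.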