
implement rpc method that fetches fee per mass based on block template #503

Closed
wants to merge 17 commits

Conversation

biryukovmaxim
Collaborator

The handler builds a block template and calculates the max/min/median fee-per-mass values.
Closes #501

@aspect
Collaborator

aspect commented Jul 4, 2024

This API call should track the last request timestamp and return a cached value if this timestamp is within some threshold range (like 1 second). Due to the potential high frequency of this call invocation (easily a few thousand requests per second), we want to save compute and return the last calculated value.
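A minimal sketch of the time-gated caching being suggested here. The FeeInfo and FeeInfoCache names, the field set, and get_or_compute are hypothetical illustrations, not code from this PR:

use std::sync::Mutex;
use std::time::{Duration, Instant};

// Hypothetical aggregate; stands in for whatever the handler actually returns.
#[derive(Clone, Debug)]
struct FeeInfo {
    max: f64,
    median: f64,
    min: f64,
    mempool_total_mass: u64,
}

// Time-gated cache: recompute at most once per `ttl`, otherwise serve the last value.
struct FeeInfoCache {
    ttl: Duration,
    slot: Mutex<Option<(Instant, FeeInfo)>>,
}

impl FeeInfoCache {
    fn new(ttl: Duration) -> Self {
        Self { ttl, slot: Mutex::new(None) }
    }

    fn get_or_compute(&self, compute: impl FnOnce() -> FeeInfo) -> FeeInfo {
        let mut guard = self.slot.lock().unwrap();
        if let Some((stamp, cached)) = guard.as_ref() {
            if stamp.elapsed() < self.ttl {
                return cached.clone();
            }
        }
        let fresh = compute(); // e.g. build the block template and aggregate fee/mass
        *guard = Some((Instant::now(), fresh.clone()));
        fresh
    }
}

The handler would then call something like cache.get_or_compute(|| expensive_template_path()) with a ttl of Duration::from_secs(1), matching the one-second threshold suggested above.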

biryukovmaxim marked this pull request as ready for review July 7, 2024 12:05
oneof fee_per_mass {
    Virtual virtual = 2; // 1 is reserved for `All`
}
uint64 mempool_total_mass = 11;
Contributor

What would be the usage of total mass from the client side? I'm not sure we want to commit ourselves to such an API. I have another suggestion for a more abstracted response

Collaborator Author

Based on this value we can track both: absolute values, and values relative to BPS and the block mass limit. That's more flexible.

Contributor

Wdyt @aspect?

rpc/service/src/service.rs (outdated, resolved)

#[derive(Debug, Default)]
struct EstimatedFeeCache {
    min: portable_atomic::AtomicF64,
Contributor

Can you explain the reasoning behind using multiple float atomics rather than an atomic arc holding the final response object? (also - adding another lib dep is another negative point for using atomic floats)

Collaborator Author

Can you explain the reasoning behind using multiple float atomics rather than an atomic arc holding the final response object? (also - adding another lib dep is another negative point for using atomic floats)

  1. Arc will allocate it on the heap.
  2.1) That's not a new dependency in the codebase, it's only new to the rpc crate.
  2.2) Of course I can make it a u64 and convert it to/from float with from_bits/to_bits, but what's the reason to have boilerplate code like that in our codebase when portable-atomic already has it?
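For context, a sketch of the from_bits/to_bits boilerplate mentioned in point 2.2 (an illustration, not code from this PR): an f64 stored atomically through its raw bit pattern, which is roughly what portable_atomic::AtomicF64 wraps on platforms with native 64-bit atomics.

use std::sync::atomic::{AtomicU64, Ordering};

// Store an f64 atomically by round-tripping through its u64 bit pattern.
#[derive(Debug, Default)]
struct AtomicF64(AtomicU64);

impl AtomicF64 {
    fn store(&self, value: f64, order: Ordering) {
        self.0.store(value.to_bits(), order);
    }

    fn load(&self, order: Ordering) -> f64 {
        f64::from_bits(self.0.load(order))
    }
}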

Contributor

I don't think such a heap allocation (very small, once per second) is a real consideration, and IMHO it will simplify the code.
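A sketch of the suggested shape, with a placeholder response type (all names here are illustrative): keep the last full response behind an Arc and swap it wholesale on refresh, so readers only clone an Arc and no per-field float atomics are needed.

use std::sync::{Arc, RwLock};
use std::time::Instant;

// Placeholder for the full RPC response object (fields elided).
#[derive(Debug)]
struct FeeInfoResponse { /* max, median, min, mempool_total_mass, ... */ }

// One slot holding the last computed response; refreshed wholesale once per
// interval, while readers just clone the Arc.
#[derive(Default)]
struct ResponseCache {
    slot: RwLock<Option<(Instant, Arc<FeeInfoResponse>)>>,
}

impl ResponseCache {
    fn publish(&self, response: FeeInfoResponse) {
        *self.slot.write().unwrap() = Some((Instant::now(), Arc::new(response)));
    }

    fn latest(&self) -> Option<(Instant, Arc<FeeInfoResponse>)> {
        self.slot.read().unwrap().clone()
    }
}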

Collaborator

@coderofstuff left a comment

Tested via wrpc and looks to be working fine

Not a blocker, but I think it needs to be thought about in terms of dev UX: the term used here is "Priority" fee, but the actual calculation is done over real fees in the mempool.

This can be confusing because:

  • when the mempool is empty, all the values are zero, which makes sense: there is nothing to add on top of your regular tx fees to get it included immediately
  • when the mempool is congested, the values will be non-zero. As a developer using an API that says "Priority" fee, this can be read as a fee I have to pay on top of the regular tx fees.

wasm/build/docs/typedoc.json (resolved)
rpc/grpc/core/proto/messages.proto (outdated, resolved)
rpc/grpc/core/proto/messages.proto (outdated, resolved)
#[derive(Clone, Copy, Debug, Default, Serialize, Deserialize, BorshSerialize, BorshDeserialize)]
#[serde(rename_all = "camelCase")]
pub struct VirtualFeePerMass {
    pub max: f64,
Collaborator

Do these still represent any kind of SLA? If so, please add comments as to what these would mean in terms of time-to-inclusion

Collaborator Author

I renamed the method.
This value represents the maximum fee/mass in the current mempool.
I will add some description from an internal discussion:

What I came to is this:
When the network is overloaded, we need to know it, and the values will mean roughly:
min from BBT (the block template) - there's a chance my tx will be included
something in the middle - the tx will likely be included
max - very likely

In case of normal load:
max - it's almost included, check your balance
middle - very likely
min - likely
compute fee only - there's a chance

For that we only need fee_per_mass based on BBT - already in the PR.

As a flag for whether the mempool is overloaded, I want to calculate and expose mempool_total_mass.

Let's call mempool_total_mass / (block_mass_limit * bps) the mempool load factor, Mlf.

If Mlf < 1 (the mempool is underloaded), then even a tx with compute fee only will be included.

Mlf in [1, 1.5) - normal mempool load
Mlf > 1.5 - overloaded
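A small sketch of the load-factor classification described above. The function and enum names are illustrative, not part of the PR:

// Mempool load factor as described above:
// Mlf = mempool_total_mass / (block_mass_limit * bps).
#[derive(Debug, PartialEq)]
enum MempoolLoad {
    Underloaded, // Mlf < 1: even a compute-fee-only tx should get in
    Normal,      // Mlf in [1, 1.5)
    Overloaded,  // Mlf >= 1.5
}

fn mempool_load_factor(mempool_total_mass: u64, block_mass_limit: u64, bps: u64) -> f64 {
    mempool_total_mass as f64 / (block_mass_limit as f64 * bps as f64)
}

fn classify_load(mlf: f64) -> MempoolLoad {
    if mlf < 1.0 {
        MempoolLoad::Underloaded
    } else if mlf < 1.5 {
        MempoolLoad::Normal
    } else {
        MempoolLoad::Overloaded
    }
}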

@biryukovmaxim
Collaborator Author

The term used here is "Priority" fee, but the actual calculation is done over real fees in the mempool.

I renamed the method. It should be clearer now.

Collaborator

@coderofstuff left a comment

My initial comments have been addressed. I did some basic sanity tests and the RPC API appears to be working:

$ connect 127.0.0.1
$ rpc get-fee-info
Ok(
    GetFeeInfoResponse {
        fee_per_mass: VirtualFeePerMass(
            VirtualFeePerMass {
                max: 0.0,
                median: 0.0,
                min: 0.0,
            },
        ),
        mempool_total_mass: 0,
    },
)

And I get non-zero values when the mempool has entries (like in TN11):

$ rpc get-fee-info
Ok(
    GetFeeInfoResponse {
        fee_per_mass: VirtualFeePerMass(
            VirtualFeePerMass {
                max: 6.21922511034821,
                median: 6.21922511034821,
                min: 6.21922511034821,
            },
        ),
        mempool_total_mass: 34663,
    },
)

I have a few more questions related to dev UX:

  1. How does a developer use these figures? Create their TX, calculate its mass and then calculate their fee by multiplying the mass and the min, mid, max figures they get here?
  2. Is the figure returned in KAS/mass or sompi/mass? If it's in KAS, would it be better to return it in sompi and then truncate to integers? The reason being that the eventual actual fee will be in integers, not floating-point numbers.
    • If we must use floats, we need to communicate clearly to the API user that they should truncate the decimals when presenting fees (or, if they present fees in KAS, use only up to 8 decimal places).

@biryukovmaxim
Collaborator Author

  1. How does a developer use these figures? Create their TX, calculate its mass and then calculate their fee by multiplying the mass and the min, mid, max figures they get here?
  1. mempool_total_mass represents the network load. Based on it, the developer picks the min/mid/max/0 value and multiplies it by the tx mass.
  2. It's in sompi. It could be micro-sompi if we want to avoid floats, or a ratio (numerator and denominator).
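To illustrate the flow from the answers above: pick a fee-per-mass tier from the RPC values based on mempool load, multiply by the tx mass, and round up to whole sompi. The types and the tier policy below are illustrative only (MempoolLoad repeats the earlier load-factor sketch):

// Fee-per-mass values as returned by the RPC call (sompi per mass unit, per the reply above).
struct FeePerMass {
    min: f64,
    median: f64,
    max: f64,
}

// Illustrative load classification (see the earlier load-factor sketch).
enum MempoolLoad {
    Underloaded,
    Normal,
    Overloaded,
}

// One possible policy: pay nothing extra when the mempool is underloaded,
// the median rate under normal load, and the max rate when overloaded
// (a sender willing to wait could pick `min` instead).
fn required_fee_sompi(rates: &FeePerMass, load: MempoolLoad, tx_mass: u64) -> u64 {
    let rate = match load {
        MempoolLoad::Underloaded => 0.0,
        MempoolLoad::Normal => rates.median,
        MempoolLoad::Overloaded => rates.max,
    };
    // Round up to a whole sompi amount, since the actual fee is an integer.
    (rate * tx_mass as f64).ceil() as u64
}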

Successfully merging this pull request may close these issues.

Implement Dynamic Network fee estimation