Help with size based eviction that weighs with bytes #201

Open
BlinkyStitt opened this issue Nov 16, 2022 · 2 comments
BlinkyStitt commented Nov 16, 2022

For my project, I want to add cache size limits that are set automatically based on system memory. Are there any examples of someone doing this?

I'm not sure how to accurately calculate the size in bytes of adding an item. It's going to be something like size_of(key) + size_of(value) + overhead. What is the overhead? And is this the right idea?

Also, my Cache's values are serde_json::Value and I'm not sure how to calculate their size in memory. Maybe that's a better question for the serde issue tracker, though.

I know that checking the size of things on the heap isn't cheap, and that the weigher is designed more for unit-less weights. But I just want to keep my program from OOMing if the cache fills up with large values.
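
Roughly, I'm imagining something like this sketch (using the sysinfo crate, which reports total memory in bytes in recent versions; the 25% budget is just an arbitrary placeholder):

use moka::sync::Cache;
use sysinfo::System;

fn main() {
    // Total system memory; recent sysinfo versions report bytes.
    let mut sys = System::new_all();
    sys.refresh_memory();

    // Arbitrary example policy: budget 25% of total RAM for the cache.
    let max_bytes = sys.total_memory() / 4;

    // Weigh each entry by its approximate size in bytes, so that
    // max_capacity acts as a byte budget.
    let cache: Cache<String, Vec<u8>> = Cache::builder()
        .max_capacity(max_bytes)
        .weigher(|k: &String, v: &Vec<u8>| {
            let bytes = k.capacity() as u64 + v.capacity() as u64;
            bytes.min(u32::MAX as u64) as u32 // weighers return u32
        })
        .build();

    cache.insert("greeting".to_string(), vec![0u8; 1024]);
}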

@tatsuya6502 tatsuya6502 self-assigned this Nov 17, 2022
@tatsuya6502 tatsuya6502 added the “question” (Further information is requested) label Nov 17, 2022

tatsuya6502 (Member) commented Nov 17, 2022

Hi.

It's going to be something like size_of(key) + size_of(value) + overhead. What is the overhead? And is this the right idea?

Yes, it is the right idea.

Overhead per Entry

On 64-bit platforms, the overhead per cache entry is as follows:

Write-order queue?   Overhead (bytes per entry) *2
Disabled             152
Enabled (*1)         184

*1: The write-order queue is enabled if one or both of the following CacheBuilder methods were called:

  • time_to_live
  • support_invalidation_closures

*2: The overhead becomes larger than the numbers listed here if the crate feature quanta and/or atomic64 is disabled. (Both are enabled by default.)

[Image: moka v0.7.2 memory overhead per entry]
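
So a byte-based weigher could fold this per-entry overhead in. A minimal sketch (build_byte_weighed_cache is just an illustrative helper, not moka API; it assumes String keys and values and the write-order queue disabled):

use moka::sync::Cache;

// Per-entry overhead on a 64-bit platform with the write-order queue
// disabled (see the table above); use 184 when it is enabled.
const ENTRY_OVERHEAD: u64 = 152;

// Hypothetical helper: build a cache whose max_capacity is a byte budget.
fn build_byte_weighed_cache(max_bytes: u64) -> Cache<String, String> {
    Cache::builder()
        .max_capacity(max_bytes)
        .weigher(|k: &String, v: &String| {
            // Heap bytes of the key and value plus the fixed per-entry
            // overhead, clamped because moka weighers return u32.
            let bytes = k.capacity() as u64 + v.capacity() as u64 + ENTRY_OVERHEAD;
            bytes.min(u32::MAX as u64) as u32
        })
        .build()
}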

Overhead of the Whole Cache

The LFU Filter (CountMin Sketch)

When the cache becomes half-full, the LFU filter (aka CountMin Sketch) is created for the cache.

The size of the filter is determined by an estimated maximum number of entries in the cache, which is computed with the following code:

// `current_num_entries`, `current_total_weighted_size`, and `max_capacity`
// are the cache's internal counters at the time the filter is created.
let estimated_max_num_entries: u64 = ((current_num_entries as f64
    * (current_total_weighted_size as f64 / max_capacity as f64))
    as u64)
    .max(128); // minimum 128 entries.

The byte size of the LFU filter can be calculated by the following code:

// The internal table is a Box<[u64]>, so each slot is 8 bytes;
// capped at 2^30 slots, i.e. a maximum of 8 GiB on 64-bit platforms.
let byte_size = 8 *
    estimated_max_num_entries
        .min(2u64.pow(30))
        .next_power_of_two();

Here are some examples:

Estimated max number of entries   Byte size       MiB (approx.)
1 million                         8,388,624       8 MiB
5 million                         67,108,880      64 MiB
10 million                        134,217,744     128 MiB
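
For reference, the two snippets combine into a standalone calculation like the sketch below (lfu_filter_byte_size is an illustrative name, not moka's API; note that the figures in the table include an extra 16 bytes of fixed overhead on top of the raw formula output):

fn lfu_filter_byte_size(
    current_num_entries: u64,
    current_total_weighted_size: u64,
    max_capacity: u64,
) -> u64 {
    // Estimated max number of entries, minimum 128 (formula above).
    let estimated_max_num_entries: u64 = ((current_num_entries as f64
        * (current_total_weighted_size as f64 / max_capacity as f64))
        as u64)
        .max(128);

    // Box<[u64]> table: 8 bytes per slot, capped at 2^30 slots (8 GiB).
    8 * estimated_max_num_entries
        .min(2u64.pow(30))
        .next_power_of_two()
}

fn main() {
    // A fully loaded cache holding 1 million entries needs an
    // 8 * 2^20 = 8,388,608-byte (~8 MiB) table.
    println!("{}", lfu_filter_byte_size(1_000_000, 1, 1));
}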

EDIT: Fixed the formula for the LFU filter. I forgot to multiply by 8, as the internal table is Box<[u64]>, not Box<[u8]>.


tatsuya6502 (Member) commented Nov 17, 2022

Also, my Cache's values are serde_json::Value and I'm not sure how to calculate their size in memory. Maybe that's a better question for the serde issue tracker, though.

If I understand the internal data structure of serde_json::Value correctly, the following function can calculate the byte size of a value. You may want to ask the serde_json maintainers whether this function makes sense.

use serde_json::Value;

/// Recursively approximates the in-memory size of a serde_json::Value,
/// including its heap allocations.
fn size_of_json_val(json: &Value) -> usize {
    use serde_json::Value::*;
    use std::mem::size_of;

    let v_size = size_of::<Value>();
    match json {
        Null | Bool(_) | Number(_) => v_size,
        String(str) => v_size + str.capacity(),
        Array(vec) => {
            v_size
                + vec.iter().map(size_of_json_val).sum::<usize>()
                + v_size * (vec.capacity() - vec.len())
        }
        Object(map) => {
            let s_size = size_of::<std::string::String>();
            v_size
                + map
                    .iter()
                    .map(|(k, v)| s_size + k.capacity() + size_of_json_val(v))
                    .sum::<usize>()
            // Can't do the following because `serde_json::Map` does not have `capacity` method.
            // + (s_size + v_size) * (map.capacity() - map.len())
        }
    }
}

EDIT: Fixed the formula for Object(map) by adding s_size.
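
As a usage illustration (my own sketch, assuming the size_of_json_val function above is in scope), it slots straight into a byte-based weigher; the 64 MiB budget and the 152-byte per-entry overhead are example figures taken from earlier in this thread:

use moka::sync::Cache;
use serde_json::{json, Value};

fn main() {
    let cache: Cache<String, Value> = Cache::builder()
        // 64 MiB byte budget (example figure).
        .max_capacity(64 * 1024 * 1024)
        .weigher(|k: &String, v: &Value| {
            // Key bytes + recursive value size + per-entry overhead
            // (152 bytes on 64-bit with the write-order queue disabled).
            let bytes = k.capacity() as u64 + size_of_json_val(v) as u64 + 152;
            bytes.min(u32::MAX as u64) as u32
        })
        .build();

    let v = json!({ "name": "moka", "tags": ["cache", "tiny-lfu"] });
    println!("approx. value size: {} bytes", size_of_json_val(&v));
    cache.insert("example".to_string(), v);
}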
