Help with size based eviction that weighs with bytes #201
Hi.

Yes, it is the right idea.

**Overhead per Entry**

The overhead per cache entry will be the following on 64-bit platforms:

*1: The write-order queue is enabled if one or both of the following

*2: Overhead becomes bigger than the numbers listed here if the crate feature

**Overhead of the Whole Cache**

**The LFU Filter (CountMin Sketch)**

When the cache becomes half-full, the LFU filter (aka CountMin Sketch) will be created for the cache. The size of the filter is decided by an estimated max number of entries in the cache:

```rust
let estimated_max_num_entries: u64 = ((current_num_entries as f64
    * (max_capacity as f64 / current_total_weighted_size as f64))
    as u64)
    .max(128); // Minimum 128 entries.
```

The byte size of the LFU filter can be calculated by the following code:

```rust
// Maximum 8 GiB on 64-bit platforms.
let byte_size = 8 * estimated_max_num_entries
    .min(2u64.pow(30))
    .next_power_of_two();
```

EDIT: Fixed the formula for the LFU filter. Forgot to multiply by 8, as the internal table is an array of `u64`s.
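To make the sizing formulas above concrete, here is a runnable sketch. It assumes the estimate extrapolates the current entry count by `max_capacity / current_total_weighted_size`; the input numbers (entry count, weighted size, capacity) are made up for illustration.

```rust
// Sketch of the LFU-filter sizing described above.
fn estimated_max_num_entries(
    current_num_entries: u64,
    current_total_weighted_size: u64,
    max_capacity: u64,
) -> u64 {
    (((current_num_entries as f64)
        * (max_capacity as f64 / current_total_weighted_size as f64)) as u64)
        .max(128) // Minimum 128 entries.
}

fn lfu_filter_byte_size(estimated_max_num_entries: u64) -> u64 {
    // 8 bytes per slot; slot count is capped at 2^30 (8 GiB) on 64-bit platforms.
    8 * estimated_max_num_entries.min(2u64.pow(30)).next_power_of_two()
}

fn main() {
    // Cache is half-full: 50_000 entries weighing 512 MiB of a 1 GiB capacity.
    let est = estimated_max_num_entries(50_000, 512 * 1024 * 1024, 1024 * 1024 * 1024);
    println!("estimated max entries: {est}"); // 100_000
    println!("filter size: {} bytes", lfu_filter_byte_size(est)); // 8 * 131_072 = 1_048_576
}
```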
If I understand the internal data structure of `serde_json::Value` correctly, its in-memory size can be estimated with something like this:

```rust
use serde_json::Value;

fn size_of_json_val(json: &Value) -> usize {
    use serde_json::Value::*;
    use std::mem::size_of;

    let v_size = size_of::<Value>();
    match json {
        Null | Bool(_) | Number(_) => v_size,
        String(s) => v_size + s.capacity(),
        Array(vec) => {
            v_size
                // Elements in use: inline size plus their own heap payloads.
                + vec.iter().map(size_of_json_val).sum::<usize>()
                // Spare capacity: allocated but unused element slots.
                + v_size * (vec.capacity() - vec.len())
        }
        Object(map) => {
            let s_size = size_of::<std::string::String>();
            v_size
                + map
                    .iter()
                    .map(|(k, v)| s_size + k.capacity() + size_of_json_val(v))
                    .sum::<usize>()
            // Can't do the following because `serde_json::Map` does not have
            // a `capacity` method.
            // + (s_size + v_size) * (map.capacity() - map.len())
        }
    }
}
```

EDIT: Fixed the formula.
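The same bookkeeping (inline size + heap payload + spare capacity) can be exercised with a stdlib-only analogue. `deep_size_of_vec` below is a hypothetical helper, not part of any crate; it applies the `Array` branch's accounting to a plain `Vec<String>`:

```rust
use std::mem::size_of;

// Deep size of a `Vec<String>`: the Vec header itself,
// plus each element's inline size and heap payload,
// plus the spare (allocated but unused) slots in the Vec's buffer.
fn deep_size_of_vec(v: &Vec<String>) -> usize {
    size_of::<Vec<String>>()
        + v.iter()
            .map(|s| size_of::<String>() + s.capacity())
            .sum::<usize>()
        + size_of::<String>() * (v.capacity() - v.len())
}

fn main() {
    let mut v: Vec<String> = Vec::with_capacity(4);
    v.push("hello".to_string());
    // On 64-bit, with exact allocations:
    // 24 (Vec header) + (24 + 5) (one String + its bytes) + 24 * 3 (spare slots) = 125.
    println!("{} bytes", deep_size_of_vec(&v));
}
```

Note that, as with `size_of_json_val`, this counts what the container has *allocated* (via `capacity`), not just what it currently uses, which is what matters for avoiding OOM.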
For my project, I want to add cache size limits that are set automatically based on system memory. Are there any examples of someone doing this?

I'm not sure how to accurately calculate the size in bytes of adding an item. It's going to be something like `size_of(key) + size_of(value) + overhead`. What is the overhead? And is this the right idea?

Also, my cache's values are `serde_json::Value` and I'm not sure how to calculate their size in memory. Maybe that's a better question for the serde issue tracker, though.

I know that checking the size of things on the heap isn't cheap, and that the weigher is more designed for unit-less weights. But I just want to keep my program from OOMing if the cache is full of large values.
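Putting the pieces together, a byte-based weigher for a `String -> String` cache might look like the sketch below. `PER_ENTRY_OVERHEAD` is a placeholder constant, not a number from moka's documentation; substitute the per-entry overhead for your cache configuration. The `u32` clamp reflects that weigher closures typically return a `u32`:

```rust
use std::mem::size_of;

// Placeholder; replace with the per-entry overhead for your configuration.
const PER_ENTRY_OVERHEAD: usize = 160;

// Sketch of a byte-based weigher for a String -> String cache.
fn entry_weight(key: &String, value: &String) -> u32 {
    let bytes = size_of::<String>() * 2 // inline parts of key and value
        + key.capacity()                // key's heap buffer
        + value.capacity()              // value's heap buffer
        + PER_ENTRY_OVERHEAD;           // cache-internal bookkeeping
    // Clamp instead of overflowing for pathologically large entries.
    bytes.min(u32::MAX as usize) as u32
}

fn main() {
    let (k, v) = ("user:42".to_string(), "{\"name\":\"x\"}".to_string());
    println!("weight = {} bytes", entry_weight(&k, &v));
}
```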