HOT stands for Hot Object Tracker.
A feature-complete and blazing-fast caching library for Go.
- 🚀 Fast, concurrent
- 💫 Generics
- 🗑️ Eviction policies: LRU, LFU, 2Q, ARC
- ⏰ TTL with jitter
- 🔄 Stale-while-revalidate
- ❌ Missing key caching
- 🍱 Sharded cache
- 🔒 Optional locking
- 🔗 Chain of data loaders with in-flight deduplication
- 🌶️ Cache warmup
- 📦 Batching all the way
- 🧩 Composable caching strategy
- 📝 Optional copy on read and/or write
- 📊 Stat collection
go get github.com/samber/hot
This library is v0 and follows SemVer strictly.
Some breaking changes might be made to exported APIs before v1.0.0.
GoDoc: https://godoc.org/github.com/samber/hot
import "github.com/samber/hot"
// Available eviction policies: hot.LRU, hot.LFU, hot.TwoQueue, hot.ARC
// Capacity: 100k keys/values
cache := hot.NewHotCache[string, int](hot.LRU, 100_000).
Build()
cache.Set("hello", 42)
cache.SetMany(map[string]int{"foo": 1, "bar": 2})
values, missing := cache.GetMany([]string{"bar", "baz", "hello"})
// values: {"bar": 2, "hello": 42}
// missing: ["baz"]
value, found, _ := cache.Get("foo")
// value: 1
// found: true
If a value is not available in the in-memory cache, it will be fetched from a database or any data source.
Concurrent calls to loaders are deduplicated by key.
import "github.com/samber/hot"
cache := hot.NewHotCache[string, *User](hot.LRU, 100_000).
WithLoaders(func(keys []string) (found map[string]*User, err error) {
rows, err := db.Query("SELECT * FROM users WHERE id IN (?)", keys)
// ...
return users, err
}).
Build()
user, found, err := cache.Get("user-123")
// might fail if "user-123" is not in cache and loader returns error
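Since concurrent calls to loaders are deduplicated by key, parallel lookups of the same cold key typically trigger a single round-trip to the data source. A self-contained sketch (the counter and the goroutines are illustrative, not part of the library):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"

	"github.com/samber/hot"
)

func main() {
	var loaderCalls atomic.Int64

	cache := hot.NewHotCache[string, int](hot.LRU, 1_000).
		WithLoaders(func(keys []string) (map[string]int, error) {
			loaderCalls.Add(1) // count round-trips to the data source
			found := make(map[string]int, len(keys))
			for _, k := range keys {
				found[k] = len(k) // placeholder for a real database query
			}
			return found, nil
		}).
		Build()

	// 10 concurrent lookups of the same cold key.
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			cache.Get("user-123")
		}()
	}
	wg.Wait()

	// Thanks to in-flight deduplication and caching, the loader runs
	// far fewer times than the number of concurrent callers.
	fmt.Println(loaderCalls.Load())
}
```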
// get or create
user, found, err := cache.GetWithLoaders(
"user-123",
func(keys []string) (found map[string]*User, err error) {
rows, err := db.Query("SELECT * FROM users WHERE id IN (?)", keys)
// ...
return users, err
},
func(keys []string) (found map[string]*User, err error) {
rows, err := db.Query("INSERT INTO users (id, email) VALUES (?, ?)", id, email)
// ...
return users, err
},
)
// either `err` is not nil, or `found` is true
// missing value vs nil value
user, found, err := cache.GetWithLoaders(
"user-123",
func(keys []string) (found map[string]*User, err error) {
	// value could not be found:
	return map[string]*User{}, nil
	// or, value exists but is nil:
	// return map[string]*User{"user-123": nil}, nil
},
)
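On the caller side, the two cases can be told apart by checking `found` before inspecting the value. A minimal sketch (the `loader` variable is assumed to be defined as above):

```go
user, found, err := cache.GetWithLoaders("user-123", loader)
switch {
case err != nil:
	// the loader chain failed
case !found:
	// the key does not exist at all
case user == nil:
	// the key exists but its value is explicitly nil
default:
	// regular hit
}
```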
import "github.com/samber/hot"
cache := hot.NewHotCache[string, int](hot.LRU, 100_000).
WithTTL(1 * time.Minute). // items will expire after 1 minute
WithJitter(2, 30*time.Second). // optional: randomizes the TTL with an exponential distribution in the range [0, +30s)
WithJanitor(1 * time.Minute). // optional: background job purges expired keys every minute
Build()
cache.SetWithTTL("foo", 42, 10*time.Second) // shorter TTL for "foo" key
With cache revalidation:
loader := func(keys []string) (found map[string]*User, err error) {
rows, err := db.Query("SELECT * FROM users WHERE id IN (?)", keys)
// ...
return users, err
}
cache := hot.NewHotCache[string, *User](hot.LRU, 100_000).
WithTTL(1 * time.Minute).
// Keep serving the cached value for 5 more seconds, but refresh it in the background.
// Keys that are not fetched during the interval will be dropped anyway.
// A timeout or error in loader will drop keys.
WithRevalidation(5 * time.Second, loader).
// On revalidation error, the cache entries are either kept or dropped.
// Optional (default: drop)
WithRevalidationErrorPolicy(hot.KeepOnError).
Build()
If WithRevalidation() is used without loaders, the ones provided in WithLoaders() or GetWithLoaders() are used.
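For example, declaring default loaders with WithLoaders() lets WithRevalidation() be called with just the stale duration; the same loaders are then reused for background refreshes. A sketch, reusing the loader from above:

```go
cache := hot.NewHotCache[string, *User](hot.LRU, 100_000).
	WithLoaders(loader).               // default loader chain
	WithTTL(1 * time.Minute).
	WithRevalidation(5 * time.Second). // no dedicated loader: falls back to the default chain
	Build()
```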
hot.NewHotCache[K, V](algorithm hot.EvictionAlgorithm, capacity int).
// Enables cache of missing keys. The missing cache is shared with the main cache.
WithMissingSharedCache().
// Enables cache of missing keys. The missing keys are stored in a separate cache.
WithMissingCache(algorithm hot.EvictionAlgorithm, capacity int).
// Sets the time-to-live for cache entries
WithTTL(ttl time.Duration).
// Sets the time after which the cache entry is considered stale and needs to be revalidated
// * keys that are not fetched during the interval will be dropped anyway
// * a timeout or error in loader will drop keys.
// If no revalidation loader is provided, the default loaders or the ones passed to GetWithLoaders() are used.
WithRevalidation(stale time.Duration, loaders ...hot.Loader[K, V]).
// Sets the policy to apply when a revalidation loader returns an error.
// By default, the key is dropped from the cache.
WithRevalidationErrorPolicy(policy revalidationErrorPolicy).
// Randomizes the TTL with an exponential distribution in the range [0, +upperBoundDuration).
WithJitter(lambda float64, upperBoundDuration time.Duration).
// Enables cache sharding.
WithSharding(nbr uint64, fn sharded.Hasher[K]).
// Preloads the cache with the provided data.
WithWarmUp(fn func() (map[K]V, []K, error)).
// Disables the mutex for the cache and improves internal performance.
WithoutLocking().
// Enables the cache janitor.
WithJanitor().
// Sets the chain of loaders to use for cache misses.
WithLoaders(loaders ...hot.Loader[K, V]).
// Sets the callback to be called when an entry is evicted from the cache.
// The callback is called synchronously and might block the cache operations if it is slow.
// This implementation choice is subject to change. Please open an issue to discuss.
WithEvictionCallback(hook func(key K, value V)).
// Sets the function to copy the value on read.
WithCopyOnRead(copyOnRead func(V) V).
// Sets the function to copy the value on write.
WithCopyOnWrite(copyOnWrite func(V) V).
// Returns a HotCache[K, V].
Build()
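Putting several builder options together (a sketch; the `loader` variable, capacities, and durations are illustrative):

```go
cache := hot.NewHotCache[string, *User](hot.TwoQueue, 100_000).
	WithMissingCache(hot.LFU, 10_000).   // cache negative lookups separately
	WithTTL(5 * time.Minute).
	WithJitter(2, 30*time.Second).       // spread expirations to avoid stampedes
	WithRevalidation(10 * time.Second).  // serve stale while refreshing
	WithLoaders(loader).
	Build()
```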
Available eviction algorithms:
hot.LRU
hot.LFU
hot.TwoQueue
hot.ARC
Loaders:
func loader(keys []K) (found map[K]V, err error) {
// ...
}
Shard partitioner:
func hash(key K) uint64 {
// ...
}
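For string keys, the stdlib FNV-1a hash is a cheap and well-balanced choice. A self-contained sketch (`hashString` and the shard count of 1_000 are illustrative):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// hashString satisfies the func(k K) uint64 partitioner signature for string keys.
func hashString(key string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(key))
	return h.Sum64()
}

func main() {
	// A sharded cache maps a key to a shard by taking the hash modulo the shard count.
	fmt.Println(hashString("user-123") % 1_000)
	fmt.Println(hashString("user-456") % 1_000)
}
```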
This project is split into multiple layers to respect the separation of concerns.
Each cache layer implements the pkg/base.InMemoryCache[K, V] interface. Combining multiple layers of encapsulation has a small cost (~1ns per call), but offers great customization.
We highly recommend using hot.HotCache[K, V] instead of the lower layers.
This project provides multiple eviction policies. Each implements the pkg/base.InMemoryCache[K, V] interface.
They are not protected against concurrent access. If thread safety is required, wrap them in pkg/safe.SafeInMemoryCache[K comparable, V any].
Packages:
pkg/lru
pkg/lfu
pkg/twoqueue
pkg/arc
Example:
cache := lru.NewLRUCache[string, *User](100_000)
The hot.HotCache[K, V] offers protection against concurrent access by default, but in some cases unnecessary locking might just slow a program down.
Low-level cache layers are not protected by default. Use the following wrapper to bring safety:
import (
"github.com/samber/hot/pkg/lfu"
"github.com/samber/hot/pkg/safe"
)
cache := safe.NewSafeInMemoryCache(
lru.NewLRUCache[string, *User](100_000),
)
A sharded cache might be useful in two scenarios:
- highly concurrent application slowed down by cache locking -> 1 lock per shard instead of 1 global lock
- highly parallel application with no concurrency -> no lock
The sharding key must not be too costly to compute and must offer a nice balance between shards. The hashing function must have the signature func(k K) uint64.
A sharded cache can be created via hot.HotCache[K, V]
or using a low-level layer:
import (
	"hash/fnv"

	"github.com/samber/hot/pkg/base"
	"github.com/samber/hot/pkg/lru"
	"github.com/samber/hot/pkg/safe"
	"github.com/samber/hot/pkg/sharded"
)
cache := sharded.NewShardedInMemoryCache(
	1_000, // number of shards
	func() base.InMemoryCache[string, *User] {
		return safe.NewSafeInMemoryCache(
			lru.NewLRUCache[string, *User](100_000),
		)
	},
	func(key string) uint64 {
		h := fnv.New64a()
		h.Write([]byte(key))
		return h.Sum64()
	},
)
Instead of calling the loader chain every time an invalid key is requested, a "missing cache" can be enabled. Note that it won't protect your app against a DDoS attack with high-cardinality keys.
If the missing keys are infrequent, sharing the missing cache with the main cache might be reasonable:
import "github.com/samber/hot"
cache := hot.NewHotCache[string, int](hot.LRU, 100_000).
WithMissingSharedCache().
Build()
If the missing keys are frequent, use a dedicated cache to prevent pollution of the main cache:
import "github.com/samber/hot"
cache := hot.NewHotCache[string, int](hot.LRU, 100_000).
WithMissingCache(hot.LFU, 50_000).
Build()
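With a missing cache enabled, the negative result itself is memoized. A sketch of the expected flow (the `loader` variable is assumed to be defined as in the earlier examples):

```go
cache := hot.NewHotCache[string, *User](hot.LRU, 100_000).
	WithMissingCache(hot.LFU, 50_000).
	Build()

// First lookup: the loader runs and reports the key as absent.
_, found, _ := cache.GetWithLoaders("ghost", loader) // found == false

// Later lookups within the missing entry's lifetime skip the loader entirely.
_, found, _ = cache.GetWithLoaders("ghost", loader) // found == false, no database hit
```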
// TODO: copy here the benchmarks of bench/ directory
// - compare libraries
// - measure encapsulation cost
// - measure lock cost
// - measure ttl cost
// - measure size.Of cost
// - measure stats collection cost
- Ping me on Twitter @samuelberthe (DMs, mentions, whatever :))
- Fork the project
- Fix open issues or request new features
Don't hesitate ;)
# Install some dev dependencies
make tools
# Run tests
make test
# or
make watch-test
Give a ⭐️ if this project helped you!
Copyright © 2024 Samuel Berthe.
This project is MIT licensed.