Generator performance #4232
Conversation
Looks good. I appreciate the TODOs also.
```diff
@@ -129,3 +137,140 @@ func (l testLogger) Log(keyvals ...interface{}) error {
 	l.t.Log(keyvals...)
 	return nil
 }
+
+func BenchmarkPushSpans(b *testing.B) {
```
I love the full-stack benchmark. This is something we might want to do for other components as well.
```go
	}

	b.StopTimer()
	runtime.GC()
```
Why force a GC after the benchmark has been timed? Is this to avoid impact on later benchmarks, or to get an accurate memory summary below?
In this case I was trying to see if the benchmarks could help measure in-use memory, so it is recording `HeapInuse` at the bottom. Without the GC it just keeps growing between runs of the benchmark (`-count=5`, for example), so this was an attempt to make the in-use metric more useful.
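To make the pattern under discussion concrete, here is a minimal sketch of a benchmark that stops the timer, forces a GC, and reports live heap via `runtime.ReadMemStats`. It is hypothetical, not Tempo's actual benchmark; the name `BenchmarkPushSpansSketch` and the elided push step are placeholders.

```go
package generator_test

import (
	"runtime"
	"testing"
)

func BenchmarkPushSpansSketch(b *testing.B) {
	for i := 0; i < b.N; i++ {
		// ... push spans through the component under test ...
	}

	// Stop the timer so cleanup isn't measured, then force a GC so
	// garbage from this run doesn't inflate later runs (e.g. with
	// -count=5) and HeapInuse reflects live memory only.
	b.StopTimer()
	runtime.GC()

	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	b.ReportMetric(float64(m.HeapInuse), "heap_inuse_bytes")
}
```

Running it with, say, `go test -bench=PushSpans -count=5` should then show whether the in-use metric stays stable across repeated runs instead of growing.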
* todos
* more todos and print inuse stats
* Benchmark report heapinuse, ensure cleanup between benchmarks
* Improve memory usage by changing histograms to precompute all labels for all sub-series instead of during each collection
* changelog
What this PR does:
Metrics-generator span metrics and service graphs are very popular, but memory usage can be quite high for high-cardinality setups, for example 1+ million active series in a single pod. I noticed some areas for improvement to reduce memory.

This PR contains 2 updates:

(1) On the surface, it updates histograms to pre-compute all Prometheus labels during series creation instead of at collection time. This way we allocate the labels once instead of on every scrape. This is possible because all of the labels for a specific histogram bucket are fixed, even external labels configured via runtime config (see the sketch after this list).

(2) Almost more importantly in my mind, it adds a suite of benchmarks for the generator, which exercise the WAL and a non-mock registry. These benchmarks will help us identify more improvements like (1). Skimming through the module, I've left several TODOs, large and small, on the next areas to update.
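To illustrate update (1), here is a hedged sketch of pre-computing label sets at series-creation time. The type and function names (`histogramSeries`, `newHistogramSeries`) are illustrative rather than Tempo's actual code, and it assumes a recent `github.com/prometheus/prometheus/model/labels` API.

```go
package registry

import (
	"strconv"

	"github.com/prometheus/prometheus/model/labels"
)

// histogramSeries holds the fully-built label sets for one histogram's
// sub-series. They are computed once at creation, so Collect can reuse
// them on every scrape instead of re-allocating them.
type histogramSeries struct {
	countLabels  labels.Labels   // <name>_count
	sumLabels    labels.Labels   // <name>_sum
	bucketLabels []labels.Labels // <name>_bucket, one per "le" value
}

func newHistogramSeries(name string, base labels.Labels, buckets []float64) *histogramSeries {
	s := &histogramSeries{}
	b := labels.NewBuilder(base)

	b.Set(labels.MetricName, name+"_count")
	s.countLabels = b.Labels()

	b.Set(labels.MetricName, name+"_sum")
	s.sumLabels = b.Labels()

	b.Set(labels.MetricName, name+"_bucket")
	for _, le := range buckets {
		// The "le" label is fixed per bucket, so it can be baked in too.
		b.Set("le", strconv.FormatFloat(le, 'f', -1, 64))
		s.bucketLabels = append(s.bucketLabels, b.Labels())
	}
	return s
}
```

The trade-off is a small amount of extra memory per series in exchange for zero label allocations in the scrape path, which pays off when the same series is collected many times.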
Benchmarks show a large reduction in memory in Collect. Tested in an internal cluster, this gave ~15% total working-set savings.
Which issue(s) this PR fixes:
Fixes #
Checklist
- `CHANGELOG.md` updated - the order of entries should be `[CHANGE]`, `[FEATURE]`, `[ENHANCEMENT]`, `[BUGFIX]`