Compactors require large amounts of memory because traces are combined and grow in the backend. This opens the possibility of crafting a long-running trace with a low spans-per-second rate that keeps growing until it eventually OOMs the compactors.
To Reproduce
Steps to reproduce the behavior:
Start Tempo (SHA or version): all versions up to e5f7ded
Perform Operations (Read/Write/Others): carefully craft very long-running traces with a few spans every second; the compactors will eventually combine them into a MEGA trace (the largest we are seeing so far is 1.3GB)
For both possibilities listed, a trace of this size would still cause trouble for the queriers. That is, even if the compactor writes the trace as multiple splits (or ignores the block entirely), the querier is still expected to recombine all the segments. One option on the querier side is to limit the amount of data returned in a single call and add a new paged API to retrieve the remaining splits. As a quick idea for the per-call limit: 100MiB? Even a 100MiB trace is quite large and hard to utilize.
Also, I propose to call any trace over 1GB a GIGA trace :)
Expected behavior
Compactors do not OOM, regardless of trace size.
Environment:
Additional Context
Some possibilities considered: