
[Traceql Metrics] PR 1 - Engine #3251

Merged
merged 18 commits into main from traceql-metrics-1-engine on Jan 11, 2024

Conversation

mdisibio (Contributor) commented Dec 19, 2023

What this PR does:
This is the first step in supporting TraceQL metrics queries like {status=error} | rate() by (resource.service.name). It adds support to the language, parser, and engine for the two aggregates rate() and count_over_time(), including optional grouping with by(a,b,c).
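
A few concrete examples of what the language now accepts (this assumes the exported Parse entry point in pkg/traceql and its import path; treat it as an illustrative sketch rather than code from this PR):

  package main

  import (
      "fmt"

      "github.com/grafana/tempo/pkg/traceql"
  )

  func main() {
      queries := []string{
          `{ status = error } | rate()`,
          `{ status = error } | rate() by (resource.service.name)`,
          `{ status = error } | count_over_time() by (resource.service.name, span.http.status_code)`,
      }
      for _, q := range queries {
          // Parse is assumed to return the query's root expression or a syntax error.
          if _, err := traceql.Parse(q); err != nil {
              fmt.Printf("%s -> %v\n", q, err)
              continue
          }
          fmt.Printf("%s -> ok\n", q)
      }
  }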

Features specifically not included:

  • This is just the raw language and engine pieces. They are not referenced by anything yet (querier, etc.); that will come in follow-up PRs.
  • There is no customizable rate interval yet; it only uses the step from the query. This means rate() works, but not rate(5m). This is vastly simpler and the best way to get started, since the step interval is easily set in the Grafana UI (a short sketch of how the step is used follows this list).
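
A minimal sketch of how rate() falls out of the step, assuming only what is described above (names are illustrative, not the engine's):

  // Minimal sketch, not the engine code: with no rate interval argument, each
  // output sample is the number of matching spans observed in one step bucket
  // divided by the step length in seconds.
  package sketch

  import "time"

  func rateForStep(spansInStep int, step time.Duration) float64 {
      return float64(spansInStep) / step.Seconds()
  }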

Some less-related updates:

  • Fixed the select statement to take a list of attributes/intrinsics and not generic FieldExpressions. Queries like select(1+.foo) would parse and "run", but not return anything. I believe this was an oversight, so I fixed it by making the language parsing more strict. The main driver is that I wanted to reuse the attributeList spec for both select(a,b,c) and rate/count by(a,b,c).
  • Proto linter? I think this was updated recently, so there is a lot of unrelated churn in the .proto.
  • Fixed some brittle tests

Notes
This is one entry in a set of chained PRs. The full body of work has been split into separate buckets to make reviews and updates more manageable.

  1. [Traceql Metrics] PR 1 - Engine #3251
  2. [Traceql Metrics] PR 2 - API #3252
  3. [Traceql Metrics] PR 3 - Trace ID sharding #3258

Which issue(s) this PR fixes:
n/a

Checklist

  • Tests updated
  • Documentation added
  • CHANGELOG.md updated - the order of entries should be [CHANGE], [FEATURE], [ENHANCEMENT], [BUGFIX]

@@ -23,7 +33,8 @@ type typedExpression interface {
 }

 type RootExpr struct {
-	Pipeline Pipeline
+	Pipeline        Pipeline
+	MetricsPipeline metricsFirstStageElement

mdisibio (Contributor Author) commented:

Having the metrics pipeline split from the spanset filter pipeline isn't ideal, but it's the best option for now. We send the eval callback of the spanset pipeline into the storage SecondPass, and the metrics pipeline has a different signature (its output is time series). Long term this split needs to be closed so we end up with one pipeline, but the current form will carry the functionality a long way. This will work until we add cross-time-series arithmetic like:

  ({a} | rate()) / 
  ({b} | rate())
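
A rough sketch of the split described here, with illustrative type and method names rather than the actual engine signatures: the spanset pipeline keeps its spansets-in/spansets-out shape and is driven by the storage layer's second pass, while the metrics pipeline observes the surviving spans and hands back time series at the end.

  package sketch

  import "time"

  // span and timeSeries are simplified stand-ins for the engine's real types.
  type span struct {
      traceID []byte
      start   time.Time
  }

  type timeSeries map[string][]float64 // labels -> one sample per step

  // spansetStage mirrors the existing filter pipeline: spansets in, spansets out.
  type spansetStage interface {
      evaluate(in []span) []span
  }

  // metricsStage mirrors the role of metricsFirstStageElement: it observes
  // matching spans and produces time series instead of spansets.
  type metricsStage interface {
      observe(s span)
      result() timeSeries
  }

  // rootExpr carries both pipelines side by side until they can be unified.
  type rootExpr struct {
      pipeline        spansetStage
      metricsPipeline metricsStage
  }

  // run shows where the two halves meet: spans that survive the spanset
  // pipeline are fed into the metrics pipeline, whose result is collected
  // once at the end.
  func (e *rootExpr) run(batches [][]span) timeSeries {
      for _, b := range batches {
          for _, s := range e.pipeline.evaluate(b) {
              e.metricsPipeline.observe(s)
          }
      }
      return e.metricsPipeline.result()
  }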

return e.metricsPipeline.result(), nil
}

// SpanDeduper2 is EXTREMELY LAZY. It attempts to dedupe spans for metrics

mdisibio (Contributor Author) commented:

This will be an area where we need to continue experimenting. For now I've chosen the approach that I think offers the best trade-offs. We can easily increase correctness at the cost of performance.

Currently it dedupes using a hash of trace ID and span start time. The downside is that it's obviously not 100% unique, but the performance is great because it doesn't require the query to fetch any additional columns; we already load them for backend queries (the trace ID will be used for sharding). Deduping on trace ID/span ID is obviously the 100% correct method and where we started, but the span ID column is too hefty and significantly reduces performance. A middle-ground option would be to use the nested set span ID (an integer), which is also unique but lighter-weight than span ID. However, the downsides are that it's only populated for complete traces, so we'd need some fallback for partial traces (back to using start time?), and there's no good way to access it directly. We decided not to expose it through the Fetch layer.

The data structure used is important too. The current implementation is a bunch of maps of 32-bit hashes. It has good performance but an increased chance of collision vs 64-bit hashes (or storing trace ID + span ID in the maps directly). We use the last byte of the trace ID to perform a single layer of 1-byte sharding, which reduces the pressure on any single map.
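
A minimal sketch of the chosen approach, assuming nothing beyond what is described above (names are illustrative, not the actual SpanDeduper2 code): hash trace ID plus span start time into 32 bits, and shard the seen-sets by the last byte of the trace ID.

  package sketch

  import "hash/fnv"

  // spanDeduper keeps 256 sets of 32-bit hashes, sharded by the last byte of
  // the trace ID so no single map absorbs all of the inserts.
  type spanDeduper struct {
      seen [256]map[uint32]struct{}
  }

  func newSpanDeduper() *spanDeduper {
      d := &spanDeduper{}
      for i := range d.seen {
          d.seen[i] = make(map[uint32]struct{})
      }
      return d
  }

  // Skip reports whether a span with this trace ID and start time was already
  // counted. Collisions are possible because the hash is only 32 bits; that is
  // the accepted cost of not fetching the span ID column.
  func (d *spanDeduper) Skip(traceID []byte, startTimeUnixNanos uint64) bool {
      h := fnv.New32a()
      h.Write(traceID)
      var ts [8]byte
      for i := 0; i < 8; i++ {
          ts[i] = byte(startTimeUnixNanos >> (8 * i))
      }
      h.Write(ts[:])

      shard := byte(0)
      if len(traceID) > 0 {
          shard = traceID[len(traceID)-1]
      }
      m := d.seen[shard]
      v := h.Sum32()
      if _, ok := m[v]; ok {
          return true
      }
      m[v] = struct{}{}
      return false
  }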

zalegrala (Contributor) left a comment:

This looks like a good start to me.

// has no chance for collisions (whereas a hash32 has a non-zero chance of
// collisions). However it means we have to arbitrarily set an upper limit on
// the maximum number of values.
type FastValues [maxGroupBys]Static

stoewer (Contributor) commented Jan 10, 2024:

This is used for by (foo, bar), right? Is the order of Static values in FastValues relevant?

mdisibio (Contributor Author) replied:

Correct, there is a direct match between the group-by attributes and the values in the array. by(foo,bar) will have values [<f>, <b>, nil, nil, nil]. The surrounding GroupingAggregator has knowledge of both and combines them to create the full time series at the end: { foo="<f>", bar="<b>" }
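
A rough illustration of why the order matters (types here are simplified stand-ins, not the engine's): index i of the array always holds the value of the i-th by() attribute, and the whole fixed-size array is used directly as a comparable map key, so there is nothing to hash and no chance of key collisions.

  package sketch

  const maxGroupBys = 5

  type static string // stand-in for traceql.Static

  // fastValues is a fixed-size, comparable array, so it can key a map directly.
  type fastValues [maxGroupBys]static

  type groupingAggregator struct {
      by     []string           // e.g. {"foo", "bar"} from by(foo, bar)
      series map[fastValues]int // per-group accumulator (count, rate numerator, ...)
  }

  func newGroupingAggregator(by []string) *groupingAggregator {
      return &groupingAggregator{by: by, series: map[fastValues]int{}}
  }

  // observe fills the key positionally; unused trailing slots keep their zero
  // value (nil Static in the real type).
  func (g *groupingAggregator) observe(attrs map[string]static) {
      var key fastValues
      for i, name := range g.by {
          key[i] = attrs[name]
      }
      g.series[key]++
  }

  // labelsFor reassembles { foo="<f>", bar="<b>" } from a key at the end,
  // which is how the full time-series labels are produced.
  func (g *groupingAggregator) labelsFor(key fastValues) map[string]static {
      out := make(map[string]static, len(g.by))
      for i, name := range g.by {
          out[name] = key[i]
      }
      return out
  }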

mdisibio merged commit 19f6a6c into main on Jan 11, 2024
15 checks passed
mdisibio deleted the traceql-metrics-1-engine branch January 11, 2024 15:20