Support "Tracing" / Spans #9415
Comments
The way we get spans in InfluxDB is to use metrics, which have the start/stop times as well as counters for how many rows were produced, when execution started and stopped, and various other things. The DataFusion metrics --> Jaeger span exporter for InfluxDB 3.0 can be found here: https://github.com/influxdata/influxdb/blob/8fec1d636e82720389d06355d93245a06a8d90ad/iox_query/src/exec/query_tracing.rs#L93-L112 It uses our own span generation system, as the Rust ecosystem didn't have a standard library that we could find when we originally wrote it. One possibility might be to refactor the code out of
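As a rough illustration of that approach (this is not code from the linked exporter; the visitor and the printing below are just a stand-in for emitting spans), DataFusion's `accept` / `ExecutionPlanVisitor` and `MetricsSet` APIs can be used to walk an executed plan and read the per-operator values that a span exporter would attach:

```rust
use datafusion::error::DataFusionError;
use datafusion::physical_plan::{accept, displayable, ExecutionPlan, ExecutionPlanVisitor};

/// Illustrative visitor: after a plan has run, read the per-operator
/// metrics that a span exporter would translate into span attributes.
struct MetricsPrinter;

impl ExecutionPlanVisitor for MetricsPrinter {
    type Error = DataFusionError;

    fn pre_visit(&mut self, plan: &dyn ExecutionPlan) -> Result<bool, Self::Error> {
        if let Some(metrics) = plan.metrics() {
            println!(
                "{}: output_rows={:?} elapsed_compute_ns={:?}",
                displayable(plan).one_line(),
                metrics.output_rows(),
                metrics.elapsed_compute(),
            );
        }
        Ok(true) // keep visiting the children
    }
}

/// Walk an (already executed) plan top-down and report its metrics.
fn report_metrics(plan: &dyn ExecutionPlan) -> Result<(), DataFusionError> {
    let mut visitor = MetricsPrinter;
    accept(plan, &mut visitor)
}
```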
I think we could get most of the way there by implementing the following changes:
With these changes, all the data of the existing
for compatibility with code that is reading metrics from the existing

Some benefits compared to the current metrics implementation:
Downsides:
Another downside of

But otherwise the high-level idea seems plausible, if a bunch of work. Maybe a quick POC could be done to see what it might look like in practice / how much work or change would be required.
I think this is where custom
Agreed. What would be a good scope for a POC to be both quick to implement and broad enough to cover all related cases? Maybe a subset of operators/streams that are used by a specific, but non-trivial, query. The existing metrics code can be left as-is until we reach a point in the implementation where we're confident that tracing can replace that functionality. Also, I think a lot of it will continue to be used for generating DataFusion-native metrics.
After spending some time with the optimizer, I think it would be a good candidate to PoC the
Hi all, thanks for adding this and investigating the tracing crate. I'd like to suggest being a bit more specific about the goals of adding tracing before jumping in with both feet :). Maybe I can pitch in some use cases to help with this. My team is prototyping a distributed engine on top of Ballista. Since Ballista doesn't yet have a great UI, we started to look at adding some end-to-end tracing (think external client -> Flight SQL query -> scheduler -> enqueue job -> executors -> DF engine). As we realised there is currently no tracing in either project, we quickly found this issue. I think the tracing crate, together with some of the community subscribers (e.g. the opentelemetry stack), can solve this problem, even though there are a number of challenges:
To that end, I'd like to understand if reimplementing metrics on top of tracing is really what this issue is about, or just an attempt at consolidating some of the timing / metrics bookkeeping. Based on my experience with other systems (mostly on the JVM, building and tuning Spark / Kafka deployments), tracing and metrics work really well together, but they are rarely conflated. My suggestion would be to decouple adding tracing (as a tool for people who are monitoring / optimizing engines built on top of DF) from the core metrics refactoring. Lastly, if there is not a lot of work started here: I've already started to play around with some of the suggestions in this thread (adding instrument to execute, instrumenting streams and async blocks, etc.) and I'd be interested in contributing to this track, especially some of the lessons learned around tracing async code and streams.
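To make the end-to-end idea concrete, here is a minimal sketch, assuming only the tracing, tracing-subscriber, and tokio crates (a real deployment would swap the fmt subscriber for an OpenTelemetry layer exporting to Jaeger or a similar backend), of how a request-level span could wrap a DataFusion query so that spans created during execution become its children:

```rust
use tracing::{info_span, Instrument};

#[tokio::main]
async fn main() {
    // Print spans and events to stdout; in production this would be an
    // OpenTelemetry-compatible subscriber instead.
    tracing_subscriber::fmt().with_target(false).init();

    // A span representing the external request (e.g. a Flight SQL query).
    let request_span = info_span!("flight_sql_query", sql = "SELECT 1");

    async {
        // Planning and execution of the DataFusion query would happen here;
        // spans emitted inside (and propagated into spawned tasks) attach to
        // `flight_sql_query` as children.
        tracing::info!("executing query");
    }
    .instrument(request_span)
    .await;
}
```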
Since the internal metrics in datafusion -- aka https://docs.rs/datafusion/latest/datafusion/physical_plan/metrics/index.html -- have start/stop timestamps on them already, we have found it relatively straightforward to convert them to "tracing" spans -- the link for doing so is above. I am not clear what additional benefit more direct tracing integration in datafusion would provide, but I may be missing something.
Hi,

Using internal metrics to generate tracing spans on query completion is a great solution. However, it prevents tracing downstream data sources, which are often more complex than an on-disk Parquet read.

I've been experimenting with tracing a full execution plan by injecting a custom "TracingExec" node before every execution node in the plan. Its

The huge advantage of this method is that it avoids needing to modify every execution node's

However, there is still a major blocking point in the current DataFusion code: spawning tasks in new threads. Whenever a task is spawned in a new thread (e.g., inside a

Even without spans, it would be useful to be able to link logs generated deep in the plan (by a custom

I'd like to propose a very simple change first, before going full

For instance, in

```rust
#[cfg(feature = "tracing")]
use tracing_futures::Instrument;

/// Spawn task that will be aborted if this builder (or the stream
/// built from it) are dropped
pub fn spawn<F>(&mut self, task: F)
where
    F: Future<Output = Result<()>>,
    F: Send + 'static,
{
    #[cfg(feature = "tracing")]
    let task = task.in_current_span();
    self.join_set.spawn(task);
}
```
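For what it's worth, here is a self-contained sketch (not DataFusion code; the span and operator names are made up) of what the proposed hook buys: without `in_current_span()`, events emitted inside a spawned task lose the caller's span, because `tokio::spawn` does not carry the tracing context to the new task:

```rust
use tracing::{info_span, Instrument};

#[tokio::main]
async fn main() {
    tracing_subscriber::fmt().init();

    let span = info_span!("execute_stream", operator = "DataSourceExec");

    let task = async {
        // Without `in_current_span()` below, this event would have no parent
        // span once the task runs on another executor thread.
        tracing::info!("producing record batches");
    };

    // Capture the span that is current at spawn time, mirroring the change
    // proposed for the stream builder above.
    let handle = {
        let _guard = span.enter();
        tokio::spawn(task.in_current_span())
    };

    handle.await.unwrap();
}
```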
I agree with this change. When running instrumentation on top of datafusion, this would make it possible to extract more information.
The main example I can think of is the exact timing of entering/exiting execution of the async tasks. The datafusion metrics record a "start" and "end" timestamp for the whole operator, but they do not record when operators await and give up control to the executor. The tracing API allows for this because a span can be entered and exited multiple times before it is finally closed. This allows you to graph out exactly when async tasks are running in relation to each other and for how long, which can be helpful for identifying bottlenecks where a task is waiting a long time for data from another task. You can kinda use the
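A small sketch of that point, using only the tracing and tokio crates: instrumenting a future with a span causes the span to be entered on each poll and exited whenever the future yields at an await point, so a subscriber that records enter/exit times can reconstruct when the task was actually running rather than just its overall start and end:

```rust
use std::time::Duration;
use tracing::{info_span, Instrument};

#[tokio::main]
async fn main() {
    tracing_subscriber::fmt().init();

    async {
        for i in 0..3 {
            tracing::info!(iteration = i, "doing some work");
            // Each `.await` here exits the span; the next poll re-enters it.
            tokio::time::sleep(Duration::from_millis(10)).await;
        }
    }
    .instrument(info_span!("operator_poll_loop"))
    .await;
}
```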
I've opened #14547 to launch the discussion. The PR was simpler than expected, as all new tasks are spawned in the context of a tokio JoinSet, so I've just wrapped the tokio JoinSet in an implementation which adds
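The linked PR has the actual code; the general shape of such a wrapper (the `SpannedJoinSet` name and details below are illustrative and assume a recent tokio where `JoinSet::spawn` returns an `AbortHandle`; they are not taken from the PR) might look roughly like this:

```rust
use std::future::Future;
use tokio::task::{AbortHandle, JoinSet};
use tracing::Instrument;

/// Illustrative wrapper: a JoinSet that attaches the caller's current
/// tracing span to every task it spawns.
struct SpannedJoinSet<T> {
    inner: JoinSet<T>,
}

impl<T: Send + 'static> SpannedJoinSet<T> {
    fn new() -> Self {
        Self { inner: JoinSet::new() }
    }

    fn spawn<F>(&mut self, task: F) -> AbortHandle
    where
        F: Future<Output = T> + Send + 'static,
    {
        // Capture whatever span is current at the call site, so that events
        // and child spans inside the task stay attached to it.
        self.inner.spawn(task.in_current_span())
    }
}
```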
Is your feature request related to a problem or challenge?
"Tracing" is a visualization technique for understanding how potentially concurrent operations happen, and is common in distributed systems.
You can also visualize DataFusion executions using traces. Here is an example trace we have at InfluxData that is integrated into the rest of our system and visualized using https://www.jaegertracing.io/
This visualization shows when each operator started and stopped (and the operators are annotated with how much CPU time is spent, etc). These spans are integrated into the overall trace of a request through our cloud service, which allows us to understand where a request's time is spent, both across services as well as within our DataFusion based engine.
For more background on tracing, this blog seems to give a reasonable overview: https://signoz.io/blog/distributed-tracing-span/
Describe the solution you'd like
I would like to make it easy for people to add DataFusion ExecutionPlan level tracing to their systems as well.
Given the various libraries available for generating traces, I don't think picking any particular one to build into DataFusion is a good idea. However, adding some way to walk the ExecutionPlan metrics and emit information that can be turned into traces would be very helpful, I think.
This came up twice recently, so I wanted to get it filed into a ticket.
Describe alternatives you've considered
No response
Additional context
@simonvandel noted in Discord
It also came up in slack: https://the-asf.slack.com/archives/C04RJ0C85UZ/p1709051125059619