[Tempo-distributed] Service name and Span name labels do not show in Grafana #2500

Closed
deathsaber opened this issue Jul 12, 2023 · 1 comment

deathsaber commented Jul 12, 2023

We recently migrated from the Tempo single-binary deployment to tempo-distributed, with S3 as the backend object storage. We are running Tempo 2.1.1. Since the migration we have noticed a strange issue: label values disappear from the Grafana dropdowns about an hour after traces stop arriving. As long as traces are continuously being sent, the labels show up fine; once ingestion stops, they vanish roughly an hour later. Searches still return the traces (so Tempo has not lost any data), but the Service Name and Span Name dropdowns stay empty. Please see the screenshot below.

[Screenshot: Grafana Tempo query editor with empty Service Name and Span Name dropdowns]

I looked through the query-frontend logs; they show 200 (success) responses for the /api/search/tag/.../values calls, but it seems no data is actually being returned (note the response_size=2):

level=debug ts=2023-07-12T13:25:03.558028555Z caller=logging.go:101 traceID=0a7ca9ae58f3e96e msg="GET /api/search/tag/service.name/values (200) 4.958505ms"
level=info ts=2023-07-12T13:25:05.921704781Z caller=handler.go:124 tenant=single-tenant method=GET traceID=577773ad8f246fcc url=/api/search/tag/name/values duration=2.615203ms response_size=2 status=200
level=debug ts=2023-07-12T13:25:05.921852173Z caller=logging.go:101 traceID=577773ad8f246fcc msg="GET /api/search/tag/name/values (200) 2.819986ms"
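
For anyone who wants to reproduce this without Grafana in the loop, the same data can be fetched straight from the query-frontend. The sketch below uses the documented /api/search/tag/<tag>/values endpoint and the http_listen_port from the config further down; the service name is from our helm release and may differ in other setups:

    curl -s http://tempo-distributed-query-frontend:3100/api/search/tag/service.name/values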

I came across grafana/tempo#2565, which seems related to the problem I am facing, but I am not entirely sure. Any advice or guidance on how to solve this would be greatly appreciated. I have provided my Tempo config values below, followed by a thought about one ingester setting that might be involved. Is there a setting I missed that could be the culprit?

    compactor:
      compaction:
        block_retention: 720h
        compacted_block_retention: 720h
        compaction_cycle: 30s
        compaction_window: 1h
        max_block_bytes: 107374182400
        max_compaction_objects: 6000000
        max_time_per_tenant: 5m
        retention_concurrency: 10
        v2_in_buffer_bytes: 5242880
        v2_out_buffer_bytes: 20971520
        v2_prefetch_traces_count: 1000
      ring:
        kvstore:
          store: memberlist
    distributor:
      log_received_spans:
        enabled: true
        filter_by_status_error: true
        include_all_attributes: true
      receivers:
        otlp:
          protocols:
            grpc:
              endpoint: 0.0.0.0:4317
            http:
              endpoint: 0.0.0.0:4318
      ring:
        kvstore:
          store: memberlist
    ingester:
      lifecycler:
        ring:
          kvstore:
            store: memberlist
          replication_factor: 1
        tokens_file_path: /var/tempo/tokens.json
    memberlist:
      abort_if_cluster_join_fails: false
      join_members:
      - tempo-distributed-gossip-ring
    multitenancy_enabled: false
    overrides:
      max_bytes_per_tag_values_query: 50000000
      metrics_generator_processors:
      - service-graphs
      - span-metrics
      per_tenant_override_config: /conf/overrides.yaml
    querier:
      frontend_worker:
        frontend_address: tempo-distributed-query-frontend-discovery:9095
      max_concurrent_queries: 20
      search:
        external_endpoints: []
        external_hedge_requests_at: 8s
        external_hedge_requests_up_to: 2
        prefer_self: 10
        query_timeout: 30s
      trace_by_id:
        query_timeout: 10s
    query_frontend:
      max_retries: 2
      search:
        concurrent_jobs: 1000
        target_bytes_per_job: 104857600
      trace_by_id:
        hedge_requests_at: 2s
        hedge_requests_up_to: 2
        query_shards: 50
    server:
      grpc_server_max_recv_msg_size: 4194304
      grpc_server_max_send_msg_size: 4194304
      http_listen_port: 3100
      http_server_read_timeout: 30s
      http_server_write_timeout: 30s
      log_format: logfmt
      log_level: debug
    storage:
      trace:
        backend: s3
        block:
          version: vParquet
        blocklist_poll: 5m
        cache: memcached
        local:
          path: /var/tempo/traces
        memcached:
          consistent_hash: true
          host: tempo-distributed-memcached
          service: memcached-client
          timeout: 500ms
        s3:
          bucket: tempo-traces-bucket
          endpoint: s3.us-east-1.amazonaws.com
          region: us-east-1
        wal:
          path: /var/tempo/wal
    usage_report:
      reporting_enabled: false
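
One more thought, in case it helps narrow this down: if the tag and tag-values endpoints are only served from the ingesters in this version (I have not confirmed this), then the labels disappearing roughly an hour after ingestion stops could line up with completed blocks aging out of the ingesters. A possible, untested workaround sketch would be to keep flushed blocks around in the ingesters for longer via complete_block_timeout (the 2h value is only an example, not something I have validated):

    ingester:
      # keep completed/flushed blocks queryable in the ingester for longer,
      # so tag lookups keep returning values after traces stop arriving;
      # example value only
      complete_block_timeout: 2h

I would still prefer to understand the actual root cause rather than just extend this timeout.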
deathsaber (Author) commented:

This is a duplicate of grafana/tempo#2646 and can be closed.
