What happened?
Description
This is an issue to track the collector changes required as part of this bug.
Steps to Reproduce
Found in the above issue.
Expected Result
The Prometheus receiver's target allocator configuration populates new jobs for the collector.
Actual Result
Target creation fails with `scrape interval cannot be 0` errors on the version above (see the log output below).
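For context on the failing check: a job definition with no `scrape_interval` decodes to the zero value of Prometheus's `model.Duration`, which is exactly the condition target creation rejects. A minimal, hypothetical illustration of that condition (not the receiver's actual code):

```go
package main

import (
	"fmt"

	"github.com/prometheus/common/model"
	"gopkg.in/yaml.v2"
)

// job mirrors just the fields relevant here; it is an illustration, not a
// type from the collector or from Prometheus.
type job struct {
	JobName        string         `yaml:"job_name"`
	ScrapeInterval model.Duration `yaml:"scrape_interval"`
}

func main() {
	// A job as the target allocator might hand it out, with no interval set.
	var j job
	if err := yaml.Unmarshal([]byte("job_name: dummy\n"), &j); err != nil {
		panic(err)
	}
	// The zero interval is precisely the condition Prometheus reports as
	// "scrape interval cannot be 0" when building targets.
	fmt.Println(j.ScrapeInterval == 0) // true
}
```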
Collector version
0.60.0
Environment information
Environment
OS: Kubernetes
OpenTelemetry Collector configuration

```yaml
receivers:
  prometheus:
    config:
      global:
        scrape_interval: 30s
      scrape_configs:
        - job_name: dummy
          static_configs:
            - targets:
                - 127.0.0.1:8888
    target_allocator:
      endpoint: http://ta-test-targetallocator
      interval: 30s
      collector_id: ${POD_NAME}
      http_sd_config:
        refresh_interval: 60s

exporters:
  prometheus:
    endpoint: :9100
    enable_open_metrics: true
    resource_to_telemetry_conversion:
      enabled: true

service:
  pipelines:
    metrics:
      receivers:
        - prometheus
      exporters:
        - prometheus
  telemetry:
    metrics:
      address: 0.0.0.0:8888
```
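As a quick way to see what the allocator is actually serving, the same HTTP SD endpoint the receiver queries (URL taken verbatim from the log output below) can be fetched directly. A diagnostic sketch, not part of the collector:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Endpoint copied from the error messages in the log output below.
	url := "http://ta-test-targetallocator/jobs/dummy/targets?collector_id=ta-test-collector-0"

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The body is standard Prometheus http_sd JSON:
	// [{"targets": ["host:port", ...], "labels": {...}}]
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```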
Log output

```
2022-09-16T16:59:34.907Z info service/telemetry.go:115 Setting up own telemetry...
2022-09-16T16:59:34.908Z info service/telemetry.go:156 Serving Prometheus metrics {"address": "0.0.0.0:8888", "level": "basic"}
2022-09-16T16:59:34.910Z info service/service.go:112 Starting otelcol... {"Version": "0.60.0", "NumCPU": 4}
2022-09-16T16:59:34.910Z info extensions/extensions.go:42 Starting extensions...
2022-09-16T16:59:34.910Z info pipelines/pipelines.go:74 Starting exporters...
2022-09-16T16:59:34.910Z info pipelines/pipelines.go:78 Exporter is starting... {"kind": "exporter", "data_type": "metrics", "name": "prometheus"}
2022-09-16T16:59:34.910Z info pipelines/pipelines.go:82 Exporter started. {"kind": "exporter", "data_type": "metrics", "name": "prometheus"}
2022-09-16T16:59:34.910Z info pipelines/pipelines.go:86 Starting processors...
2022-09-16T16:59:34.910Z info pipelines/pipelines.go:98 Starting receivers...
2022-09-16T16:59:34.910Z info pipelines/pipelines.go:102 Receiver is starting... {"kind": "receiver", "name": "prometheus", "pipeline": "metrics"}
2022-09-16T16:59:34.911Z info pipelines/pipelines.go:106 Receiver started. {"kind": "receiver", "name": "prometheus", "pipeline": "metrics"}
2022-09-16T16:59:34.911Z info service/service.go:129 Everything is ready. Begin running and processing data.
2022-09-16T16:59:39.911Z error scrape/scrape.go:488 Creating target failed {"kind": "receiver", "name": "prometheus", "pipeline": "metrics", "scrape_pool": "/jobs/serviceMonitor%2Fdefault%2Fsm-test%2F0/targets", "error": "instance 0 in group http://ta-test-targetallocator/jobs/serviceMonitor%2Fdefault%2Fsm-test%2F0/targets?collector_id=ta-test-collector-0:1: scrape interval cannot be 0", "errorVerbose": "scrape interval cannot be 0\ngithub.com/prometheus/prometheus/scrape.PopulateLabels\n\tgithub.com/prometheus/[email protected]/scrape/target.go:446\ngithub.com/prometheus/prometheus/scrape.TargetsFromGroup\n\tgithub.com/prometheus/[email protected]/scrape/target.go:504\ngithub.com/prometheus/prometheus/scrape.(*scrapePool).Sync\n\tgithub.com/prometheus/[email protected]/scrape/scrape.go:486\ngithub.com/prometheus/prometheus/scrape.(*Manager).reload.func1\n\tgithub.com/prometheus/[email protected]/scrape/manager.go:222\nruntime.goexit\n\truntime/asm_amd64.s:1571\ninstance 0 in group http://ta-test-targetallocator/jobs/serviceMonitor%2Fdefault%2Fsm-test%2F0/targets?collector_id=ta-test-collector-0:1\ngithub.com/prometheus/prometheus/scrape.TargetsFromGroup\n\tgithub.com/prometheus/[email protected]/scrape/target.go:506\ngithub.com/prometheus/prometheus/scrape.(*scrapePool).Sync\n\tgithub.com/prometheus/[email protected]/scrape/scrape.go:486\ngithub.com/prometheus/prometheus/scrape.(*Manager).reload.func1\n\tgithub.com/prometheus/[email protected]/scrape/manager.go:222\nruntime.goexit\n\truntime/asm_amd64.s:1571"}
github.com/prometheus/prometheus/scrape.(*scrapePool).Sync
	github.com/prometheus/[email protected]/scrape/scrape.go:488
github.com/prometheus/prometheus/scrape.(*Manager).reload.func1
	github.com/prometheus/[email protected]/scrape/manager.go:222
2022-09-16T16:59:39.911Z error scrape/scrape.go:488 Creating target failed {"kind": "receiver", "name": "prometheus", "pipeline": "metrics", "scrape_pool": "/jobs/dummy/targets", "error": "instance 0 in group http://ta-test-targetallocator/jobs/dummy/targets?collector_id=ta-test-collector-0:0: scrape interval cannot be 0", "errorVerbose": "scrape interval cannot be 0\ngithub.com/prometheus/prometheus/scrape.PopulateLabels\n\tgithub.com/prometheus/[email protected]/scrape/target.go:446\ngithub.com/prometheus/prometheus/scrape.TargetsFromGroup\n\tgithub.com/prometheus/[email protected]/scrape/target.go:504\ngithub.com/prometheus/prometheus/scrape.(*scrapePool).Sync\n\tgithub.com/prometheus/[email protected]/scrape/scrape.go:486\ngithub.com/prometheus/prometheus/scrape.(*Manager).reload.func1\n\tgithub.com/prometheus/[email protected]/scrape/manager.go:222\nruntime.goexit\n\truntime/asm_amd64.s:1571\ninstance 0 in group http://ta-test-targetallocator/jobs/dummy/targets?collector_id=ta-test-collector-0:0\ngithub.com/prometheus/prometheus/scrape.TargetsFromGroup\n\tgithub.com/prometheus/[email protected]/scrape/target.go:506\ngithub.com/prometheus/prometheus/scrape.(*scrapePool).Sync\n\tgithub.com/prometheus/[email protected]/scrape/scrape.go:486\ngithub.com/prometheus/prometheus/scrape.(*Manager).reload.func1\n\tgithub.com/prometheus/[email protected]/scrape/manager.go:222\nruntime.goexit\n\truntime/asm_amd64.s:1571"}
github.com/prometheus/prometheus/scrape.(*scrapePool).Sync
	github.com/prometheus/[email protected]/scrape/scrape.go:488
github.com/prometheus/prometheus/scrape.(*Manager).reload.func1
	github.com/prometheus/[email protected]/scrape/manager.go:222
2022-09-16T16:59:39.911Z error scrape/scrape.go:488 Creating target failed {"kind": "receiver", "name": "prometheus", "pipeline": "metrics", "scrape_pool": "/jobs/serviceMonitor%2Fdefault%2Fsm-test%2F0/targets", "error": "instance 0 in group http://ta-test-targetallocator/jobs/serviceMonitor%2Fdefault%2Fsm-test%2F0/targets?collector_id=ta-test-collector-0:0: scrape interval cannot be 0", "errorVerbose": "scrape interval cannot be 0\ngithub.com/prometheus/prometheus/scrape.PopulateLabels\n\tgithub.com/prometheus/[email protected]/scrape/target.go:446\ngithub.com/prometheus/prometheus/scrape.TargetsFromGroup\n\tgithub.com/prometheus/[email protected]/scrape/target.go:504\ngithub.com/prometheus/prometheus/scrape.(*scrapePool).Sync\n\tgithub.com/prometheus/[email protected]/scrape/scrape.go:486\ngithub.com/prometheus/prometheus/scrape.(*Manager).reload.func1\n\tgithub.com/prometheus/[email protected]/scrape/manager.go:222\nruntime.goexit\n\truntime/asm_amd64.s:1571\ninstance 0 in group http://ta-test-targetallocator/jobs/serviceMonitor%2Fdefault%2Fsm-test%2F0/targets?collector_id=ta-test-collector-0:0\ngithub.com/prometheus/prometheus/scrape.TargetsFromGroup\n\tgithub.com/prometheus/[email protected]/scrape/target.go:506\ngithub.com/prometheus/prometheus/scrape.(*scrapePool).Sync\n\tgithub.com/prometheus/[email protected]/scrape/scrape.go:486\ngithub.com/prometheus/prometheus/scrape.(*Manager).reload.func1\n\tgithub.com/prometheus/[email protected]/scrape/manager.go:222\nruntime.goexit\n\truntime/asm_amd64.s:1571"}
github.com/prometheus/prometheus/scrape.(*scrapePool).Sync
	github.com/prometheus/[email protected]/scrape/scrape.go:488
github.com/prometheus/prometheus/scrape.(*Manager).reload.func1
	github.com/prometheus/[email protected]/scrape/manager.go:222
```
Additional context
This issue is solely for tracking the collector changes required to make this work.
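One plausible direction for the collector-side change (my assumption, not necessarily the fix that ends up merged): run the scrape configs assembled from the target allocator's responses through Prometheus's own config loading, which copies the global `scrape_interval` onto any job that does not set its own, so a zero interval can never reach target creation. A sketch of that mechanism:

```go
package main

import (
	"fmt"

	promconfig "github.com/prometheus/prometheus/config"
)

func main() {
	// A scrape config as it might be assembled from the target allocator's
	// response: the job itself carries no scrape_interval (hypothetical payload).
	raw := `
global:
  scrape_interval: 30s
scrape_configs:
  - job_name: dummy
`
	// config.Load applies the global scrape_interval to every job that
	// does not set one, so the interval ends up 30s instead of zero.
	cfg, err := promconfig.Load(raw, false, nil)
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg.ScrapeConfigs[0].ScrapeInterval) // 30s
}
```

Whether the defaulting happens via `config.Load` or by copying the global interval explicitly is an implementation detail; the point is that jobs coming from the allocator currently appear to bypass it.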
Pinging code owners: @Aneurysm9 @dashpole. See Adding Labels via Comments if you do not have permissions to add labels yourself.