store panics with "slice bounds out of range" #3057
Ack, let's take a look! Thanks for reporting.
Is it always reproducible? I wonder if #2805 is related.
Yes, it happens whenever I run a query over a longer time range.
store-0:
store-1:
I don't know if it's relevant to the case, but both we and the reporter in #2805 are using an object storage solution from Cloudian.
Still relevant, I guess, and to be checked; help wanted.
We do have some other Thanos instances using the same S3 endpoint (but another bucket) which do not panic. I've also started removing "suspect" blocks (we had some where chunks 1 to 5 were missing) from S3, but so far the effort has been fruitless.
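As an aside, one way to look for blocks with missing or corrupted chunk files is the built-in bucket verifier. This is only a hedged sketch: recent Thanos releases expose it as `thanos tools bucket verify` (older releases such as v0.14 used `thanos bucket verify`), and `bucket.yaml` here stands in for your object-store configuration file:

```
thanos tools bucket verify --objstore.config-file=bucket.yaml
```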
Hello 👋 Looks like there was no activity on this issue for the last two months.
Closing for now as promised, let us know if you need this to be reopened! 🤗
Thanos, Prometheus and Golang version used:
prometheus/prometheus:v2.19.0
thanos/thanos:v0.14.0
Object Storage Provider:
S3
What happened:
When running a query with a 90-day timespan, the store panics.
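To make the error class concrete: a "slice bounds out of range" panic in Go occurs when a slice expression reaches past the length of the underlying data, for example when chunk offsets taken from an index point beyond the bytes actually fetched from object storage. The sketch below is purely illustrative of that mechanism, not Thanos's actual code path; all names in it are hypothetical.

```go
package main

import "fmt"

// chunkRef loosely mimics an index entry that says where a chunk's bytes
// live inside a fetched segment. These fields are hypothetical, not
// Thanos's real data structures.
type chunkRef struct {
	offset int
	length int
}

// readChunk slices the requested chunk out of the fetched segment.
// Without the bounds check, a truncated or corrupted segment would make
// the slice expression panic with "slice bounds out of range".
func readChunk(segment []byte, ref chunkRef) ([]byte, error) {
	if ref.offset < 0 || ref.offset+ref.length > len(segment) {
		return nil, fmt.Errorf("chunk ref %d:%d out of bounds for segment of %d bytes",
			ref.offset, ref.length, len(segment))
	}
	return segment[ref.offset : ref.offset+ref.length], nil
}

func main() {
	segment := make([]byte, 16) // pretend only 16 bytes came back from S3
	_, err := readChunk(segment, chunkRef{offset: 8, length: 32})
	fmt.Println(err) // chunk ref 8:32 out of bounds for segment of 16 bytes
}
```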
Full logs to relevant components:
Logs from store
Blocks in S3 bucket
Anything else we need to know:
We are running the compactor with the following retention times:
--retention.resolution-raw=32d
--retention.resolution-5m=62d
--retention.resolution-1h=367d
And we have enabled --query.auto-downsampling in the querier.
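For reference, those flags correspond to a compactor and querier started roughly like the sketch below; the data directory, object-store config file, and store endpoint are placeholders, not values taken from this issue:

```
thanos compact \
  --data-dir=/var/thanos/compact \
  --objstore.config-file=bucket.yaml \
  --retention.resolution-raw=32d \
  --retention.resolution-5m=62d \
  --retention.resolution-1h=367d

thanos query \
  --query.auto-downsampling \
  --store=<store-grpc-endpoint>
```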