
[minio storage] thanos store: compressed blocks stop working #552

Closed
wscheicher opened this issue Oct 4, 2018 · 6 comments

@wscheicher

Thanos, Prometheus and Golang version used

thanos build date: 20180925-07:41:17
go version: go1.10.3
prometheus: 2.4.2
local minio storage (version from 20180925)

What happened

As soon as compressed blocks reached compaction level 6 and roughly 160MB in size, Thanos started giving errors like:

No datapoints found.
receive series: rpc error: code = Aborted desc = fetch series for block 01CR98G8ECSA44NMNGKSQ6EJ4T: preload series: invalid remaining size 65536, expected 103779

and the whole range for that block is missing in the graphs.

What you expected to happen

Queries and plots working without any errors or gaps, just as they did one compaction level before.

@bwplotka (Member) commented Oct 4, 2018

Nice. Maybe it's a minio size limit? (:

@wscheicher (Author)

No idea. All I can say is that thanos sidecar and thanos compact didn't have a problem writing to minio.

@bwplotka changed the title from "thanos store: compressed blocks stop working" to "[minio storage] thanos store: compressed blocks stop working" on Oct 4, 2018
@bwplotka (Member)

What's the overall size of the minio storage? It looks like a minio bug rather than a Thanos one.

@wscheicher (Author)

When I filed the bug the TSDB used around 400MB; it was about one month of data.
Now the size has tripled, and the filesystem still has 15GB of free space.

@bwplotka (Member) commented Oct 31, 2019

Looks like a duplicate of #271 (:

This might be caused by a sync inconsistency between the store and the compactor, for example.

@bwplotka (Member) commented Dec 6, 2019

Actually the reason might be similar to this: #146

We fetch fixed-size series entries, so for series with an extremely large number of label pairs and chunks we can hit this problem. Looks like in this case it's 103779 bytes vs a max of 64*1024.

Let's keep this in mind; once we have more reports or see this more often, we should investigate options like:

  • Retry with fetching more (see the sketch below)
  • Consider adding size info to our index-header or maybe even the postings in the TSDB index.
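
To make the failure mode concrete, here is a minimal sketch (not the actual Thanos code; the function names, constants, and retry logic are illustrative assumptions) of how a fixed per-series read budget produces exactly this error, and how "retry with fetching more" could work around it:

```go
package main

import "fmt"

// maxSeriesSize mirrors the fixed per-series read budget implied by the
// error message (64*1024 bytes). Illustrative only, not a Thanos identifier.
const maxSeriesSize = 64 * 1024

// preloadSeries simulates reading one encoded series out of a pre-fetched,
// fixed-size byte range. If the series is larger than the budget, it fails
// just like the reported error, and the caller may retry with a larger range.
func preloadSeries(buf []byte, seriesLen int) ([]byte, error) {
	if seriesLen > len(buf) {
		return nil, fmt.Errorf(
			"preload series: invalid remaining size %d, expected %d",
			len(buf), seriesLen)
	}
	return buf[:seriesLen], nil
}

func main() {
	buf := make([]byte, maxSeriesSize)

	// A series with a huge number of label pairs/chunks may encode to more
	// bytes than the budget, e.g. the 103779 from this report.
	if _, err := preloadSeries(buf, 103779); err != nil {
		fmt.Println("first attempt:", err)

		// Retry with a larger fetch sized to the actual series length.
		bigger := make([]byte, 103779)
		if _, err := preloadSeries(bigger, 103779); err == nil {
			fmt.Println("retry with a larger range succeeds")
		}
	}
}
```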
