`uniquejobs` hash doesn't get cleaned up #195
Comments
We are seeing the same problem: periodically we get an out-of-memory error from Redis because of this, and we have to run a rake task to clear these jobs.
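A brute-force version of such a task (a sketch only, assuming a Rails app; the task name is made up, and note that dropping the whole hash also discards locks held by jobs that are still queued or running) boils down to:

```ruby
# lib/tasks/uniquejobs.rake -- illustrative path and task name.
# Brute-force sketch: it deletes the whole "uniquejobs" hash, which also
# discards any locks currently held by queued or running jobs.
namespace :uniquejobs do
  desc "Drop the sidekiq-unique-jobs bookkeeping hash"
  task clear: :environment do
    Sidekiq.redis { |conn| conn.del("uniquejobs") }
  end
end
```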
Attempting to fix [this bug](mhenrixon#195)
We've come across the same problem. I've tried to put together a more careful/verbose script (i.e. still not foolproof). It tries to do a few checks before actually deleting data. https://gist.github.com/riyad/9086d2b17ff1e8c091cdb1c7ac501b62
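In outline the approach is: collect every jid Sidekiq still references (queues, retries, scheduled jobs, running workers) and only remove `uniquejobs` fields whose jid is no longer known anywhere. A rough sketch of that shape, not the gist itself:

```ruby
# Rough sketch of a "verify before deleting" pass, not the gist itself.
# It collects every jid Sidekiq still references and only removes
# "uniquejobs" hash fields whose jid is no longer known anywhere.
require "sidekiq/api"
require "json"
require "set"

live_jids = Set.new
Sidekiq::Queue.all.each { |queue| queue.each { |job| live_jids << job.jid } }
Sidekiq::RetrySet.new.each { |job| live_jids << job.jid }
Sidekiq::ScheduledSet.new.each { |job| live_jids << job.jid }
Sidekiq::Workers.new.each do |_process_id, _thread_id, work|
  payload = work["payload"]
  payload = JSON.parse(payload) if payload.is_a?(String) # payload format differs across Sidekiq versions
  live_jids << payload["jid"]
end

Sidekiq.redis do |conn|
  cursor = "0"
  loop do
    cursor, fields = conn.hscan("uniquejobs", cursor, count: 1_000)
    fields.each do |jid, _unique_key|
      conn.hdel("uniquejobs", jid) unless live_jids.include?(jid)
    end
    break if cursor == "0"
  end
end
```

With millions of leftover fields this can take a while, but iterating with HSCAN avoids blocking Redis the way a single HGETALL would.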
Relates to this issue in the original gem: mhenrixon#195
@mhenrixon any idea how to fix this? We have been holding off on upgrading to versions > 4.0.11.
Any idea when this fix will be in a versioned release?
So great that the fix is released! 🎉 We have deployed it and confirmed our memory footprint has stopped growing. Now we are turning our minds to clean-up. Has anyone else invested effort into a script that will look through Redis and clean out orphaned `uniquejobs` hash entries?
I haven't looked into this yet. It would be totally awesome to have the possibility to clean up. Maybe I can find some time to do this over Easter. It should be a matter of matching the hash's jids against the jids that are in Redis, but... there could be an Easter 🥚 hidden somewhere.
There is #195 (comment) but won't that wipe out
@mhenrixon Any update on a clean up solution? Our Redis instance is sitting at a steady state of 600MB of bogus keys. I've been letting it sit there hoping I will "get something for free" to clean it out. Should there be a separate issue to track this? |
This reverts #927. Sidekiq-unique-jobs 5.x requires redis 3.x but our infrastructure uses 2.8. We also have to use the fork rather than a released version of 4.x because the last release of 4.x doesn't include a fix for mhenrixon/sidekiq-unique-jobs#195 which means the `uniquejobs` hash key in redis never gets smaller. Although there is a fix for this in 5.x (see: https://github.com/mhenrixon/sidekiq-unique-jobs/pulls/200 - this commit is what is on our fork of 4.x) it may have been changed to rely on expiry features of redis 3.x that are not available in redis 2.x. On staging this key is currently 6.5M entries, and consumes ~500MB. On production it's only 2.5M entries and consumes ~200MB. We're running a (much simplified version of) this script: https://gist.github.com/riyad/9086d2b17ff1e8c091cdb1c7ac501b62 in a screen session to remove any expired keys from this hash.
This reverts #927. Sidekiq-unique-jobs 5.x requires redis 3.x but our infrastructure uses 2.8. We also have to use the fork rather than a released version of 4.x because the last release of 4.x doesn't include a fix for mhenrixon/sidekiq-unique-jobs#195 which means the `uniquejobs` hash key in redis never gets smaller. Although there is a fix for this in 5.x (see: https://github.com/mhenrixon/sidekiq-unique-jobs/pulls/200 - this commit is what is on our fork of 4.x) it may have been changed to rely on expiry features of redis 3.x that are not available in redis 2.x. On staging this key is currently 6.5M entries, and consumes ~500MB. On production it's only 2.5M entries and consumes ~200MB. We tried running a (much simplified version of) this script: https://gist.github.com/riyad/9086d2b17ff1e8c091cdb1c7ac501b62 in a screen session to remove any expired keys from this hash, but unfortunately the rate of adding keys to the uniquejobs hash was greater than the rate of removal. Instead we waited until the queue was drained and deleted the key.
Sidekiq-unique-jobs 5.x requires redis 3.x but our infrastructure uses 2.8. We also have to use the fork rather than a released version of 4.x because the last release of 4.x doesn't include a fix for mhenrixon/sidekiq-unique-jobs#195 which means the `uniquejobs` hash key in redis never gets smaller. Although there is a fix for this in 5.x (see: https://github.com/mhenrixon/sidekiq-unique-jobs/pulls/200 - this commit is what is on our fork of 4.x) it may have been changed to rely on expiry features of redis 3.x that are not available in redis 2.x. On staging this key is currently 2.5M entries, and consumes ~200MB. On production it's only 1.2M entries and consumes ~100MB. We tried running a (much simplified version of) this script: https://gist.github.com/riyad/9086d2b17ff1e8c091cdb1c7ac501b62 in a screen session to remove any expired keys from this hash, but unfortunately the rate of adding keys to the uniquejobs hash was greater than the rate of removal. Instead we waited until the queue was drained and deleted the key.
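The "wait until drained, then delete the key" step amounts to roughly the following, using the standard Sidekiq API; the key name comes from this thread, everything else is a sketch:

```ruby
# Sketch of "wait until drained, then delete the key": only drop the
# bookkeeping hash once nothing is queued, scheduled, retrying or running.
require "sidekiq/api"

stats = Sidekiq::Stats.new
drained = stats.enqueued.zero? &&
          stats.scheduled_size.zero? &&
          stats.retry_size.zero? &&
          Sidekiq::Workers.new.size.zero?

if drained
  Sidekiq.redis { |conn| conn.del("uniquejobs") }
else
  puts "Jobs still pending or running; leaving the uniquejobs hash alone"
end
```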
The versions we are using in one service are:
We use both `until_executed` and `until_timeout`. The `uniquejob*` keys are generated and cleaned up properly, but a copy of the mapping is left inside the `uniquejobs` hash, and it keeps growing. This problem started exactly at 4.1.2 (we tested various versions around that time), and it still seems to be a problem with the latest version, 4.1.8.

The problem here is that the hash size grows to such an extent that it eventually fills up the Redis memory. Normal metrics such as the number of keys don't help debug this issue.
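For context, a 4.x worker using those two lock types is configured roughly like this; the worker classes, arguments and the 30-minute expiration below are illustrative, and the option names (`unique:`, `unique_expiration:`) follow the 4.x style and differ in later major versions:

```ruby
# Illustrative workers only; class names, arguments and the 30-minute
# expiration are made up. Option names follow sidekiq-unique-jobs 4.x.
class OrderSyncWorker
  include Sidekiq::Worker
  sidekiq_options unique: :until_executed

  def perform(order_id)
    # lock is released when the job finishes executing
  end
end

class ReportWorker
  include Sidekiq::Worker
  sidekiq_options unique: :until_timeout, unique_expiration: 30 * 60

  def perform(report_id)
    # lock is only released when the timeout expires
  end
end
```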
The internal structure of the hash is
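roughly a flat mapping of jids to their `uniquejobs:<digest>` keys, judging from the comments above. The snippet below is a made-up illustration of what inspecting it might look like (jids, digests and sizes are not real data, and the layout is inferred from this thread rather than taken from the gem's source):

```ruby
# Made-up jids/digests; the jid => unique-key layout is inferred from the
# comments in this thread, not taken from the gem's source.
Sidekiq.redis do |conn|
  conn.hlen("uniquejobs")
  # => 6_500_000   (one leftover field per job that ever took a lock)

  conn.hscan("uniquejobs", "0", count: 2).last
  # => [["2f9b8d7c1a4e5f60718293a4", "uniquejobs:8e4f0c9b2d6a1357f2ab34cd"],
  #     ["5c1d2e3f4a5b6c7d8e9f0a1b", "uniquejobs:0a1b2c3d4e5f60718293a4b5"]]
end
```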