Missing info from README #6
Comments
+1
Ok, so be gentle with me while I try to explain. I'll be the first one to admit I suck at both documentation and explaining this. Work in progress, so to speak.
Unfortunately, it seems like workers calling nested workers still causes jobs to be duplicated (like in #10). If anyone wants to take a stab at reproducing the problem in a test, we should be able to fix it.
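For anyone who wants to take a stab at that, a reproduction would presumably need a unique worker that enqueues another unique worker from inside `perform`. A minimal sketch of that shape, with hypothetical worker names (this is not code from the repo):

```ruby
# Hypothetical workers for reproducing the nested-worker duplication:
# a unique worker that enqueues another unique worker from inside perform.
class OuterWorker
  include Sidekiq::Worker
  sidekiq_options unique: true

  def perform(id)
    # Enqueued from inside a running job; per this report, uniqueness
    # does not seem to be honored for jobs pushed from this context.
    InnerWorker.perform_async(id)
    InnerWorker.perform_async(id) # expected to be dropped as a duplicate
  end
end

class InnerWorker
  include Sidekiq::Worker
  sidekiq_options unique: true

  def perform(id); end
end
```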
I don't understand what exactly the expiration parameter does, either. Does it only affect jobs scheduled with …? And if it affects …?
@nberger it only affects jobs scheduled with …
@mhenrixon I wouldn't mind a brief explanation in the README of exactly how the uniqueness is established, too. That would be nice so I don't have to dig through the code to make sure the locking is performed properly; I would much rather take your word for it! That said, I noticed that … A small explanation of the locking procedure and how it is thread safe would definitely help me, and I'm sure many others, gain even more confidence in using the gem. Any chance you could clarify it for me? Thanks!
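Until the README covers it, here is a rough sketch of how an argument-based uniqueness lock is commonly built on Redis with SETNX. To be clear, this illustrates the general technique, not the gem's actual implementation; the key format and helper name are made up:

```ruby
require 'digest'
require 'json'
require 'redis'

# Illustrative only: a common SETNX-based uniqueness lock, not the gem's code.
def acquire_unique_lock(redis, worker_class, queue, args, expiration)
  # Worker class, queue, and arguments all feed the digest, so the same
  # args on a different worker or in a different queue do not collide.
  payload = JSON.generate([worker_class, queue, args])
  key = "unique:#{Digest::MD5.hexdigest(payload)}"

  # SETNX is atomic: of any number of concurrent callers, exactly one
  # succeeds in creating the key, so only that caller enqueues the job.
  if redis.setnx(key, Time.now.to_i)
    redis.expire(key, expiration) # the lock self-destructs after `expiration` seconds
    true
  else
    false # an identical job already holds the lock; skip enqueueing
  end
end
```

In a scheme like this, the thread safety comes from Redis rather than from Ruby: SETNX is atomic on the server, so concurrent clients (threads or processes) cannot both acquire the same key. Note also that such a lock expires on a timer rather than on job completion, which matches the behavior described in the next comment.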
Hello everyone, I had a few questions about this gem, and this issue thread seems to be somewhat centered around them. Essentially, if I have a worker (`DoStuff`) and I queue a job for that worker with `DoStuff.perform_async("with unique argument")`, and then I run that same command again with the same arguments, I want the second instance of the job not to be added to the queue if the first instance has not completed yet. The way I've read the documentation, I expect the job not to be duplicated, no matter how much time has passed, as long as a job with the same worker, queue, and arguments is still waiting to be processed.

What I'm currently experiencing is that the job won't be added to the queue if it's within the expiration time. However, if the first job has not completed but the unique job expiration time has passed (10 minutes, in my case) and I run the command again, the duplicate job is added even though the first one has not completed! Is this the expected behavior of the gem? If not, is there a configuration option I am missing? Here is an example of a worker and the options I'm using:

```ruby
class DoStuff
  include Sidekiq::Worker
  sidekiq_options :queue => "queue_1"
  sidekiq_options unique: true, unique_job_expiration: 60 * 10 # 10 minutes

  def perform(arguments)
    # Do some unique things.
  end
end
```

Thanks!
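Restated as code, the reported timeline looks like this (the timing and argument are illustrative only):

```ruby
DoStuff.perform_async("with unique argument") # queued; unique lock set for 10 minutes
DoStuff.perform_async("with unique argument") # within 10 minutes: dropped as a duplicate

sleep(601) # wait past unique_job_expiration while the first job is still unprocessed

DoStuff.perform_async("with unique argument") # lock expired: queued again (duplicate!)
```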
@tyetrask yeah, that is sort of expected. I suggest you try something like sidekiq-throttler instead; that should better help you achieve what you want. I am opening an issue for deciding how to proceed with this.
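For context, sidekiq-throttler rate-limits job execution rather than deduplicating enqueues. If I remember its options correctly, a throttled version of the worker above would look roughly like this; treat the exact option names as an assumption to verify against that gem's README:

```ruby
class DoStuff
  include Sidekiq::Worker
  # Execute at most one DoStuff job per argument every 10 minutes;
  # excess jobs are delayed and retried later rather than dropped.
  sidekiq_options throttle: { threshold: 1, period: 600, key: ->(arguments) { arguments } }

  def perform(arguments)
    # Do some unique things.
  end
end
```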
Hey @mhenrixon, thanks for the information! We needed to move forward with our project, so we ended up writing middleware that behaves the way we needed. I appreciate all of the work on this and will keep an eye on it in the future. Thanks again!
I just found this project while googling how to make sure certain Sidekiq jobs are not executed multiple times. sidekiq-unique-jobs seems to do exactly that... awesome!

I think there is some info missing in the README though, specifically:

- What exactly does the expiration parameter do, i.e. for how long are jobs considered unique?
- If I have a worker `HardWorker` and I call `HardWorker.perform_async('bob', 5)` multiple times, that job should obviously only be queued once. But what if I call `HardWorker.perform_async('bob', 5)` and `HardWorker.perform_async('jane', 10)`? Are both those jobs queued? I suppose so, but I'm not 100% sure.

I think both these points (and possibly more) should be explained in the README. I'm happy to prepare a pull request for it, if you answer my questions in here.

Thanks for your work on this!
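For what it's worth, the expectation in the second question can be sketched concretely (assuming uniqueness is keyed on the worker class, queue, and arguments, as discussed above):

```ruby
# Different arguments produce different uniqueness keys, so both of these
# should be queued:
HardWorker.perform_async('bob', 5)
HardWorker.perform_async('jane', 10)

# Identical arguments collide on the same key, so only the first of these
# should be queued while its uniqueness lock is held:
HardWorker.perform_async('bob', 5)
HardWorker.perform_async('bob', 5) # dropped as a duplicate
```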