Publisher confirm acks redelivered again and again from 0 to current tag #615
Comments
I don't see a solution that would not involve keeping track of the "most recently seen" delivery tag, which would be reset upon connection recovery. Then we'd use that as the lower range boundary. How does that sound? I'm not sure if most users would appreciate being exposed to the …
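A minimal sketch of that idea, assuming a hypothetical `ConfirmTracker` with a `handle_confirm` method (the names are illustrative, not Bunny's actual internals): remember the highest delivery tag already passed to the user callback, and only replay tags above that mark when a `multiple = true` confirm arrives.

```ruby
# Hypothetical sketch, not Bunny's real code: track the highest delivery tag
# already reported to the confirm callback and use it as the lower bound
# when a confirm arrives with multiple = true.
class ConfirmTracker
  def initialize
    @last_confirmed_tag = 0
  end

  # Called from the channel's basic.ack / basic.nack handler
  def handle_confirm(delivery_tag, multiple, nack, &callback)
    if multiple
      # Only replay tags we have not yet confirmed
      ((@last_confirmed_tag + 1)..delivery_tag).each do |tag|
        callback.call(tag, false, nack)
      end
    else
      callback.call(delivery_tag, false, nack)
    end
    @last_confirmed_tag = delivery_tag if delivery_tag > @last_confirmed_tag
  end

  # Reset on connection recovery, when the broker starts tags over
  def reset!
    @last_confirmed_tag = 0
  end
end
```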
Hi @michaelklishin, are you interested in a PR with slightly more complex logic in …
I know when RabbitMQ would confirm N messages at once: when there are several of them confirmed by the target queues since the last time a delivery tag was sent out. So indeed it would take a certain ingress message rate. I would be interested in a PR that makes handling of …
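As a side note, a rough way to observe `multiple = true` confirms is to publish fast enough that the broker batches acks. The sketch below assumes a local broker and that the confirm callback can be registered as a callable taking (delivery_tag, multiple, nack), as discussed in this issue; the queue name, message count, and exact registration form are assumptions and may differ by Bunny version.

```ruby
# Rough reproduction sketch: publish quickly so the broker batches confirms,
# then watch for callbacks where multiple is true.
require "bunny"

conn = Bunny.new
conn.start

ch = conn.create_channel
q  = ch.queue("bunny.issue615.confirms", auto_delete: true)

calls = 0
confirm_handler = lambda do |delivery_tag, multiple, nack|
  calls += 1
  puts "tag=#{delivery_tag} multiple=#{multiple} nack=#{nack}"
end
ch.confirm_select(confirm_handler) # registration form may vary by version

16_000.times { ch.default_exchange.publish("x", routing_key: q.name) }
ch.wait_for_confirms

puts "callback invoked #{calls} times for 16000 publishes"
conn.close
```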
Don't reconfirm already acked messages when multiple is true. Fixes ruby-amqp#615
Morning, @michaelklishin, please give us some comments.
I wrote some benchmarks and ran into some strange behaviour:
In confirm_select mode the callback is sometimes called with an already acked delivery tag. I started to investigate and found this code:
https://github.com/ruby-amqp/bunny/blob/master/lib/bunny/channel.rb#L1792
When RabbitMQ sends me a confirm with `multiple` equal to true, `confirmed_range_start` is set to `@delivery_tag_offset + 1`. But `@delivery_tag_offset` is always 0 when there have been no network recoveries! So I get results like these:
Sent 16000 very small messages.
The confirm_select callback was called 119567 times, i.e. each delivery_tag was acked ~9 times.
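To make the over-counting concrete, here is a small illustration (not Bunny's actual code): if the lower bound of the confirmed range is always `@delivery_tag_offset + 1`, i.e. 1 when there have been no recoveries, every `multiple = true` ack replays all earlier tags again.

```ruby
# Illustration only: simulate multiple = true acks arriving at tags 4, 8 and 12
# with a range that always starts at delivery_tag_offset + 1 (here: 1).
delivery_tag_offset = 0
acks = [4, 8, 12]

callback_calls = 0
acks.each do |delivery_tag|
  confirmed_range_start = delivery_tag_offset + 1 # stays 1 without recoveries
  (confirmed_range_start..delivery_tag).each { callback_calls += 1 }
end

puts callback_calls # => 24 callback invocations for only 12 distinct tags
```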
Which workaround can I use? The main fix I see is to pass the tag|multiple|nack triplet to the user's callback unchanged, avoiding the useless work of iterating over tags from the beginning again and again.
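One possible user-side workaround until this is fixed: deduplicate replayed tags in your own confirm callback by remembering the highest tag already processed. This is only a sketch under the same callback assumptions as above; the variable names are illustrative.

```ruby
require "bunny"

conn = Bunny.new
conn.start
ch = conn.create_channel

processed_up_to = 0
mutex = Mutex.new

# Ignore any delivery tag we have already handled; only act on the new range.
confirm_handler = lambda do |delivery_tag, multiple, nack|
  mutex.synchronize do
    next if delivery_tag <= processed_up_to # replayed tag, skip it
    # real confirm handling for (processed_up_to + 1)..delivery_tag goes here
    processed_up_to = delivery_tag
  end
end

ch.confirm_select(confirm_handler) # registration form may vary by Bunny version
```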