[5.x] Adding scaling to balance = false #1473
Conversation
👍🏻 This is nice: this way we don't get bloated with too many workers, but we can still make our queues more granular for direct visibility into each of them.
What is the main difference between this and […]
Well, […]
Gotcha. I do wonder if we really need a new strategy for this if […]
Well, while this is true, there are several benefits of not spending the memory just to run workers unnecessarily: […]
This also brings queue-priority functionality, since it processes the queues in order. I think the added complexity vs. functionality trade-off will be worth it to more people. I'm already using this strategy in my project, and just switching to it reduced memory usage by 30% in my case... Of course that will vary depending on how your queues are designed. Another thing that would help here, which was discussed in other issues, is starting workers on […]
Should this just be the default behavior of […]?
Well, that is what I expected TBH... I agree, but isn't it a "BREAKING CHANGE"? It won't break anything, but it will change the behavior for everyone who uses it. I'm totally OK with that, or we can come up with a different name...
I'm not sure it's a breaking change. We never actually documented the scaling behavior of […]
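For context, a minimal sketch of what the discussed behavior change means in a Horizon supervisor config. The values and supervisor name here are illustrative assumptions, not taken from this thread; only the keys themselves appear elsewhere in the conversation:

```php
<?php

// config/horizon.php (fragment, hypothetical example).
// With this PR, a supervisor using 'balance' => false scales its single
// worker pool between minProcesses and maxProcesses based on load,
// instead of always running maxProcesses. All listed queues are still
// consumed by the same pool, in the order given.
return [
    'environments' => [
        'production' => [
            'supervisor-1' => [
                'connection' => 'redis',
                'queue' => ['urgent', 'high', 'default'], // processed in this priority order
                'balance' => false,   // no per-queue balancing; one pool for all queues
                'minProcesses' => 1,  // scale down to this when queues are quiet
                'maxProcesses' => 10, // scale up to this under load
            ],
        ],
    ],
];
```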
Well, makes sense... I will try to change the PR later today, or tomorrow at the latest.
Thanks! Just mark it as ready for review when you want me to take another look.
@taylorotwell updated 👍
Thanks
It might have been wise to rename the PR, because now the changelog on the tag talks about a new single balance strategy that doesn't exist: https://github.com/laravel/horizon/releases/tag/v5.26.0
I renamed the PR, but I don't think it will update the release.
I updated all the references. Thanks
A bit late to the party here, but this seems to have broken my use of Horizon with balance = false. I have a fairly large application which sometimes handles upwards of 200k jobs per hour on a single VPS. After updating Horizon to >= 5.26.0 it suddenly stopped processing jobs (or processed them EXTREMELY slowly). Downgrading to 5.25.0 and everything is back to normal again. This is my setup:

```php
'environments' => [
    'production' => [
        'supervisor-live' => [
            'connection' => 'redis-live-battles',
            'queue' => ['live'],
            'balance' => 'false',
            'minProcesses' => 1,
            'maxProcesses' => 5,
            'balanceMaxShift' => 15,
            'balanceCooldown' => 1,
            'tries' => 3,
            'timeout' => 80,
        ],
        'supervisor-default' => [
            'connection' => 'redis',
            'queue' => ['urgent', 'high', 'default', 'low'],
            'balance' => 'false',
            'minProcesses' => 1,
            'maxProcesses' => 50,
            'balanceMaxShift' => 15,
            'balanceCooldown' => 1,
            'tries' => 3,
            'timeout' => 80,
        ],
    ],
],
```

Unfortunately I am not familiar enough with the internals of Horizon to debug this myself, but I've narrowed it down to this PR. This is what my Horizon dashboard looks like after deploying >= 5.26.0: basically, jobs just keep piling up without being processed across the different queues.
Well, I also have a fairly big project and haven't noticed any difference in performance, processing ~15k jobs as usual... Did you check the job throughput? Is it different between versions? TBH I can't see a reason for a performance impact; the only behavioral difference is that it scales now. Before, it would always sit at 50 processes in your case; now it will scale from 1 to 50, but the way it processes jobs is exactly the same. Looking at your screenshot, it seems it scaled properly and is using 50 processes... I would look at the job throughput / runtime, as the jobs themselves may be different.
Just to add another comment: this affected our Horizon workers because we had an old config file where only […]
Horizon treats […] (see horizon/src/ProvisioningPlan.php, line 174 at commit 4d021d2).
So basically it enabled scaling for all of these workers, which previously didn't scale. It didn't break anything for us; it just took us a couple of weeks to notice that our queues were processing jobs a bit slower.
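If someone wants the old fixed-size behavior back without downgrading, one option, based purely on the scaling range described above (it is an assumption of this note, not something confirmed in the thread), is to pin the pool size by making minProcesses equal to maxProcesses:

```php
<?php

// Sketch: pin the worker count so autoscaling has no room to move.
// With min == max, the supervisor should hold a constant number of
// processes, approximating the pre-5.26.0 behavior of 'balance' => false.
// Values and supervisor name are illustrative.
return [
    'supervisor-default' => [
        'connection' => 'redis',
        'queue' => ['urgent', 'high', 'default', 'low'],
        'balance' => false,
        'minProcesses' => 50, // same as maxProcesses: no scaling range
        'maxProcesses' => 50,
    ],
];
```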
Just jumping in here: I tried out setting […]
I'm at a loss as to what to do here. As soon as I upgrade to >= 5.26.0, Horizon basically stops processing jobs with balance = false. It works flawlessly with 5.25.0. This also currently stops me from updating laravel/framework, since Horizon 5.25.0 gives the following error with Laravel >= 11.27.0: `Declaration of Laravel\Horizon\RedisQueue::pop must be compatible with Illuminate\Queue\RedisQueue::pop`. Any ideas? :)
@vilhelmjosander A few things to consider: […]
I noticed this on my own platform. To try to illustrate it better: if you have 5 workers, each job takes 1 second, and new jobs fill the urgent queue as quickly as they're being processed, then no queue below it will get enough workers. Does this help?
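One way around that starvation pattern, echoing the "more granular queues" point earlier in the thread, is to give the hot queue its own supervisor so the lower-priority queues keep dedicated workers. The supervisor names and numbers below are illustrative assumptions, not configuration from this thread:

```php
<?php

// Sketch: isolate the busy 'urgent' queue so it cannot starve the others.
// Each supervisor gets its own worker pool; 'supervisor-rest' still
// consumes its queues in priority order among themselves.
return [
    'supervisor-urgent' => [
        'connection' => 'redis',
        'queue' => ['urgent'],
        'balance' => false,
        'minProcesses' => 1,
        'maxProcesses' => 5,
    ],
    'supervisor-rest' => [
        'connection' => 'redis',
        'queue' => ['high', 'default', 'low'], // priority-ordered within this pool
        'balance' => false,
        'minProcesses' => 1,
        'maxProcesses' => 5,
    ],
];
```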
@vilhelmjosander it's hard to help without a way to reproduce. I tried to reproduce it here, played with the tests, and everything seems to be working as it should... Can you check the processes that are running, with […]? And again, are you sure it's not processing? Because as stated above, it will process in queue order: all urgent first, then high, then default, then low. Are none of the queues being processed, or only the […]? Of course we need to fix it if something is wrong, but have you tried changing the balance strategy? Maybe for your use case you do need to switch to simple or auto.
The only thing I noticed, which shouldn't impact anything (but who knows), is that these supervisors use different Redis connections. So when you run it, you only have the supervisor processes running and no actual workers? That's weird, as Horizon doesn't support scaling to 0 workers... Are you sure nothing shows up in your logs, errors or anything? It's really hard to help; I tried changing tests and trying to reproduce, without much luck. Maybe you can set up a reproduction repo so we can jump in.
This introduces a new strategy for balance: `single`.
What this does is: it still has scaling similar to `auto`, but instead of having one worker per queue, it keeps all queues in the same process, following the order specified, similar to what you would have using `queue:work --queue=a,b,c`.
TBH this was the behavior I expected when I started using Horizon a long time ago...
I'm using it in my project and thought it would be useful for others.
I can open a PR on the docs in case this gets accepted / merged.
Maybe the name isn't that clear; totally open to changing it.