Trying to access array offset on value of type null #924
Looks like Laravel is unable to JSON-decode the job on the queue. You may perhaps inspect it with `lrange prefixqueues:default 0 -1`. Replace `prefix` above with your Redis connection prefix.
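For anyone following along, the inspection above can be done with redis-cli (a sketch; the exact prefix depends on your `config/database.php`, and `laravel_database_` below is only an illustrative default):

```shell
# List the raw JSON payloads currently sitting on the default queue.
# Replace the prefix with your own Redis connection prefix.
redis-cli LRANGE laravel_database_queues:default 0 -1
```

A payload that does not decode as JSON here would explain the error in this issue.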
Also, please post the job that's causing this.
Hey there. Can you first please try one of the support channels below? If you can actually identify this as a bug, feel free to report back and I'll gladly help you out and re-open this issue. Thanks!
The issue appeared again today. Starting at 1am, I got these error messages:
I understand something is temporarily closing or locking my MySQL and Redis at this specific time, and I'm investigating that, but I need to restart.
@dimitribocquet If you end up finding a solution for this, I would love to hear about it. I am currently seeing the same issue with a production application of ours. @driesvints I noticed another closed issue that seems to be the same (#919). I've traced my error back to the `id()` method at Line 48 in b3fba0d
Unfortunately I am currently still running Laravel 7 and Horizon 5.4.0, but I can upgrade shortly and provide any details needed to help troubleshoot this issue. I apologize in advance for bringing this straight here, but I have not been able to find any good information in any of the other support channels!
@aap-tim I have no idea how or when this bug appears. Sometimes it's at 1am, sometimes at midnight, sometimes at 0:30am... Then I thought it was my custom backup shell scripts, so I disabled all of them, but the bug is still there. BTW, my Redis is password-protected; I don't know if that's relevant. For now, my hotfix is restarting.
@aap-tim I always dispatch my jobs with `SendPasswordUpdatedEmail::dispatch($user);`
I can confirm we also have this issue, and it happens nearly every time in our maintenance window when Redis is rebooted. Some of the workers get 'stuck' (while others gracefully recover) with exactly this error. So far we haven't found the root cause, but for us it seems to be related to shaky network connections or Redis disappearing briefly. Will update if we find anything new.
Just adding some info in case it's useful for debugging this issue. I can confirm the issue with Laravel 8.29.0 and Horizon 5.7.0. I'm using Redis 5.0.7 and PHP 7.4.15 with php-redis version 5.3.2. Not sure if useful, but we're using the phpredis driver, on a Redis server which is password-protected and hasn't been restarted in a long while. I recently set up Sentry and got 7.7K events for that error in the last 24h. The exception itself isn't really useful, but it looks to me like the jobs are somehow corrupted.
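To illustrate the failure mode being described (a minimal standalone sketch, not Horizon's actual code): when a payload read from Redis comes back truncated or empty, `json_decode()` returns `null` rather than throwing, and any subsequent array access on that value raises exactly this notice.

```php
<?php
// A corrupted/empty payload, as might come back after a Redis read timeout.
$raw = '';

// json_decode() yields null for invalid JSON instead of throwing.
$payload = json_decode($raw, true);

// On PHP 7.4+ this line emits:
// "Trying to access array offset on value of type null"
$id = $payload['id'];

var_dump($payload, $id); // NULL, NULL
```

This matches the symptom: the worker loops over a job it can never decode, so the notice repeats until the process is restarted.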
Also an update from our side: we have switched from accessing Redis directly to using an stunnel sidecar to offload SSL encryption. Since that change we have not encountered this issue, so in my opinion it points to some timing issue combined with a shaky network. (@j3j5, are you behind SSL or not?) But until we can reproduce this, we're sadly in the dark.
I'm not connecting through SSL; these are two different servers on the same internal network (one for the worker, one for the Redis queue). AFAICS our jobs are being processed properly, so this is more an annoyance than a worrying issue (apart from the fact that I almost burnt the Sentry quota in one day :P). I'm wondering whether it is related to phpredis/phpredis#1713. I'm going to try tweaking the timeout values and I'll report back. I've seen Laravel uses 0.0 as the default timeouts.
I'm happy to report that adding a timeout of 1.0 (I've added both keys, `timeout` and `read_timeout`) seems to have fixed it. The mystery remains as to why a timeout causes Horizon to fall into an endless loop trying to access a wrongly decoded job, but so far this is good enough for me. I'll report back if I ever see it again. I didn't try different values bigger than 0; I went with 1.0 because it's what the person on the phpredis issue I posted above used, but I may try testing with different values.
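For reference, here is a minimal sketch of where these keys live, assuming the phpredis driver and otherwise standard Laravel defaults (values shown are the ones discussed in this thread, not a recommendation):

```php
// config/database.php — Redis connection sketch.
// "timeout" bounds the initial TCP connect; "read_timeout" bounds each read.
'redis' => [
    'client' => env('REDIS_CLIENT', 'phpredis'),

    'default' => [
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', '6379'),
        'database' => env('REDIS_DB', '0'),
        'timeout' => 1.0,       // seconds; default 0.0 means wait forever
        'read_timeout' => 1.0,  // seconds per read; default 0.0 means wait forever
    ],
],
```

With the 0.0 defaults, a stalled connection never times out, which fits the "stuck worker" behavior reported above.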
I can confirm that we also have a timeout of 1 and a read_timeout of 5 seconds, and currently we're not seeing that error, but sadly I cannot say whether it disappeared because we set those values. It remains a mystery. Too many variables! :) Happy it is working for you, however. Long live the queues (with short-lived jobs).
@j3j5 So we can't have a job taking more than 1 second? To me, the purpose of dispatched jobs is to handle long-running tasks; that would defeat the point of jobs here.
@dimitribocquet No, those are Redis connection settings: the timeout for Redis calls to connect and respond. This has nothing to do with the actual job processing time (whose maximum depends on other factors).
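To make that distinction concrete, a hedged sketch: the Redis network timeouts live in `config/database.php`, while the maximum runtime of a job is governed by separate settings (for example the `timeout` option on a Horizon supervisor; names below follow the standard Laravel/Horizon config, adjust to your setup):

```php
// config/database.php — network-level timeouts for the Redis connection:
'default' => [
    // ...
    'timeout' => 1.0,      // seconds to establish the TCP connection
    'read_timeout' => 5.0, // seconds to wait for a single Redis reply
],

// config/horizon.php — how long one *job* may run (an unrelated setting):
'defaults' => [
    'supervisor-1' => [
        // ...
        'timeout' => 300, // a job may still run 5 minutes regardless of the above
    ],
],
```

A 1-second `read_timeout` only caps a single Redis command round-trip, not the work a job does between commands.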
@graemlourens My bad, I was developing in |
Looks like everything is working well, thank you @j3j5 and @graemlourens ! |
Hi, where did you set `timeout` and `read_timeout`? Thank you
@vlauciani In `config/database.php`, under your Redis connection.
Thank you! |
Hi, I know this is closed, but I have opened a ticket that redirected me here. I have the same issue, and I have added the timeout and read_timeout but I continue to get this error message once in a while. Supervisord is in a different datacenter than the Redis server, which could explain the problem, but the timeouts still seem to be occurring. Here is an extract of my database.php config:
The only way I can make the issue disappear for a while, after getting hundreds of error messages per hour, is to log into the server where supervisord is running and restart it. Maybe I am missing something, so hopefully someone will guide me in the right direction. Laravel ^8.0. Thanks
@MickaelTH Change the values of `timeout` and `read_timeout`.
@dimitribocquet Thanks for your suggestion, but before I try it, can you explain why my values are wrong first, please? Thanks
Just an update: I have changed the values to 1 and 5 but I still get the message:
Also, could this be related to jobs dispatched in the schedule via
We don't know why, but we know that too large a timeout causes Horizon to fall into an endless loop.
I have scheduled jobs and async jobs, and everything works fine for me (after I updated my timeout values).
Tried a lot of different values, and restarted supervisord of course, but I'm still getting the same error.
Supervisord is in the US and Redis in the UK; there is no issue when using telnet to test. If anyone has another idea, that would be very helpful.
We have experienced this issue before as well, but last night it reached new proportions, which made me want to search for a solution, and I ended up here. Last night the connection to multiple Redis clusters, hosted through RedisLabs, was lost for a short period of time, triggering this error basically every minute until I noticed and was able to restart the Horizon daemon this morning. We are starting investigations now, but at first sight this seems to occur for us when the daemon running Horizon cannot make a connection to Redis for whatever reason, and after that it gets stuck in this state. We have timeout and read_timeout both set to 5 in our config.
@markvdputten I tried a lot of combinations from a US server to a UK Redis server. This is my config so far; give it a try. I get fewer error messages in the logs (it was up to 5000+ per hour!). This is not a final solution, just a way to mitigate the issue.
After struggling with this issue for some months, I finally managed to resolve it based on my logs. The problem was caused by a queue without any jobs ever pushed onto it. I manually pushed a job onto the queue and I don't get any errors anymore. (In case you are wondering why I had a queue that was never used: it came from a staging environment where some features that used the aforementioned queue were never exercised.) I'm using Redis 6. I hope this info helps someone figure out the issue. I'm not familiar with the Horizon codebase, but I can try to navigate it and find a fix.
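A quick way to check for this condition is to see whether the queue key exists at all (a sketch; `laravel_database_queues:emails` is a hypothetical key, substitute your own prefix and queue name):

```shell
# 0 means no job was ever pushed onto this queue key.
redis-cli EXISTS laravel_database_queues:emails

# Pushing any job from the app creates the key, e.g. from `php artisan tinker`:
#   SomeJob::dispatch()->onQueue('emails');
```

If the key only appears after the first dispatch, that matches the "never-used queue" scenario described above.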
@MickaelTH Did you get any proper solution, or do you still have the same error after this setup?
@mira-thakkar I've been using this config for 2.5 months now and I got the error message from one server, one time only. I still believe that, in my architecture, this is not a Horizon-related problem but a timeout between a server in the US and Redis in the UK. Every config/infrastructure is different, so it is a matter of finding the correct values, I guess.
Thanks @MickaelTH |
This was basically my issue. But why were there no pushed jobs? Laravel uses Lua scripting with `eval` to push jobs onto the queue. You can check whether `eval` works on your connection like this:

```php
/** @var \Illuminate\Contracts\Redis\Factory $redis */
$redis = app(\Illuminate\Contracts\Redis\Factory::class);
$connection = $redis->connection();
$result = $connection->command('eval', ["return 1"]);
$error = $connection->client()->getLastError();
```

When it works, `$result` is `1` and `$error` is `null`. So all I had to do was to make `eval` work on my Redis server.
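The same check can be run outside PHP, directly against the server (a sketch; note that some managed Redis setups disable or rename `EVAL`, e.g. via `rename-command EVAL ""` in redis.conf):

```shell
# Verify that Lua scripting is available on the server itself.
redis-cli EVAL "return 1" 0
# A working server replies: (integer) 1
```

If this errors out, job pushes that go through Laravel's Lua scripts will silently fail, matching the empty-queue symptom above.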
Description:
Horizon works nicely for a few days, and then I wake up in the morning to see that all my jobs have failed. When I check the logs, I get these:
config/horizon.php:
I also had this error on Laravel 7 / Horizon 4, so I upgraded to Laravel 8 / Horizon 5 hoping the error would disappear, in vain.