MaxMind GeoLite2 infinite download attempts #2021
Can you try opening a shell inside the container and running
Actually, the loop stopped today; the last record is from this afternoon.
Sometimes there are issues with some of their files, where the metadata Shlink uses to determine whether an update is needed is wrong. That's probably what happened this time: it likely caused your instance to think an update was needed on every new request, and to re-download the same file again and again. I tried to update the file on my own instance, but it skipped that version, and the most recent one seems to be fine. I'm going to try to verify whether this is the case.
Thank you. They moved us from 1,000 downloads a day to 30 on the free plan, so we must be careful.
Ouch! Do you have some link where this is explained? I would like to reference it from the docs. If I manage to confirm this was the problem, I'll try to find some way to mitigate it.
Here: https://comms.maxmind.com/daily-download-limit-decreasing-2
Just checked the file from the 9th of February, and the metadata is correct. Shlink should not have tried to download it over and over. The logic basically compares the GeoLite file's build time and checks if it's more than 35 days old, in which case it tries to download a new copy. This is done with concurrency in mind, so a lock is set until the download ends, to avoid multiple downloads in parallel. Other potential reasons for this to happen are that there was not enough disk space to decompress the file after downloading it, or perhaps an issue with the system date that made Shlink think it was in the future. I'll keep this open for now to see if I can think of some way to make the process more resilient.
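The staleness check described above can be sketched as follows. This is a minimal Python illustration (Shlink itself is PHP); the function name, the `build_epoch` parameter, and the 35-day constant are taken from the description in this thread, not from Shlink's actual code:

```python
import os
import time
from typing import Optional

MAX_AGE_DAYS = 35  # threshold mentioned above

def db_needs_update(db_path: str, build_epoch: Optional[int]) -> bool:
    """Return True when a fresh GeoLite copy should be downloaded.

    `build_epoch` stands in for the build timestamp read from the
    database metadata; names and signature are illustrative only.
    """
    if build_epoch is None or not os.path.exists(db_path):
        return True  # no database at all: must download
    age_days = (time.time() - build_epoch) / 86400
    return age_days > MAX_AGE_DAYS
```

In the real flow, a positive result would additionally acquire a lock before downloading, so concurrent requests do not each trigger their own download.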
Got the same bug last week, and I also received a download-limit-reached notification from MaxMind.
In my case the server has enough disk space to handle the file. I've restarted the Shlink service to see if that works.
Could any of you check if your instances have some log entry starting with
Here's the log related to
Yeah, that's basically showing that Shlink successfully downloaded a new version of the database on every visit, until it reached the API limit, and then all the instances of Unfortunately, it does not explain why Shlink still thought a new copy needed to be downloaded when it already had a fresh one. The only solution I can think of is to change how Shlink decides when a new copy is needed. Potential options:
For context, the way it works now is that Shlink reads the database metadata for a value that tells when it was built. If a certain amount of days has passed (35, if I remember correctly), or the database does not exist at all, it tries to download it. It is very straightforward, has very low impact, and keeps the GeoLite file as the single source of truth, which is convenient, but it is clearly not covering some particular scenario that I'm missing.
There was a new report of this issue, but in there it was mentioned that this was happening with orphan visits specifically. I checked the log provided here again, and I noticed there are many attempts to download the database as a result of an orphan visit. I also see some attempts which do not seem to be linked to a particular request happening instants before them, though. @sparanoid could it be that you have some scheduled task to periodically download the GeoLite file, or that the logs were manipulated to remove sensitive information?
I haven't looked too closely at the code, but it appears that you download the file to a temporary location and then copy it to the final one. This could result in a corrupted file if multiple requests run at once. To prevent this, you could either write the file atomically or take appropriate locks (preferably both). To write the file atomically, download it into the same directory as the final file (to ensure both are on the same file system), decompress it, and then rename it to the final file name. You would also want either to take a lock ensuring no other request writes to the same temporary files at the same time, or to use random names for the temporary files. Some other thoughts:
Edit: I was looking at the code in `shlink/module/CLI/src/GeoLite/GeolocationDbUpdater.php`, lines 41 to 49 at e244b2d.
I didn't look into how that locking works, but presumably it prevents multiple downloads at once.
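The atomic-install pattern suggested above (decompress into a temp file in the destination directory, then rename) can be sketched like this. This is a hedged Python illustration, not Shlink's actual code; the function and file names are hypothetical:

```python
import gzip
import os
import shutil
import tempfile

def install_db_atomically(downloaded_gz: str, final_path: str) -> None:
    """Decompress the downloaded archive into a temp file created in the
    same directory as the final path, then rename it into place.

    os.replace() is atomic when source and destination are on the same
    filesystem, so readers never observe a partially written database.
    """
    dest_dir = os.path.dirname(final_path) or "."
    # mkstemp gives a random name, avoiding collisions between
    # concurrent requests even without an explicit lock.
    fd, tmp_path = tempfile.mkstemp(dir=dest_dir, suffix=".tmp")
    try:
        with os.fdopen(fd, "wb") as out, gzip.open(downloaded_gz, "rb") as src:
            shutil.copyfileobj(src, out)
        os.replace(tmp_path, final_path)  # atomic rename into place
    except BaseException:
        if os.path.exists(tmp_path):
            os.unlink(tmp_path)  # clean up the partial temp file
        raise
```

Combined with a download lock, this ensures a reader either sees the old complete database or the new complete one, never a half-written file.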
I'm having the same problem with my instance, which just started happening in the last few days. |
Yes, that's correct. That lock prevents multiple downloads in parallel.
I have a suspicion of what the problem could be. There might be some stateful service somewhere down the dependency tree that's keeping a reference to the old database file's metadata, making every check conclude that the file is too old and triggering a new download.
@oschwald, answering your comments:
This is exactly how it's done.
I thought about this, but the clock would have to be several days off, so I think it's a negligible risk. If someone really has such a badly skewed system time, I think it's reasonable to expect the admins to fix that, rather than expect Shlink to work around the problem. Ultimately, any solution that avoids making a lot of MaxMind API requests would be time-based one way or another, so there's not much that can be done here.
Then nothing can be done and GeoLite files won't be downloaded. It's an unfortunate limitation of how GeoLite db files work. In any case, this already happened not long ago. The solution involved making sure Shlink only tries to write to its own
I can confirm this is the problem. There's an unintentional stateful service that reads the GeoLite file metadata when it is created and holds it in memory, making every check think the database is too old. This is affecting all versions of Shlink, so I will try to backport the fix to v3.x if it's not too complex.
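The bug pattern confirmed above, and its fix, can be sketched in a few lines. This is an illustrative Python sketch, not Shlink's actual classes; `read_build_time` stands in for reading the GeoLite metadata and here simply uses the file's mtime:

```python
import os

def read_build_time(db_path: str) -> float:
    # Stand-in for reading the GeoLite build timestamp from the
    # database metadata; file mtime is used purely for illustration.
    return os.path.getmtime(db_path)

class CachedMetadataChecker:
    """The bug pattern: the build time is read once at construction and
    cached, so even after a successful download the checker keeps seeing
    the old timestamp and keeps concluding an update is needed.
    """
    def __init__(self, db_path: str):
        self.build_time = read_build_time(db_path)  # cached forever: the bug

class FreshMetadataChecker:
    """The fix: re-read the metadata on every check."""
    def __init__(self, db_path: str):
        self._db_path = db_path

    def build_time(self) -> float:
        return read_build_time(self._db_path)  # fresh read each time
```

With the cached variant, a long-lived service process never notices that the database file was replaced, which matches the infinite-download behavior reported in this issue.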
I have just released versions 4.0.2 and 3.7.4, both including the fix for this bug.
Shlink version
3.7.3
PHP version
8.2
How do you serve Shlink
Docker image
Database engine
MariaDB
Database version
10.3.23
Current behavior
Hi, I started a container on January 6th (rootless, running 3.7.3 from the c70cf1b37087581cfcb7963d74d6c13fbee8555a7b10aa4af0493e70ade41202 Docker image) and it worked well until the MaxMind monthly renewal on February 9th.
Here are the logs from January 6th until the successful download of the initial GeoIP database,
and then the next relevant logs are:
We were over 2,000 downloads a day of GeoLite2-City before receiving a warning from MaxMind as early as 5:00 in the morning:
From the https://www.maxmind.com/ download history, for the very first occurrence:
NB: as of February 22nd, 16:54 Paris time, I will not restart the container until 23:00 Paris time, in case you need more logs and data. To avoid the bug locking out our MaxMind account for February 23rd, I will stop and spin up a new container before midnight.
Expected behavior
A successful download of the MaxMind DB.
How to reproduce
Run the container with MaxMind set up for over 30 days.