Recently, we did secondary development based on ChartMuseum to support OBS, an object storage system from HUAWEI Cloud that is very similar to Amazon S3.
We then found that after we pushed hundreds of charts to ChartMuseum, requests to 'getChart' or 'getChartVersion' took a very long time (over 2 seconds, which badly hurts UX) or timed out.
By checking ChartMuseum's logs, we discovered that ChartMuseum was traversing all the charts and regenerating index-cache.yaml.
After further investigation, we concluded the cause: 'GetObject' and 'ListObjects' return last-modified times with different precision, and ChartMuseum decides whether a chart has been updated by comparing those timestamps. Because the timestamps never match exactly, ChartMuseum always considers every chart updated and traverses all the charts to regenerate index-cache.yaml.
In OBS, ListObjects returns a time accurate to milliseconds (RFC3339Nano), while GetObject returns a time accurate only to seconds (RFC1123).
We then checked the documentation of OBS and Amazon S3 and found that both storage systems always return RFC3339Nano timestamps for ListObjects requests and RFC1123 timestamps for GetObject requests.
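Here is a minimal Go sketch of why the comparison always fails (the timestamps are illustrative, and this is not ChartMuseum's actual code):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// The same object, as reported by the two APIs:
	// ListObjects gives sub-second precision (RFC3339Nano),
	// GetObject gives whole seconds only (RFC1123).
	listed, _ := time.Parse(time.RFC3339Nano, "2021-02-22T12:44:48.340Z")
	fetched, _ := time.Parse(time.RFC1123, "Mon, 22 Feb 2021 12:44:48 GMT")

	// A strict equality check fails by 340ms, so a cache keyed on
	// LastModified looks stale on every request.
	fmt.Println(listed.Equal(fetched)) // false

	// Truncating to whole seconds (which is effectively what a 1s
	// tolerance does) makes the two values compare equal again.
	fmt.Println(listed.Truncate(time.Second).Equal(fetched)) // true
}
```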
We found we can set "storage.timestamptolerance" to work around this bug. However, we run ChartMuseum in a multi-instance setup, and we assumed we had to keep "storage.timestamptolerance" at zero to guarantee data accuracy.
Both methods return the object's MD5 checksum as the ETag.
So comparing file MD5s instead of timestamps might be a better way to judge whether a chart has been updated (see the sketch below).
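For illustration, an ETag-based check could look roughly like this. Note that objectMeta and hasChanged are hypothetical names, not ChartMuseum's actual types, and that S3-style ETags equal the body MD5 only for non-multipart uploads:

```go
package main

import (
	"fmt"
	"time"
)

// objectMeta is a hypothetical view of the metadata both APIs return.
type objectMeta struct {
	Key          string
	ETag         string // MD5 of the object body (non-multipart uploads)
	LastModified time.Time
}

// hasChanged reports whether the stored chart differs from the cached one.
// Comparing ETags sidesteps timestamp-precision differences entirely.
func hasChanged(cached, stored objectMeta) bool {
	if cached.ETag != "" && stored.ETag != "" {
		return cached.ETag != stored.ETag
	}
	// Fall back to timestamps when an ETag is unavailable.
	return !cached.LastModified.Equal(stored.LastModified)
}

func main() {
	a := objectMeta{Key: "charts/test-chart6-0.1.0.tgz", ETag: "9e107d9d..."}
	b := objectMeta{Key: "charts/test-chart6-0.1.0.tgz", ETag: "9e107d9d..."}
	fmt.Println(hasChanged(a, b)) // false: same content, timestamps ignored
}
```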
Here are ChartMuseum's logs:
{"L":"DEBUG","T":"2021-02-22T12:44:48.340Z","M":"[7] Change detected between cache and storage","repo":"hundred_repo","reqID":"160a6029-b3fb-419d-9440-777d91b297f4"}
{"L":"DEBUG","T":"2021-02-22T12:44:48.340Z","M":"[7] Regenerating index.yaml","repo":"hundred_repo","reqID":"160a6029-b3fb-419d-9440-777d91b297f4"}
{"L":"DEBUG","T":"2021-02-22T12:44:48.436Z","M":"[7] Updating chart in index","repo":"hundred_repo","name":"test-chart6","version":"0.1.0","reqID":"160a6029-b3fb-419d-9440-777d91b297f4"}
{"L":"DEBUG","T":"2021-02-22T12:44:48.549Z","M":"[7] Updating chart in index","repo":"hundred_repo","name":"test-chart60","version":"0.1.0","reqID":"160a6029-b3fb-419d-9440-777d91b297f4"}
{"L":"DEBUG","T":"2021-02-22T12:44:48.623Z","M":"[7] Updating chart in index","repo":"hundred_repo","name":"test-chart69","version":"0.1.0","reqID":"160a6029-b3fb-419d-9440-777d91b297f4"}
{"L":"DEBUG","T":"2021-02-22T12:44:48.699Z","M":"[7] Updating chart in index","repo":"hundred_repo","name":"test-chart72","version":"0.1.0","reqID":"160a6029-b3fb-419d-9440-777d91b297f4"}
{"L":"DEBUG","T":"2021-02-22T12:44:48.848Z","M":"[7] Updating chart in index","repo":"hundred_repo","name":"test-chart79","version":"0.1.0","reqID":"160a6029-b3fb-419d-9440-777d91b297f4"}
{"L":"DEBUG","T":"2021-02-22T12:44:48.940Z","M":"[7] Updating chart in index","repo":"hundred_repo","name":"test-chart85","version":"0.1.0","reqID":"160a6029-b3fb-419d-9440-777d91b297f4"}
{"L":"DEBUG","T":"2021-02-22T12:44:49.024Z","M":"[7] Updating chart in index","repo":"hundred_repo","name":"test-chart88","version":"0.1.0","reqID":"160a6029-b3fb-419d-9440-777d91b297f4"}
{"L":"DEBUG","T":"2021-02-22T12:44:49.120Z","M":"[7] Updating chart in index","repo":"hundred_repo","name":"test-chart58","version":"0.1.0","reqID":"160a6029-b3fb-419d-9440-777d91b297f4"}
{"L":"DEBUG","T":"2021-02-22T12:44:49.195Z","M":"[7] Updating chart in index","repo":"hundred_repo","name":"test-chart92","version":"0.1.0","reqID":"160a6029-b3fb-419d-9440-777d91b297f4"}
{"L":"DEBUG","T":"2021-02-22T12:44:49.282Z","M":"[7] Updating chart in index","repo":"hundred_repo","name":"test-chart96","version":"0.1.0","reqID":"160a6029-b3fb-419d-9440-777d91b297f4"}
{"L":"DEBUG","T":"2021-02-22T12:44:49.398Z","M":"[7] Updating chart in index","repo":"hundred_repo","name":"test-chart99","version":"0.1.0","reqID":"160a6029-b3fb-419d-9440-777d91b297f4"}
{"L":"DEBUG","T":"2021-02-22T12:44:49.494Z","M":"[7] Updating chart in index","repo":"hundred_repo","name":"test-chart90","version":"0.1.0","reqID":"160a6029-b3fb-419d-9440-777d91b297f4"}
{"L":"DEBUG","T":"2021-02-22T12:44:49.583Z","M":"[7] Updating chart in index","repo":"hundred_repo","name":"test-chart51","version":"0.1.0","reqID":"160a6029-b3fb-419d-9440-777d91b297f4"}
{"L":"DEBUG","T":"2021-02-22T12:44:49.684Z","M":"[7] Updating chart in index","repo":"hundred_repo","name":"test-chart56","version":"0.1.0","reqID":"160a6029-b3fb-419d-9440-777d91b297f4"}
…
…
And the request took 9.68 seconds.
Hello @Ericwww - this is a known issue; please see #152.
You can solve it by using the --storage-timestamp-tolerance flag. For example, to round to the nearest second, you could use --storage-timestamp-tolerance=1s. Please let us know if this fixes your problem.
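For reference, a full invocation might look like this (the storage flags shown are standard ChartMuseum options; the bucket and region values are just examples):

```
chartmuseum --debug --port=8080 \
  --storage="amazon" \
  --storage-amazon-bucket="my-charts" \
  --storage-amazon-region="us-east-1" \
  --storage-timestamp-tolerance=1s
```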
@jdolitsky Thanks for your reply. We confirmed that the --storage-timestamp-tolerance flag helps. At first, we thought the flag had to be zero to keep the data in storage consistent with the cache, but after reviewing the code we believe a nonzero tolerance does not affect that. We will keep running ChartMuseum with --storage-timestamp-tolerance=1s while keeping an eye on consistency between storage and cache.
I am going to close this issue once you have seen this comment.