[Enhancement] configure S3 client rename_file operation timeout #48860
Conversation
It's better to change the configuration for all time-consuming S3 operations.
config::object_storage_rename_file_request_timeout_ms = -1;
config::object_storage_request_timeout_ms = 1000;
Remember to reset the config var to its default value so that other test cases are not affected, in case their correctness depends on the default config value.
Updated
Why I'm doing: Currently, StarRocks uses object_storage_request_timeout_ms as the S3 client request timeout. This value may be too small in some cases, especially for the rename_file operation on large files (1 GB, for example), and timeouts may happen as a result. What I'm doing: We introduce a new BE/CN config called object_storage_rename_file_request_timeout_ms, used as follows: 1. If object_storage_rename_file_request_timeout_ms >= 0, it is used as the request timeout for the rename_file operation in S3. 2. If object_storage_rename_file_request_timeout_ms < 0, the timeout for the rename_file operation in S3 is determined by object_storage_request_timeout_ms. Signed-off-by: srlch <[email protected]>
Signed-off-by: srlch <[email protected]>
[FE Incremental Coverage Report] ✅ pass : 0 / 0 (0%)
[BE Incremental Coverage Report] ❌ fail : 7 / 12 (58.33%) file detail
https://github.com/Mergifyio backport branch-3.3
✅ Backports have been created
Signed-off-by: srlch <[email protected]> (cherry picked from commit d2aed64)
…port #48860) (#49706) Co-authored-by: srlch <[email protected]>
Why I'm doing:
Currently, StarRocks uses object_storage_request_timeout_ms as the S3 client request timeout. This value may be too small in some cases, especially when the rename_file operation handles large files (1 GB, for example), and timeouts may happen as a result.
What I'm doing:
We introduce a new BE/CN config called object_storage_rename_file_request_timeout_ms, and it is used as follows:
1. If object_storage_rename_file_request_timeout_ms >= 0, it is used as the request timeout for the rename_file operation in S3.
2. If object_storage_rename_file_request_timeout_ms < 0, the timeout for the rename_file operation in S3 is determined by object_storage_request_timeout_ms.
Fixes #issue
What type of PR is this:
Does this PR entail a change in behavior?
If yes, please specify the type of change:
Checklist:
Bugfix cherry-pick branch check: