aws s3 cp - large multipart uploads not uploading #454
Comments
Seeing the same problem here. File uploads <5 GB seem to work fine. Above that size, the same error message: A client error (RequestTimeout) occurred: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.
Hello jimr550, Thanks for verifying, I was starting to think it might just be us. What's interesting is that if I run the command in the foreground rather than from cron it seems to work. We're now running aws-cli/1.2.3 and still seeing the same problem.
Interesting ... it seems it may be a more widespread issue with s3/aws: aws/aws-sdk-ruby#241. We're getting this in our cron output:
There are more RequestTimeout messages and then the process ends. This comment seems to be an explanation: aws/aws-sdk-ruby#241 (comment): "Amazon S3 is closing the http connection because it waited too long for data. This is caused when a request indicates the content-length will be X bytes, but then sends fewer. S3 keeps waiting for the remainder bytes that aren't coming and then eventually kills the connection." That seems to be backed up by the error/cron output. Is there a timeout that can be adjusted somewhere?
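To make the quoted explanation concrete, here is a minimal Python sketch of the mismatch it describes. The endpoint, key, and sizes are placeholders, and in practice an unsigned request would be rejected for authentication reasons; the point is only that a request promising more bytes than it sends leaves the server waiting:

```python
# Minimal sketch (placeholder host/key, not a real bucket) of the failure
# mode described above: declare a Content-Length of 1 MiB but send only
# half of it. The server keeps waiting for the missing bytes and
# eventually closes the connection with a timeout.
import http.client

conn = http.client.HTTPSConnection("example-bucket.s3.amazonaws.com")
conn.putrequest("PUT", "/large-object.bin")
conn.putheader("Content-Length", str(1024 * 1024))  # promise 1 MiB ...
conn.endheaders()
conn.send(b"x" * (512 * 1024))                      # ... send only 512 KiB

# The server now waits for the remaining bytes. When it gives up, the
# result is a 400 RequestTimeout like the one quoted in this thread.
response = conn.getresponse()
print(response.status, response.reason)
```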
Looking into this. Is this consistently happening with >5 GB files? Also, where are you running the s3 command? If it's on an ec2 instance, could you share what instance type you're using? I'm trying to repro this and want to make sure I'm simulating an environment as close as possible to yours.
I'm running the latest release of the aws cli on Mac OS X 10.9 Server. The file that fails is a 30 GB binary. It is quite reproducible, either using cp or sync. The same machine/software installation is able to sync a 30 GB local folder filled with 600 MB binaries without difficulty, in a single operation. Whatever is happening, it does not appear related to the overall amount of data transferred; it seems confined to the size of a single object. Sorry I can't be more helpful. Not presently using ec2. Jim
I'm running CentOS 6.4 and aws-cli/1.2.3 and trying to upload a 19 GB file. From the handful of times I've run the command in the foreground it runs OK, but when left as a cron job it fails every time. I'm attempting to sync with the Ireland-based server at around 6am each day, if that helps.
This should now be fixed (via boto/botocore#172): we now retry the 400 RequestTimeout errors. I've verified that I'm able to upload large files on various network types (low bandwidth, high packet loss, etc.) without issue.
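For readers curious what that fix amounts to, the shape of the change is roughly the following. This is a simplified sketch, not the actual botocore patch; `upload_part` here is a hypothetical stand-in for whatever performs the HTTP PUT of one part:

```python
import time

MAX_ATTEMPTS = 5

def upload_part_with_retries(upload_part, part_data):
    # upload_part is a hypothetical callable that PUTs one part and
    # raises an exception carrying the S3 error code on failure.
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            return upload_part(part_data)
        except Exception as error:
            code = getattr(error, "error_code", None)
            if code != "RequestTimeout" or attempt == MAX_ATTEMPTS:
                raise
            # Back off briefly, then resend the whole part from byte 0;
            # a single part is small enough that restarting it is cheap.
            time.sleep(0.1 * (2 ** attempt))
```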
Hello,
We're running the following command as a cron job:
However, the files are never reaching their destination; we're getting a few messages like this:
A client error (RequestTimeout) occurred: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.
and the output finishes like this:
Completed 680 of 680 part(s) with -12 file(s) remaining
Files never appear in our bucket and the multipart upload is removed (quite rightly since we don't want to be paying for failed part-uploads).
I've just updated to aws-cli/1.2.2, but I couldn't see anything in the changelog that might hint at a resolution. I'm not sure what's going on exactly, but it seems that some parts are not uploading or are timing out, and while aws is keeping track of them it's not retrying the parts before destroying the multipart upload.
Any thoughts?
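For context on why the files never appear, here is a rough sketch of the multipart lifecycle the CLI drives internally, written against the modern boto3 client API (which postdates this thread); the bucket, key, and file names are placeholders. Before the retry fix, a RequestTimeout on any single part surfaced as a failure, and the abort at the end discards every part already uploaded:

```python
import boto3  # modern SDK; this thread predates boto3

s3 = boto3.client("s3")
bucket, key = "example-bucket", "backups/dump.bin"  # placeholders
part_size = 8 * 1024 * 1024  # the CLI's default multipart chunk size

upload = s3.create_multipart_upload(Bucket=bucket, Key=key)
parts = []
try:
    with open("dump.bin", "rb") as f:  # placeholder local file
        part_number = 1
        while True:
            chunk = f.read(part_size)
            if not chunk:
                break
            # A RequestTimeout raised here is what the botocore fix now
            # retries; before the fix it bubbled up as a failed part.
            result = s3.upload_part(
                Bucket=bucket, Key=key, UploadId=upload["UploadId"],
                PartNumber=part_number, Body=chunk,
            )
            parts.append({"ETag": result["ETag"],
                          "PartNumber": part_number})
            part_number += 1
    s3.complete_multipart_upload(
        Bucket=bucket, Key=key, UploadId=upload["UploadId"],
        MultipartUpload={"Parts": parts},
    )
except Exception:
    # Abort so failed parts are not billed; this is also why the files
    # never appear in the bucket after a failed run.
    s3.abort_multipart_upload(
        Bucket=bucket, Key=key, UploadId=upload["UploadId"])
    raise
```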