Uploading files to S3 from EC2 instances fails on some instance types #634
Comments
Actually, that issue was for downloads; this one is for uploads, so it's likely a different issue. I'll need to look into this further. If you could provide a --debug log, that would help. Also, approximately how many 10M files are you uploading at once?
I'm also having issues uploading to S3, but from my laptop (OS X Mavericks). A 424KB file failed.
The error is the same as the original post:
I ran it with --debug, and here's the part that keeps on retrying:
We are using version 1.2.13. We have about 200 of these files to upload, but we are doing it in series. The problem always happens on the first one. Here is the debug log of the failure that you asked for: https://gist.github.com/benbc/8861033.
Is there any update on this issue? I'm seeing exactly the same issue trying to upload to a newly created bucket.
Yes, I believe this should be fixed now in version 1.3.0. The root cause, I believe, is the same as #544. Can you please try with the latest CLI version? If you're still seeing the issue, please attach --debug logs and I'll reopen the issue.
I'm still having this issue uploading to S3, both from my laptop and from a Debian server. The file is about 100MB. I'm uploading one file at a time, with the following versions:
Here is the issue I get:
Here is the debug log:
Still seeing this issue using
As others have reported, this is now working fine after the TemporaryRedirect code clears --
I'm still encountering this problem with version 1.3.4 on a new Ireland bucket.
Debug log: https://gist.github.com/anonymous/9814978
Same bug for me when using `aws s3 sync`. My bucket is in Ireland. Debug log: https://paste.buntux.org/?c511ef0a27305636#e1/B5RiyBuZq60LHmLNpZt59wz2zDyY2KVdjCM+mO7E= Edit: I tested on a bucket in the "US Standard" region --> no problem.
I'm reopening this issue. After debugging this I can confirm what others have said. This problem exists when trying to upload a large file to a newly created bucket that's not in the classic region.

From what I can tell the CLI is properly retrying requests and following 307 redirects. The problem is that the CLI sends the entire request and then waits for a response, whereas S3 will immediately send the 307 response before we've finished sending the body. Eventually it will just close the connection, and if we're still in the process of streaming the body, we will not see the response. Instead we get the ConnectionError shown in the various debug logs above.

The normal way to address this would be to use the expect 100-continue header. However, the HTTP client we use (requests) does not support this. There might be a way to work around this, but I'll need to do some digging into the requests library to see the best way to fix this issue. I'll update this issue when I know more.
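(For context on the mechanism discussed above: with `Expect: 100-continue`, the client sends the request headers first and waits for the server's interim reply before streaming the body, so an early 307 redirect or error is actually seen instead of a dropped connection. The snippet below is only a rough standard-library sketch of that handshake, not botocore's actual implementation; the host and key names are made up for illustration.)

```python
import select
from http.client import HTTPSConnection

def put_with_expect_100(host, key, body):
    """Minimal sketch: send a PUT with 'Expect: 100-continue' and only
    stream the body once the server has agreed (or stayed silent)."""
    conn = HTTPSConnection(host, timeout=60)
    conn.putrequest("PUT", "/" + key)
    conn.putheader("Content-Length", str(len(body)))
    conn.putheader("Expect", "100-continue")
    conn.endheaders()  # headers are on the wire; the body is NOT sent yet

    # Wait briefly for an interim response. S3 answers "100 Continue" if it
    # will accept the body, or replies immediately with an error/redirect
    # (e.g. 307 TemporaryRedirect) if not.
    readable, _, _ = select.select([conn.sock], [], [], 3.0)
    if readable:
        interim = conn.sock.recv(4096).decode("ascii", "replace")
        if "100 Continue" not in interim:
            conn.close()
            raise RuntimeError("Server rejected the upload early:\n" + interim)

    conn.send(body)  # safe to stream the payload now
    resp = conn.getresponse()
    return resp.status, resp.read()

# Hypothetical usage (bucket/key are placeholders):
# status, data = put_with_expect_100(
#     "my-new-bucket.s3.eu-west-1.amazonaws.com", "bigfile.bin", b"x" * 10_000_000)
```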
Any news?
My tests show that even when the boto library is failing, the AWS CLI works fine. What is the AWS CLI doing differently?
I don't think there's any way to work around this in the requests library. I think our best option (aside from writing our own HTTP client that supports expect 100-continue) is some specific workaround in the S3 code.
I've just hit this too. My file isn't large, just a few megabytes.
Getting this with a 323KB CSS file.
Working on a fix for this, will update when I have more info.
Getting the same issue when uploading a large file (3GB) to S3. Uploading a small file is just fine.
My aws version:
I'm getting a similar issue intermittently (about 1 file in every 20, files are around 500MB) using the requests library directly as we're working through our specific storage API. This is uploading from an EC2 instance.
Just an update, still working on this. The most viable approach so far has been to subclass the underlying HTTP connection class. I believe I have a potential fix, but it needs more testing. Will update when I have more info.
@jamesls as soon as you have a public branch to test, I'll be using it :)
Hi there, since it's not explicitly said in the thread: you can still use the `aws` command to upload files to another region by specifying the `--region` parameter, for example as shown below.
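For example (bucket name, file, and region here are placeholders):

```
aws s3 cp ./myfile.bin s3://my-eu-bucket/myfile.bin --region eu-west-1

# The same option works for sync
aws s3 sync ./local-dir s3://my-eu-bucket/backup --region eu-west-1
```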
@inedit00 Thanks! I had no idea that this was related to the problem.
The fix for this is here: boto/botocore#303. But I'd like to do more testing on this. I also want to see what sending the expect 100-continue header does to performance.
We are getting this error also when copying a large file (~3GB) from local to S3 (Ireland). Using aws-cli/1.3.17 Python/2.6.6 Linux/2.6.32-71.29.1.el6.x86_64 and botocore 0.51.0.
Just a quick update: the fix for this was in boto/botocore#303, but we caught a regression that affected only Python 2.6, so we had to revert the PR. I think it's still possible to fix this, but it will be slightly more invasive due to internal changes in httplib between Python 2.6 and later versions. I'll update this when I have more info.
I was able to fix the Python 2.6 issue in the previous PR for this fix. All of the tests are passing with this change, and I can confirm that it fixes the problem of not being able to upload large files to newly created non-standard-region S3 buckets. Please let me know if you're still seeing issues, and thanks for your patience while we resolved this.
Thanks for the fix @jamesls. Which awscli release is/will this fix be in?
This is available in the latest version of the CLI (v1.3.18). Also, if anyone needs to debug this in the future, you'll see debug logs related to the expect 100-continue header:
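If you need to capture those logs yourself, one common way (file and bucket names below are placeholders) is to redirect the `--debug` output, which goes to stderr, into a file:

```
aws s3 cp ./largefile.bin s3://my-bucket/largefile.bin --debug 2> upload-debug.log
```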
I believe this is still a problem per #823.
That looks like a different problem. The errno is different, in particular.
Got the same issue. Are we reaching a performance limit of AWS S3?
I'm experiencing the same issue. This thread is rather old. Is this issue still being investigated?
Faced the same issue
Ditto, +1. For my setup this was resolved by a setting in the .aws/config file for the profile I was using (see the illustration below). See here: https://docs.aws.amazon.com/cli/latest/topic/s3-config.html#
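The comment above doesn't say which setting was used, but for illustration, the linked s3-config page documents transfer-tuning options along these lines (profile name and values here are just examples, not recommendations):

```
[profile myprofile]
s3 =
  max_concurrent_requests = 4
  multipart_threshold = 64MB
  multipart_chunksize = 16MB
```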
I'm having the same issue as well when I try to upload a file to an S3 bucket using the boto3 Python package. It used to work fine until about 4 days ago, and since then I've been having problems with uploading.
I am having the same issue with a newly created bucket in the eu-west-1 region. Command used:
Is this problem related to the network I'm connected to? The upload was failing when I was connected via a 4G dongle, whereas when I was connected to my wifi router everything was fine. But the 4G dongle was giving me 10Mbps, so I'm not sure where the problem is.
@uldall If you haven't already solved this, for me this was fixed by installing the latest version of the aws-cli - the latest version on apt-get is outdated, so you must install using pip. Here are the instructions:
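For reference, the usual pip-based route looks roughly like this (assumes Python and pip are already installed; exact commands may vary by system):

```
pip install --upgrade --user awscli
aws --version
```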
On Ubuntu 18: https://linuxhint.com/install_aws_cli_ubuntu/
Hi, it seems I am still seeing this issue when downloading 10M small files with multithreading. aws-cli/1.16.182 Python/3.8.10 Linux/5.4.0-81-generic botocore/1.12.172
We have a very strange problem. We are uploading large (10M) files from an EC2 instance to an S3 bucket in the same region. We use `aws s3 cp` for the upload and instance roles for authentication. If we use an `m1.small` EC2 instance the upload works fine, but if we increase the instance size (we have tried `m1.large` and `m3.large`) the upload fails.

Here is the error we get:

This is completely reproducible -- we have never had such an upload succeed after tens of attempts. We have never seen the problem on an `m1.small` instance in hundreds of attempts.

We ran into this problem with 10M files. We have found that it is reproducible down to about 1M; much less than this and it works fine every time.

Any ideas about what is going on would be much appreciated.
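A minimal way to reproduce the setup described above from the instance (bucket name is a placeholder; assumes the instance role already grants write access to it):

```
# create a ~10MB file of random data and upload it
dd if=/dev/urandom of=test-10M.bin bs=1M count=10
aws s3 cp test-10M.bin s3://my-test-bucket/test-10M.bin
```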