Pod setup error when using AWS: "CredentialRequiresARNError" #1262
I am seeing exactly the same problem. This worked last week without a glitch. I am installing external-dns via the Helm chart. After downgrading the chart to the image 0.5.16-debian-9-r8, everything works again. Before that I was also seeing this error:
Ran into the same issue. @frittenlab the downgrade worked for me too, thanks. It's likely one of these PRs. Will do a bisect if I get time.
Ran into the same issue when installing the external-dns chart via Helm. Looking at the external-dns pod log gives: Can confirm the version from @ffledgling worked for me too. Kubernetes version is:

```
resource "helm_release" "project-external-dns" {
  name    = "external-dns"
  chart   = "stable/external-dns"
  version = "v2.6.1"
  ...
}
```
Can confirm this appears after Helm chart version 2.10.1.
Same here: it works with Helm chart 2.10.1 but not with anything later. Latest EKS version at the moment.
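Going by the workarounds reported above, the downgrade can be sketched as a single Helm command — the release name is illustrative, and the chart version and image tag are the known-good values mentioned in this thread:

```shell
# Pin the stable/external-dns chart to the last chart version reported as
# working (2.10.1); alternatively, override the image tag back to
# 0.5.16-debian-9-r8 as described in the first workaround above.
helm upgrade --install external-dns stable/external-dns \
  --version 2.10.1 \
  --set image.tag=0.5.16-debian-9-r8
```

This is a deployment command, not something verified here; adjust the release name, namespace, and values for your cluster.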
Looks like the S3 Terraform resource had a similar issue, and HashiCorp had to patch their aws-sdk wrapper to grab credentials differently. Issue with helpful details: hashicorp/aws-sdk-go-base#4. The ticket was reopened later: hashicorp/terraform#22732.

External DNS uses a command-line flag to assume the role. Where the flag is consumed and the role should be assumed: external-dns/provider/aws_sd.go, lines 104 to 107 at f763d2a.
Something worth trying: add the role ARN to the AWS config. :)

Lol, OK, so my deployment isn't even trying to assume a role.

OK, so there's a second block where it tries to do this, which is definitely broken: for some reason it doesn't evaluate to the first block, which would work. This block tries to use

OK, so here's how it happens: providing
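The "add the role ARN to the AWS config" suggestion refers to the AWS shared config file. A minimal sketch, with placeholder account ID, role name, and profile names (the aws-sdk-go error `CredentialRequiresARNError` is raised when a profile sets `source_profile` or `credential_source` without a `role_arn`):

```
# ~/.aws/config — illustrative values only
[profile external-dns]
role_arn       = arn:aws:iam::123456789012:role/external-dns
source_profile = default

[default]
region = us-east-1
```

Whether this applies depends on how external-dns's credential chain resolves in your setup, as the debugging notes above suggest.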
2.13.2 works!
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Apologies if this has been posted already, but I couldn't find much on this particular error condition, if anything. I'm having a difficult time setting up external-dns via Rancher within Kubernetes, and I've tried pretty much everything I know at this point. As you can see below [1, 2], I keep getting this error with my external-dns pods when trying to set up a configuration that makes use of AWS Route 53.

[1] - https://paste.gekkofyre.io/view/8e550193
[2] - https://imgur.com/a/5NYkR5R

You can also find my configuration at the link just below, with obvious private information omitted for security purposes.

[3] - https://paste.gekkofyre.io/view/5dbf5822#k1Qz8yTVKFOJL1s6y16uONn9l2fyikMh
Please note that we're making use of Rancher v2.3.2 to orchestrate our Kubernetes cluster, which currently consists of three nodes plus the Rancher controller. There is plenty of RAM, vCPUs, and storage to go around, so I can't see any of that being an issue. Our Kubernetes version is v1.16.2-00 across all nodes, and Docker itself is v5:18.09.9~3-0 as well. This is all running on the latest updated version of Ubuntu 18.04 LTS, which again is the same for all nodes and the Rancher controller itself. If anyone can offer assistance, that would be dearly appreciated. Thank you.

--
https://gekkofyre.io/
GekkoFyre Networks