
EKS 1.14, external-dns, and helm chart not working as expected with assume role #1227

Closed
bsakweson opened this issue Oct 12, 2019 · 7 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@bsakweson

I installed the kube2iam and external-dns helm charts on my EKS k8s v1.14 cluster following this tutorial, but I cannot get it to work as expected.

external-dns container logs:

time="2019-10-12T16:07:57Z" level=info msg="Created Kubernetes client https://10.100.0.1:443"
time="2019-10-12T16:07:59Z" level=error msg="NoCredentialProviders: no valid providers in chain. Deprecated.\n\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors"

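For context, the NoCredentialProviders error means every provider in the AWS SDK's default credential chain came up empty: environment variables, then the shared credentials file, then the instance metadata endpoint (which kube2iam is supposed to intercept). A rough Python sketch of that fallback order, using stand-in inputs rather than real SDK calls, looks like:

```python
import os

def resolve_credentials(env=None, shared_file_creds=None, metadata_creds=None):
    """Minimal sketch of the AWS default credential chain: try each
    provider in order and return the first that yields credentials.
    All inputs are stand-ins for illustration, not real SDK calls."""
    env = env if env is not None else os.environ
    # 1. Environment variables (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY)
    if env.get("AWS_ACCESS_KEY_ID") and env.get("AWS_SECRET_ACCESS_KEY"):
        return {"source": "env", "key": env["AWS_ACCESS_KEY_ID"]}
    # 2. Shared credentials file (~/.aws/credentials)
    if shared_file_creds:
        return {"source": "shared-file", **shared_file_creds}
    # 3. Instance metadata endpoint (intercepted by kube2iam on the node)
    if metadata_creds:
        return {"source": "metadata", **metadata_creds}
    # Nothing matched -> the SDK reports NoCredentialProviders
    return None
```

If kube2iam is not intercepting the metadata endpoint (step 3), the pod falls through to the `None` case, which matches the error in the log above.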
Description of one of my external-dns pods:

Name:           external-dns-6988bd49b6-d9c44
Namespace:      kube-system
Priority:       0
Node:           ip-xxxxxxxxxx.ec2.internal/172.3.31.155
Start Time:     Sat, 12 Oct 2019 12:52:39 -0400
Labels:         app.kubernetes.io/instance=external-dns
                app.kubernetes.io/managed-by=Tiller
                app.kubernetes.io/name=external-dns
                helm.sh/chart=external-dns-2.9.0
                pod-template-hash=6988bd49b6
Annotations:    iam.amazonaws.com/role: arn:aws:sts::xxxxxxxxxx:assumed-role/k8s-alie-route53
                kubernetes.io/psp: eks.privileged
Status:         Running
IP:             xxxxxxxxxxx
Controlled By:  ReplicaSet/external-dns-6988bd49b6
Containers:
  external-dns:
    Container ID:  docker://6ffe84c8bac8382992aa2bdedb86cfe8ddfac57797e3491c627c57ca9360e387
    Image:         docker.io/bitnami/external-dns:0.5.17-debian-9-r0
    Image ID:      docker-pullable://bitnami/external-dns@sha256:1c683707537311e1a04d6236d522d55292c1b55bcf23a0721b9a2e21544454a3
    Port:          7979/TCP
    Host Port:     0/TCP
    Args:
      --log-level=info
      --domain-filter=xxxxxxxxx
      --policy=sync
      --provider=aws
      --registry=txt
      --interval=1m
      --txt-owner-id=xxxxxxxxxxx
      --source=ingress
      --aws-zone-type=public
      --aws-batch-change-size=1000
    State:          Running
      Started:      Sat, 12 Oct 2019 12:52:40 -0400
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:http/healthz delay=10s timeout=5s period=10s #success=1 #failure=2
    Readiness:      http-get http://:http/healthz delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      AWS_DEFAULT_REGION:  us-east-1
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from external-dns-token-bng4w (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  external-dns-token-bng4w:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  external-dns-token-bng4w
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                                   Message
  ----    ------     ----  ----                                   -------
  Normal  Scheduled  77s   default-scheduler                      Successfully assigned kube-system/external-dns-6988bd49b6-d9c44 to ip-xxxxxxxxxx.ec2.internal
  Normal  Pulled     76s   kubelet, ip-xxxxxxxx.ec2.internal  Container image "docker.io/bitnami/external-dns:0.5.17-debian-9-r0" already present on machine
  Normal  Created    76s   kubelet, ip-xxxxxxxxxxxx.ec2.internal  Created container external-dns
  Normal  Started    76s   kubelet, ip-xxxxxxxxx.ec2.internal  Started container external-dns

Here are my policies:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "${var.work_iam_role_arn}"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "route53:GetChange",
      "Resource": "arn:aws:route53:::change/*"
    },
    {
      "Effect": "Allow",
      "Action": "route53:ChangeResourceRecordSets",
      "Resource": "arn:aws:route53:::hostedzone/*"
    },
    {
      "Effect": "Allow",
      "Action": "route53:ListHostedZonesByName",
      "Resource": "*"
    }
  ]
}
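As a quick sanity check (my own sketch, not something from this thread), you can verify that a policy document grants the Route 53 actions external-dns needs; the required-action set below is taken from the policy above and may not be exhaustive:

```python
import json

# Actions appearing in the permissions policy above; assumed to be the
# minimum external-dns needs for this setup.
REQUIRED_ACTIONS = {
    "route53:GetChange",
    "route53:ChangeResourceRecordSets",
    "route53:ListHostedZonesByName",
}

def missing_actions(policy_json: str) -> set:
    """Return required Route 53 actions that no Allow statement covers.
    Handles "Action" given as a string or a list; ignores Resource scoping."""
    policy = json.loads(policy_json)
    allowed = set()
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        allowed.update(actions)
    return REQUIRED_ACTIONS - allowed
```

An empty return value means every required action is allowed by at least one statement; anything left over points at a policy gap rather than a kube2iam problem.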
A potentially related issue is #1188.

According to that issue, this may have been fixed on the master branch a couple of weeks ago, and it seems that fix was included in version v0.5.17.

Is that actually the case, and if not, what am I missing?

@njuettner
Member

Can you check what you see in the kube2iam logs? People using EKS 1.14 seem to have no issues after switching to the latest image (v0.5.17).

Maybe this tutorial helps you: https://medium.com/@marcincuber/amazon-eks-iam-roles-and-kube2iam-4ae5906318be. It was written by @marcincuber, who was kind enough to send us the PR updating the aws-sdk-go lib.

@marcincuber

@bsakweson please see my new story, which provides templates for both kube2iam and an OIDC provider: https://medium.com/@marcincuber/amazon-eks-setup-external-dns-with-oidc-provider-and-kube2iam-f2487c77b2a1?sk=035c3dbf6cfeae3d830a79f31dabcb1a

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 22, 2020
@Rowern

Rowern commented Jan 29, 2020

If anyone stumbles on an issue where your pod is not getting the secret volume on an EKS managed cluster, try deleting your helm release (or the resources) and re-creating them.
I don't know why, but serviceaccount annotations are not automatically updated by helm unless you re-create the serviceaccount...
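For reference, on an EKS cluster the role binding typically flows through an annotation on the serviceaccount, so this is the object worth checking after re-creating the release. A sketch of what the serviceaccount might look like (the IRSA-style annotation and the role ARN below are my assumptions for illustration, not something confirmed in this thread):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  namespace: kube-system
  annotations:
    # Placeholder ARN -- substitute the IAM role created for external-dns
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/external-dns
```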

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 28, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
