
local-exec provisioner doesn't run with the configured AWS provider account #21983

nergdron opened this issue Jul 4, 2019 · 4 comments

@nergdron

nergdron commented Jul 4, 2019

Summary: we have multiple AWS accounts, one per deployment environment, and we configure the Terraform AWS provider to use the right account for each environment. However, when a local-exec provisioner runs after a resource is created, the script it invokes does not run with the AWS account configured in the provider; it runs with the local machine's default AWS credentials instead.

Terraform Version

Terraform v0.12.3

Terraform Configuration Files

Terraform resource with local-exec provisioner:

resource "aws_eks_cluster" "eks" {
  name                      = var.service_name
  enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
  provisioner "local-exec" {
    command = "${path.module}/setup.sh ${var.service_name}"
  }
  role_arn = aws_iam_role.eks.arn
  vpc_config {
    endpoint_private_access = true
    endpoint_public_access  = false
    security_group_ids      = [aws_security_group.eks.id]
    subnet_ids              = var.subnet_ids
  }
}

Script invoked by the local-exec provisioner (setup.sh):

#!/bin/bash

# List clusters visible to whatever credentials this script runs with
aws eks list-clusters
# Wait for the new cluster to become active, then write its kubeconfig locally
aws eks wait cluster-active --name "$1"
aws eks update-kubeconfig --name "$1"

Expected Behavior

The script runs correctly and configures the local environment with the new EKS cluster's details for further configuration, which is necessary as long as there is no dynamic provider configuration (see #4149).

Actual Behavior

Script fails because it's run with the default AWS config and not the credentials configured in the provider:

available clusters...
{
    "clusters": []
}

Waiter ClusterActive failed: No cluster found for name: eks-ci.

Steps to Reproduce

  1. Configure an AWS provider that uses an account other than the default (see the sketch after this list).
  2. Configure a resource with a local-exec provisioner that runs aws CLI commands.
  3. Run terraform init && terraform apply and watch the provisioner fail.
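
For illustration, a provider configuration along the lines of step 1 might look like the following (untested sketch; the region, account ID, and role name are placeholders):

provider "aws" {
  region = "us-east-1"

  # Terraform itself assumes this role for all provider operations,
  # but scripts launched by local-exec do not inherit these credentials.
  assume_role {
    role_arn = "arn:aws:iam::123456789012:role/deploy-env"
  }
}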

References

@mildwonkey
Contributor

Hi @nergdron !
Thanks for reporting this unexpected behavior. local-exec has no knowledge of anything outside of its own block, so it doesn't have access to your AWS credentials; it can only use the default settings on your local machine.
I'll tag this issue so we can update the documentation to clarify the current capabilities, and also consider this as an enhancement.

For the time being, since you can include environment variables in provisioner blocks, you might be able to pass your AWS credentials in as Terraform variables and expose them to the script that way - though I realize this might not work in your environment. Please note that the following is untested pseudo-code, but I think it's a good start:

# SET THIS VIA AN ENV VAR - DON'T HARD-CODE IT
variable "aws_access_key_id" {}

resource "aws_thing" "example" {
  provisioner "local-exec" {
    command = "./setup.sh"
    environment = {
      AWS_ACCESS_KEY_ID = var.aws_access_key_id
    }
  }
}
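
(One way to supply that variable without hard-coding it: Terraform reads input variables from TF_VAR_-prefixed environment variables, e.g. export TF_VAR_aws_access_key_id=..., and you would want a matching secret access key variable as well.)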

@nergdron
Author

nergdron commented Jul 5, 2019

Yeah, we can't use environment variables for this because all our accounts have 2FA. Using plain env vars in Terraform fails because the AWS 2FA prompt can't run, and if we force the prompt in our script, it would break our automated CI workflows. So we really do need the provisioner to use the provider as configured, assuming the role in the target account.

Can you see any other approach we could take to use Terraform to configure EKS and then complete the cluster configuration with a provisioner, while keeping the role specified in the AWS provider? Or any thoughts on when dynamic provider configuration will be done, so we can fix all the issues we're currently hacking around because of that deficiency?

@hashibot hashibot removed the question label Aug 13, 2019
@poflynn

poflynn commented Oct 25, 2019

See also hashicorp/terraform-provider-aws#8242

@sblask

sblask commented Nov 6, 2020

I just ran into this and thought I'd leave my solution. We use aws-vault exec to create a session without assuming a role (so we can use MFA) and let Terraform do all the assuming. The session only grants permission to IAM and STS, so the local-exec call to aws sns subscribe would fail. I made it work like this:

  provisioner "local-exec" {
    command = <<EOF
SESSION=$(aws sts assume-role --role-arn MY_ROLE_ARN --role-session-name terraform-local-exec)
env -u AWS_SECURITY_TOKEN bash -c "export AWS_ACCESS_KEY_ID=$(echo $SESSION | jq -r .Credentials.AccessKeyId); export AWS_SECRET_ACCESS_KEY=$(echo $SESSION | jq -r .Credentials.SecretAccessKey); export AWS_SESSION_TOKEN=$(echo $SESSION | jq -r .Credentials.SessionToken); aws sns subscribe --region ${local.region} --topic-arn ${self.arn} --protocol email --notification-endpoint MY_EMAIL"
EOF
  }

Unsetting AWS_SECURITY_TOKEN is important, as I got an error without it. It's really quite hacky and a proper fix would be better, but it works.
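
For readability, the same assume-role trick can be sketched without jq, using the AWS CLI's --query/--output text options and forcing bash as the interpreter (untested; the role ARN, email address, and the sns subscribe command are placeholders carried over from the example above):

  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    command     = <<EOF
# Assume the target role explicitly, then export the temporary credentials for the CLI call.
CREDS=$(aws sts assume-role --role-arn MY_ROLE_ARN --role-session-name terraform-local-exec \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output text)
read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<< "$CREDS"
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
unset AWS_SECURITY_TOKEN
aws sns subscribe --region ${local.region} --topic-arn ${self.arn} --protocol email --notification-endpoint MY_EMAIL
EOF
  }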
