
BucketRegionError: incorrect region #101

Closed
thomas-beznik opened this issue Sep 10, 2020 · 2 comments
Labels
bug 🐛 An issue with the system

Comments


thomas-beznik commented Sep 10, 2020

Describe the Bug

I am getting the following error when using this module:

Error: Failed getting S3 bucket: BucketRegionError: incorrect region, the bucket is not in 'eu-central-1' region at endpoint ''
        status code: 301, request id: , host id:  Bucket: "aws-cli"

This error no longer occurs if I delete my .terraform folder and the terraform.tfstate file.

Steps to Reproduce

This is the configuration that I'm using:

provider "aws" {
  region = var.region
}

/*
* CDN 
*/
module "cloudfront_s3_cdn" {
  acm_certificate_arn           = var.certificate_arn
  aliases                       = ["${var.name}.${var.environment}.toothflow.com"]
  source                        = "cloudposse/cloudfront-s3-cdn/aws"
  version                       = "0.34.0"
  stage                         = var.environment
  name                          = "${var.name}"
  override_origin_bucket_policy = true
  default_root_object           = "index.html"
  compress                      = true
  price_class                   = "PriceClass_100"
  use_regional_s3_endpoint      = true 
  origin_force_destroy          = true 
  cors_allowed_headers          = ["*"]
  cors_allowed_methods          = ["GET", "HEAD"]
  cors_allowed_origins          = ["${var.name}.${var.environment}.toothflow.com"]
  cors_expose_headers           = ["ETag"]
  viewer_protocol_policy        = "redirect-to-https"
}

Additional

/*
* S3 Bucket 
*/
resource "aws_s3_bucket_object" "bucket-s3" {
  bucket       = module.cloudfront_s3_cdn.s3_bucket
  key          = "${var.name}-${var.environment}"
  content_type = "text/html"
  acl          = "public-read"
}

/*
* Route53 Record 
*/
resource "aws_route53_record" "route53-record" {
  zone_id         = var.S3_zone_id
  name            = "${var.domain}.${var.environment}.toothflow.com"
  type            = "CNAME"
  ttl             = "300"
  records         = [module.cloudfront_s3_cdn.cf_domain_name]
  allow_overwrite = true
}

/*
* IAM Programmatic Access to S3 Bucket 
*/

// IAM User
resource "aws_iam_user" "S3_access_user" {
  name = "S3-${var.name}-${var.environment}"
  path = "/"
  force_destroy = true
}

// IAM Policy
resource "aws_iam_user_policy" "S3_access_policy" {
  name   = "S3-access-policy-${var.name}-${var.environment}"
  user   = aws_iam_user.S3_access_user.name
  policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "${module.cloudfront_s3_cdn.s3_bucket_arn}"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:DeleteObject",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload"
            ],
            "Resource": [
                "${module.cloudfront_s3_cdn.s3_bucket_arn}/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "cloudfront:CreateInvalidation",
                "cloudfront:GetInvalidation",
                "cloudfront:ListInvalidations"
            ],
            "Resource": "*"
        }
    ]
}
EOF
}

// IAM Access Key
resource "aws_iam_access_key" "S3_access_key" {
  user = aws_iam_user.S3_access_user.name
}

Environment:

  • OS: OSX
  • Version: 10.15.6
thomas-beznik added the bug 🐛 (An issue with the system) label on Sep 10, 2020

nitrocode commented Mar 27, 2021

This is an interesting rabbit hole that I haven't completely figured out.

What is failing

It looks like it's failing to find the bucket "aws-cli" (the default of var.static_s3_bucket) in the provider region eu-central-1.

data "aws_s3_bucket" "selected" {
  bucket = local.bucket == "" ? var.static_s3_bucket : local.bucket
}

That's odd: it's trying to use var.static_s3_bucket instead of local.bucket, even though local.bucket should be filled by aws_s3_bucket.origin.*.id here

bucket = join("",
  compact(
    concat([var.origin_bucket], concat([""], aws_s3_bucket.origin.*.id))
  )
)

which should be fed by the module.origin_label.id

resource "aws_s3_bucket" "origin" {
  #bridgecrew:skip=BC_AWS_S3_13:Skipping `Enable S3 Bucket Logging` check until bridgecrew will support dynamic blocks (https://github.com/bridgecrewio/checkov/issues/776).
  #bridgecrew:skip=BC_AWS_S3_14:Skipping `Ensure all data stored in the S3 bucket is securely encrypted at rest` check until bridgecrew will support dynamic blocks (https://github.com/bridgecrewio/checkov/issues/776).
  #bridgecrew:skip=CKV_AWS_52:Skipping `Ensure S3 bucket has MFA delete enabled` due to issue in terraform (https://github.com/hashicorp/terraform-provider-aws/issues/629).
  count  = local.using_existing_origin ? 0 : 1
  bucket = module.origin_label.id

which is defined by the context

module "origin_label" {
  source  = "cloudposse/label/null"
  version = "0.24.1"
  context = module.this.context

which is defined here

module "this" {
  source      = "cloudposse/label/null"
  version     = "0.24.1" # requires Terraform >= 0.13.0
  enabled     = var.enabled
  namespace   = var.namespace
  environment = var.environment
  stage       = var.stage
  name        = var.name

I don't understand why local.bucket is empty...

Workaround

You could work around this issue by setting var.static_s3_bucket to another existing S3 bucket, but this is far from ideal. Perhaps, for now, you can set it to the same S3 bucket as the one the module creates?
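As a sketch of that workaround (the bucket name below is hypothetical; it must be an existing bucket in the provider region):

```hcl
module "cloudfront_s3_cdn" {
  source  = "cloudposse/cloudfront-s3-cdn/aws"
  version = "0.34.0"

  # Hypothetical workaround: point the data-source fallback at any existing
  # bucket in the provider region (eu-central-1 here) instead of the default
  # "aws-cli" bucket owned by Amazon.
  static_s3_bucket = "some-existing-bucket-in-eu-central-1"

  # ... remaining arguments unchanged from the original configuration ...
}
```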

Why does var.static_s3_bucket exist?

Here is the current var.static_s3_bucket and how it's used; I don't understand what the edge case is.

variable "static_s3_bucket" {
  type        = string
  default     = "aws-cli"
  description = <<DOC
aws-cli is a bucket owned by amazon that will permanently exist.
It allows for the data source to be called during the destruction process without failing.
It doesn't get used for anything else; this is a safe workaround for handling the fact that
if a data source like `aws_s3_bucket.selected` gets an error, you can't continue the terraform process,
which also includes the 'destroy' command, where it doesn't even need this data source!
Don't change this bucket name; it's a variable so that we can provide this description.
And this works around a problem that is an edge case.
DOC
}

data "aws_s3_bucket" "selected" {
  bucket = local.bucket == "" ? var.static_s3_bucket : local.bucket
}

and this is pretty confusing

bucket_domain_name = (var.use_regional_s3_endpoint || var.website_enabled) ? format(
  var.website_enabled ? "%s.s3-website%s%s.amazonaws.com" : "%s.s3%s%s.amazonaws.com",
  local.bucket,
  (var.website_enabled && contains(local.regions_s3_website_use_dash, data.aws_s3_bucket.selected.region)) ? "-" : ".",
  data.aws_s3_bucket.selected.region,
) : format(var.bucket_domain_format, local.bucket)

In human words (this should probably be commented in the code to some degree):

  • if var.use_regional_s3_endpoint OR var.website_enabled
    • if website is enabled
      • use the format {bucket}.s3-website{delimiter}{region}.amazonaws.com
    • otherwise use {bucket}.s3{delimiter}{region}.amazonaws.com
    • first string is local.bucket
    • second string is a conditional delimiter
      • if website is enabled and the data.aws_s3_bucket.selected.region is either us-east-1, us-west-1, us-west-2, ap-southeast-1, ap-southeast-2, ap-northeast-1, sa-east-1
        • string is -
      • otherwise string is "."
    • last string is data.aws_s3_bucket.selected.region
  • otherwise use format(var.bucket_domain_format, local.bucket)
    • i.e. the format {bucket}.s3.amazonaws.com by default
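Plugging a sample bucket name into the expression above (illustrative only; "example-bucket" is a hypothetical name), the three branches produce domain names like:

```hcl
# Illustrative results of the bucket_domain_name expression above,
# assuming local.bucket = "example-bucket" (hypothetical).
#
# website_enabled = true, region = "us-east-1" (a "dash" region):
#   example-bucket.s3-website-us-east-1.amazonaws.com
#
# website_enabled = false, use_regional_s3_endpoint = true, region = "eu-central-1":
#   example-bucket.s3.eu-central-1.amazonaws.com
#
# neither flag set, with the default var.bucket_domain_format = "%s.s3.amazonaws.com":
#   example-bucket.s3.amazonaws.com
```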

The local.bucket_domain_name is used as the cf distribution's origin's domain_name

origin {
  domain_name = local.bucket_domain_name

PR #17 originally added the var.static_s3_bucket variable; it had to do with var.use_regional_s3_endpoint and was a workaround for terraform destroy not working as expected...?

Going forward

It looks like perhaps we can set the origin domain_name to the aws_s3_bucket.origin.bucket_regional_domain_name

  origin {
    domain_name = var.origin_bucket != "" ? data.aws_s3_bucket.selected[0].bucket_regional_domain_name : aws_s3_bucket.origin[0].bucket_regional_domain_name

And then we can change the s3 bucket data source to be the origin bucket

data "aws_s3_bucket" "selected" {
  count  = var.origin_bucket != "" ? 1 : 0
  bucket = var.origin_bucket
}

which may allow us to remove a lot of old code.

@nitrocode

This issue should be resolved with the latest version. If it is not, please reopen this ticket.
