Added optional regional s3 endpoint. Added Example (#17)
* Added the ability to specify a regional endpoint. Updated README. Created working example.

* Added output in example that provided instructions for beginners

* Added origin_force_destroy to all s3 buckets so that the module can be destroyed if required.

* Fixed another destruction issue.

* Updated for consistency

* Changed the 'static' bucket

* added variable for static_s3_bucket
Jamie-BitFlight authored and osterman committed Jun 26, 2018
1 parent 1537e47 commit 3e299e3
Showing 9 changed files with 135 additions and 23 deletions.
10 changes: 4 additions & 6 deletions .gitignore
@@ -1,10 +1,8 @@
# Compiled files
*.tfstate
*.tfstate.backup
*.terraform.tfstate*
**/*.tfstate
**/*.tfstate.backup
**/*.terraform.tfstate*
# Module directory
.terraform/

.idea
*.iml

**/.terraform
10 changes: 9 additions & 1 deletion README.md
@@ -16,6 +16,7 @@ module "cdn" {
}
```

A full working example can be found in the [example](./example) folder.

### Generating ACM Certificate

@@ -38,7 +39,9 @@ https://docs.aws.amazon.com/acm/latest/userguide/acm-regions.html
This is a fundamental requirement of CloudFront, and you will need to request the certificate in `us-east-1` region.


If warnings about the outputs appear when destroying resources created with this module,
you can suppress these superfluous errors by running:
`TF_WARN_OUTPUT_ERRORS=1 terraform destroy`

## Variables

@@ -85,6 +88,7 @@ This is a fundamental requirement of CloudFront, and you will need to request th
| `origin_path` | `` | Element that causes CloudFront to request your content from a directory in your Amazon S3 bucket. Begins with `/`. CAUTION! Do not use bare `/` as `origin_path`. | No |
| `parent_zone_id` | `` | ID of the hosted zone to contain this record (or specify `parent_zone_name`) | Yes |
| `parent_zone_name` | `` | Name of the hosted zone to contain this record (or specify `parent_zone_id`) | Yes |
| `use_regional_s3_endpoint` | `"false"` | Use a regional endpoint for the bucket instead of the global endpoint. Useful for speeding up deployment, which can otherwise be delayed by the propagation latency of the global S3 endpoint | No |


## Outputs
@@ -108,3 +112,7 @@ If the bucket is created in a region other than `us-east-1`, it will take a whil
> All buckets have at least two REST endpoint hostnames. In eu-west-1, they are example-bucket.s3-eu-west-1.amazonaws.com and example-bucket.s3.amazonaws.com. The first one will be immediately valid when the bucket is created. The second one -- sometimes referred to as the "global endpoint" -- which is the one CloudFront uses -- will not, unless the bucket is in us-east-1. Over a period of seconds to minutes, variable by location and other factors, it becomes globally accessible as well. Before that, the 307 redirect is returned. Hence, the bucket was not ready.
Via: https://stackoverflow.com/questions/38706424/aws-cloudfront-returns-http-307-when-origin-is-s3-bucket

## Workaround for Known Issues

To use the regional endpoint name instead of the global bucket name in this module, set `use_regional_s3_endpoint = "true"` in the module.
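For instance, a minimal invocation might look like the following (a sketch adapted from the working example added in this commit; the module `source`, the zone reference, and the names are placeholders to adjust for your own setup):

```hcl
module "cdn" {
  # Point `source` at wherever this module lives in your configuration
  source                   = "git::https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn.git?ref=master"
  namespace                = "cp"
  stage                    = "dev"
  name                     = "app-cdn"
  aliases                  = ["assets.cloudposse.com"]
  parent_zone_id           = "${aws_route53_zone.primary.zone_id}"
  use_regional_s3_endpoint = "true"
  origin_force_destroy     = "true"
}
```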
10 changes: 10 additions & 0 deletions example/index.html
@@ -0,0 +1,10 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>Your CDN is working</title>
</head>
<body>
<H1>Your CDN is working!</H1>
</body>
</html>
23 changes: 23 additions & 0 deletions example/main.tf
@@ -0,0 +1,23 @@
resource "aws_route53_zone" "primary" {
name = "cloudposse.com"
force_destroy = "true"
}

module "cdn" {
source = "../"
namespace = "cp"
stage = "dev"
name = "app-cdn"
aliases = ["assets.cloudposse.com"]
parent_zone_id = "${aws_route53_zone.primary.zone_id}"
use_regional_s3_endpoint = "true"
origin_force_destroy = "true"
}

resource "aws_s3_bucket_object" "index" {
bucket = "${module.cdn.s3_bucket}"
key = "index.html"
source = "${path.module}/index.html"
content_type = "text/html"
etag = "${md5(file("${path.module}/index.html"))}"
}
39 changes: 39 additions & 0 deletions example/outputs.tf
@@ -0,0 +1,39 @@
output "cf_id" {
value = "${module.cdn.cf_id}"
}

output "cf_arn" {
value = "${module.cdn.cf_arn}"
}

output "cf_status" {
value = "${module.cdn.cf_status}"
}

output "cf_domain_name" {
value = "${module.cdn.cf_domain_name}"
}

output "cf_etag" {
value = "${module.cdn.cf_etag}"
}

output "cf_hosted_zone_id" {
value = "${module.cdn.cf_hosted_zone_id}"
}

output "s3_bucket" {
value = "${module.cdn.s3_bucket}"
}

output "s3_bucket_domain_name" {
value = "${module.cdn.s3_bucket_domain_name}"
}

output "details" {
value = <<DOC
When `cf_status` changes from `InProgress` to `Deployed`
(you can check it by running an extra `terraform apply` every few minutes),
the site can be viewed at https://${module.cdn.cf_domain_name}
DOC
}
10 changes: 10 additions & 0 deletions example/provider.tf
@@ -0,0 +1,10 @@
provider "aws" {
region = "eu-west-2"

# Speed up the provider by skipping validation checks that are unnecessary here
skip_get_ec2_platforms = true
skip_metadata_api_check = true
skip_region_validation = true
skip_credentials_validation = true
skip_requesting_account_id = true
}
32 changes: 18 additions & 14 deletions main.tf
@@ -1,5 +1,5 @@
module "origin_label" {
source = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.3.3"
source = "git::https://github.com/cloudposse/terraform-terraform-label.git?ref=tags/0.1.2"
namespace = "${var.namespace}"
stage = "${var.stage}"
name = "${var.name}"
@@ -39,21 +39,24 @@ data "template_file" "default" {

vars {
origin_path = "${coalesce(var.origin_path, "/")}"
bucket_name = "${null_resource.default.triggers.bucket}"
bucket_name = "${local.bucket}"
}
}

resource "aws_s3_bucket_policy" "default" {
bucket = "${null_resource.default.triggers.bucket}"
bucket = "${local.bucket}"
policy = "${data.template_file.default.rendered}"
}

data "aws_region" "current" {}

resource "aws_s3_bucket" "origin" {
count = "${signum(length(var.origin_bucket)) == 1 ? 0 : 1}"
bucket = "${module.origin_label.id}"
acl = "private"
tags = "${module.origin_label.tags}"
force_destroy = "${var.origin_force_destroy}"
region = "${data.aws_region.current.name}"

cors_rule {
allowed_headers = "${var.cors_allowed_headers}"
@@ -65,7 +68,7 @@ resource "aws_s3_bucket" "origin" {
}

module "logs" {
source = "git::https://github.com/cloudposse/terraform-aws-s3-log-storage.git?ref=tags/0.1.3"
source = "git::https://github.com/cloudposse/terraform-aws-s3-log-storage.git?ref=tags/0.2.0"
namespace = "${var.namespace}"
stage = "${var.stage}"
name = "${var.name}"
@@ -76,10 +79,11 @@ module "logs" {
standard_transition_days = "${var.log_standard_transition_days}"
glacier_transition_days = "${var.log_glacier_transition_days}"
expiration_days = "${var.log_expiration_days}"
force_destroy = "${var.origin_force_destroy}"
}

module "distribution_label" {
source = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.3.3"
source = "git::https://github.com/cloudposse/terraform-terraform-label.git?ref=tags/0.1.2"
namespace = "${var.namespace}"
stage = "${var.stage}"
name = "${var.name}"
@@ -88,15 +92,15 @@ module "distribution_label" {
tags = "${var.tags}"
}

resource "null_resource" "default" {
triggers {
bucket = "${element(compact(concat(list(var.origin_bucket), aws_s3_bucket.origin.*.bucket)), 0)}"
bucket_domain_name = "${format(var.bucket_domain_format, element(compact(concat(list(var.origin_bucket), aws_s3_bucket.origin.*.bucket)), 0))}"
}
data "aws_s3_bucket" "selected" {
bucket = "${local.bucket == "" ? var.static_s3_bucket : local.bucket}"
}

lifecycle {
create_before_destroy = true
}
locals {
bucket = "${join("", compact(concat(list(var.origin_bucket), concat(list(""),aws_s3_bucket.origin.*.bucket))))}"
region_endpoint = "${data.aws_s3_bucket.selected.region == "us-east-1" ? "s3" : "s3-${data.aws_s3_bucket.selected.region}" }"
bucket_domain_format = "${var.use_regional_s3_endpoint == "true" ? "%s.${local.region_endpoint}.amazonaws.com" : var.bucket_domain_format }"
bucket_domain_name = "${format(local.bucket_domain_format, local.bucket)}"
}
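To make the nested interpolations above easier to follow, here is what the `locals` evaluate to in one concrete case (a sketch, assuming a bucket named `example-bucket` in `eu-west-1` with `use_regional_s3_endpoint = "true"`):

```hcl
# region_endpoint      = "s3-eu-west-1"                              # would be "s3" in us-east-1
# bucket_domain_format = "%s.s3-eu-west-1.amazonaws.com"             # falls back to var.bucket_domain_format when disabled
# bucket_domain_name   = "example-bucket.s3-eu-west-1.amazonaws.com" # used by CloudFront as the origin domain
```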

resource "aws_cloudfront_distribution" "default" {
@@ -116,7 +120,7 @@ resource "aws_cloudfront_distribution" "default" {
aliases = ["${var.aliases}"]

origin {
domain_name = "${null_resource.default.triggers.bucket_domain_name}"
domain_name = "${local.bucket_domain_name}"
origin_id = "${module.distribution_label.id}"
origin_path = "${var.origin_path}"

4 changes: 2 additions & 2 deletions outputs.tf
@@ -23,9 +23,9 @@ output "cf_hosted_zone_id" {
}

output "s3_bucket" {
value = "${null_resource.default.triggers.bucket}"
value = "${local.bucket}"
}

output "s3_bucket_domain_name" {
value = "${null_resource.default.triggers.bucket_domain_name}"
value = "${local.bucket_domain_name}"
}
20 changes: 20 additions & 0 deletions variables.tf
@@ -46,6 +46,12 @@ variable "aliases" {
default = []
}

variable "use_regional_s3_endpoint" {
type = "string"
description = "When set to 'true', the S3 origin bucket will use the regional endpoint address instead of the global endpoint address"
default = "false"
}

variable "origin_bucket" {
default = ""
}
@@ -190,3 +196,17 @@ variable "null" {
description = "an empty string"
default = ""
}

variable "static_s3_bucket" {
description = <<DOC
aws-cli is a bucket owned by Amazon that will permanently exist.
It allows the data source to be called during the destruction process without failing.
It doesn't get used for anything else; this is a safe workaround for handling the fact that
if a data source like `aws_s3_bucket.selected` gets an error, you can't continue the Terraform process,
which also includes the `destroy` command, where it doesn't even need this data source!
Don't change this bucket name; it's a variable so that we can provide this description.
And this works around a problem that is an edge case.
DOC

default = "aws-cli"
}
