Added optional regional s3 endpoint. Added Example #17
Conversation
…e destroyed if required.
variable "use_regional_s3_endpoint" {
  type        = "string"
  description = "When set to 'true' the s3 origin_bucket will use the regional endpoint address instead of the global endpoint address"
  default     = "false"
Is this just to not introduce a change? Since the regional endpoint is an improvement shouldn't this be true always?
Yeah. Backwards compatibility and non-breaking changes are good.
But the README can be updated to say that they should prefer it.
It is backwards compatible though right since all that should happen is the origin on CF gets modified in place?
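The in-place, backwards-compatible behavior discussed above could be sketched roughly like this. This is a hypothetical sketch, not the PR's actual code: `local.bucket`, `var.bucket_domain_format`, and the `data.aws_s3_bucket.selected` region lookup are assumptions based on snippets quoted elsewhere in this thread.

```hcl
# Hypothetical sketch: pick the S3 origin domain name based on the flag.
# When the flag keeps its default of "false", existing deployments keep the
# global endpoint, so the CloudFront origin is only modified in place when
# the flag is flipped to "true".
locals {
  regional_domain_name = "${format("%s.s3-%s.amazonaws.com", local.bucket, data.aws_s3_bucket.selected.region)}"
  global_domain_name   = "${format(var.bucket_domain_format, local.bucket)}"

  bucket_domain_name = "${var.use_regional_s3_endpoint == "true" ? local.regional_domain_name : local.global_domain_name}"
}
```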
README.md
Outdated
@@ -85,6 +86,7 @@ This is a fundamental requirement of CloudFront, and you will need to request th
| `origin_path` | `` | Element that causes CloudFront to request your content from a directory in your Amazon S3 bucket. Begins with `/`. CAUTION! Do not use bare `/` as `origin_path`. | No |
| `parent_zone_id` | `` | ID of the hosted zone to contain this record (or specify `parent_zone_name`) | Yes |
| `parent_zone_name` | `` | Name of the hosted zone to contain this record (or specify `parent_zone_id`) | Yes |
| `use_regional_s3_endpoint` | `"false"` | Use a regional endpoint for the bucket instead of the global endpoint. Useful for speeding up the deploy process. | No |
Useful to reduce data latency
README.md
Outdated
## Workaround for Known Issues

To use the regional endpoint name instead of the global bucket name in this module, then set `use_regional_s3_endpoint = "true"` in the module.
To use the regional endpoint name instead of the global bucket name in this module, set use_regional_s3_endpoint = "true"
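For reference, a minimal usage sketch of the flag described above. The module `source` ref is an assumption; `namespace`, `stage`, `name`, and `aliases` are taken from the example quoted later in this thread.

```hcl
# Hypothetical module invocation; the source ref is an assumption.
module "cdn" {
  source                   = "git::https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn.git?ref=master"
  namespace                = "cp"
  stage                    = "dev"
  name                     = "app-cdn"
  aliases                  = ["assets.cloudposse-example.com"]
  use_regional_s3_endpoint = "true"
}
```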
example/main.tf
Outdated
@@ -0,0 +1,23 @@
resource "aws_route53_zone" "primary" {
  name = "cloudposse-example.com"
example.cloudposse.com
example/main.tf
Outdated
  namespace = "cp"
  stage     = "dev"
  name      = "app-cdn"
  aliases   = ["assets.cloudposse-example.com"]
assets.example.cloudposse.com
  origin_force_destroy = "true"
}

resource "aws_s3_bucket_object" "index" {
It's a good idea to put the HTML file into the bucket as an example.
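A hedged sketch of what such an example object might look like. The `module.cdn.s3_bucket` output name and the file contents are assumptions, not taken from this PR.

```hcl
# Hypothetical example: upload a small index.html to the origin bucket so the
# CDN serves something immediately. The output name "s3_bucket" is an assumption.
resource "aws_s3_bucket_object" "index" {
  bucket       = "${module.cdn.s3_bucket}"
  key          = "index.html"
  content      = "<html><body>Hello from CloudFront</body></html>"
  content_type = "text/html"
}
```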
main.tf
Outdated
  bucket             = "${element(compact(concat(list(var.origin_bucket), aws_s3_bucket.origin.*.bucket)), 0)}"
  bucket_domain_name = "${format(var.bucket_domain_format, element(compact(concat(list(var.origin_bucket), aws_s3_bucket.origin.*.bucket)), 0))}"
}
# ec2-downloads-windows is a bucket owned by amazon that will permanently exist
Hmm... not sure I like it, for a few reasons:

- The `ec2-downloads-windows` bucket can exist now, but could be changed later by AWS. Probably not a good idea to introduce this dependency; we'll have to maintain it.
- It's related to Windows (not a big deal, we just work with Linux :)
- Data sources can introduce issues like race conditions etc.
- From the code below, it looks like all we need from `aws_s3_bucket.selected` is the region. I suggest we provide the region in `var.region`, as we do for many other modules, and remove `data "aws_region" "current" {}`. In this case, the module will not make any decisions on its own about regions and other data sources. That will be done in top-level modules, where we know the region we want to deploy to.
@osterman what do you think about this?
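A minimal sketch of that suggestion, assuming the variable is named `region`; the description wording and default are assumptions.

```hcl
# Hypothetical variable implementing the suggestion: let the caller pass the
# bucket's region instead of looking it up with a data source.
variable "region" {
  type        = "string"
  description = "AWS region of the origin S3 bucket, used to build the regional endpoint address"
  default     = ""
}
```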
I think I'll be able to chime in once I understand the primary driver for using it.
- The region has to match where the bucket is, or it doesn't work.
- During destruction, if `var.bucket` is `""` and the s3 bucket that would otherwise be created has already been destroyed, but terraform hits any other error during the destruction process, then `local.bucket` can be left being `""`. This causes a following `terraform destroy` to fail because `data.aws_s3_bucket` doesn't have a bucket to provide a region for.

While testing the module for each variable, I applied and destroyed tens of times; sometimes I got errors, which I fixed. And that's how this workaround came to be: if there is no bucket, then temporarily use a permanent AWS public bucket during the destruction step.
looks good, just a few comments
thanks guys
main.tf
Outdated
# ec2-downloads-windows is a bucket owned by amazon that will permanently exist
# It allows for the data source to be called during the destruction process without failing.
# It doesn't get used for anything else.
data "aws_s3_bucket" "selected" {
@Jamie-BitFlight can you explain the trick you're using?
main.tf
Outdated
# It allows for the data source to be called during the destruction process without failing.
# It doesn't get used for anything else.
data "aws_s3_bucket" "selected" {
  bucket = "${local.bucket == "" ? "ec2-downloads-windows" : local.bucket}"
What if we created a new variable and called it `permanent_s3_bucket`, with a `description` of what you have above? Then we can reference `permanent_s3_bucket` everywhere we need the hack. I think this would make the hack very clear.
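A sketch of that idea. The variable name comes from the comment above; the description wording, the default, and the data source wiring are assumptions.

```hcl
# Hypothetical variable codifying the hack so its purpose is self-documenting.
variable "permanent_s3_bucket" {
  type        = "string"
  description = "Name of a permanent, AWS-owned public bucket used only so the aws_s3_bucket data source can resolve during destroy when local.bucket is empty"
  default     = "ec2-downloads-windows"
}

# The data source then falls back to it instead of a hardcoded bucket name.
data "aws_s3_bucket" "selected" {
  bucket = "${local.bucket == "" ? var.permanent_s3_bucket : local.bucket}"
}
```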
Maybe use the `aws-cli` bucket? That would maybe raise fewer eyebrows. https://docs.aws.amazon.com/cli/latest/userguide/awscli-install-bundle.html
I think I'm okay with the hack, but I think we should codify it in a variable name so that it's very clear what the symbolism of `ec2-downloads-windows` is for. Also, maybe use the `aws-cli` bucket instead.