
Added optional regional s3 endpoint. Added Example #17

Merged Jun 26, 2018 (10 commits)

Conversation

@Jamie-BitFlight (Contributor) commented May 30, 2018

what

  • Add support for Regional Endpoint

variable "use_regional_s3_endpoint" {
type = "string"
description = "When set to 'true' the s3 origin_bucket will use the regional endpoint address instead of the global endpoint address"
default = "false"


Is this just to avoid introducing a change? Since the regional endpoint is an improvement, shouldn't this be true always?

@Jamie-BitFlight (Contributor, Author):

Yeah. Backwards compatibility and non-breaking changes are good.

But the README can be updated to say that users should prefer it.

@kitsunde commented May 30, 2018 via email

@Jamie-BitFlight (Contributor, Author):

It is. Let's see what @osterman and @aknysh think about it.

@osterman osterman requested a review from aknysh May 31, 2018 04:14
@osterman osterman added the enhancement (New feature or request) and feature request labels May 31, 2018
README.md Outdated
@@ -85,6 +86,7 @@ This is a fundamental requirement of CloudFront, and you will need to request th
| `origin_path` | `` | Element that causes CloudFront to request your content from a directory in your Amazon S3 bucket. Begins with `/`. CAUTION! Do not use bare `/` as `origin_path`. | No |
| `parent_zone_id` | `` | ID of the hosted zone to contain this record (or specify `parent_zone_name`) | Yes |
| `parent_zone_name` | `` | Name of the hosted zone to contain this record (or specify `parent_zone_id`) | Yes |
| `use_regional_s3_endpoint` | `"false"` | Use a regional endpoint for the bucket instead of the global endpoint. Useful for speeding up the deploy process. | No |
Member:

Useful to reduce data latency

README.md Outdated

## Workaround for Known Issues

To use the regional endpoint name instead of the global bucket name in this module, then set `use_regional_s3_endpoint = "true"` in the module.
Member:

To use the regional endpoint name instead of the global bucket name in this module, set use_regional_s3_endpoint = "true"
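For context, opting in from a calling module might look like the sketch below; the module source ref is an assumption, not taken from this PR, and the other values echo the example discussed in this review:

```hcl
module "cdn" {
  # Source path assumed for illustration; pin to a released ref in practice.
  source = "git::https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn.git?ref=master"

  namespace                = "cp"
  stage                    = "dev"
  name                     = "app-cdn"
  aliases                  = ["assets.example.cloudposse.com"]
  parent_zone_name         = "example.cloudposse.com"

  # Opt in to the regional S3 endpoint added by this PR.
  use_regional_s3_endpoint = "true"
}
```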

example/main.tf Outdated
@@ -0,0 +1,23 @@
resource "aws_route53_zone" "primary" {
name = "cloudposse-example.com"
Member:

example.cloudposse.com

example/main.tf Outdated
namespace = "cp"
stage = "dev"
name = "app-cdn"
aliases = ["assets.cloudposse-example.com"]
Member:

assets.example.cloudposse.com

origin_force_destroy = "true"
}

resource "aws_s3_bucket_object" "index" {
Member:

Putting the HTML file into the bucket as an example is a good idea.

main.tf Outdated
bucket = "${element(compact(concat(list(var.origin_bucket), aws_s3_bucket.origin.*.bucket)), 0)}"
bucket_domain_name = "${format(var.bucket_domain_format, element(compact(concat(list(var.origin_bucket), aws_s3_bucket.origin.*.bucket)), 0))}"
}
# ec2-downloads-windows is a bucket owned by Amazon that will permanently exist
Member:

Hmm, not sure I like it, for a few reasons:

  1. The ec2-downloads-windows bucket exists now, but it could be changed later by AWS.
  2. It's probably not a good idea to introduce this dependency; we'll have to maintain it.
  3. It's related to Windows (not a big deal, we just work with Linux :)
  4. Data sources can introduce issues like race conditions, etc.
  5. From the code below, it looks like all we need from aws_s3_bucket.selected is the region. I suggest we provide the region in var.region, as we do for many other modules, and remove data "aws_region" "current" {}. That way this module makes no decisions of its own about regions or other data sources; that is done in top-level modules, where we know the region we want to deploy to. (A sketch follows after this comment.)

@osterman what do you think about this?
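The sketch referenced in point 5 above, assuming the input is named region; illustrative only, not code from this PR:

```hcl
# Point 5 as code: the caller supplies the region, so the module needs
# neither data "aws_region" "current" {} nor a bucket data source lookup.
variable "region" {
  type        = "string"
  description = "AWS region of the origin bucket, used to build the regional S3 endpoint"
}

locals {
  # Regional endpoint built directly from the supplied region.
  bucket_domain_name = "${format("%s.s3-%s.amazonaws.com", local.bucket, var.region)}"
}
```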

Member:

I think I'll be able to chime in once I understand the primary driver for using it.

@Jamie-BitFlight (Contributor, Author):

  1. The region has to match where the bucket is, or it doesn't work.
  2. During destruction, if var.bucket is "" and the S3 bucket that would otherwise be created has already been destroyed, but Terraform then hits any other error during the destroy, local.bucket can be left as "". That causes the next terraform destroy to fail, because data.aws_s3_bucket has no bucket to provide a region for.
     While testing the module for each variable, I applied and destroyed tens of times; sometimes I got errors, which I fixed, and that's how this workaround came to be.
     If there is no bucket, temporarily use a permanent public AWS bucket during the destroy step.

@aknysh (Member) requested changes May 31, 2018:

Looks good, just a few comments. Thanks, guys.

main.tf Outdated
# ec2-downloads-windows is a bucket owned by Amazon that will permanently exist
# It allows for the data source to be called during the destruction process without failing.
# It doesn't get used for anything else.
data "aws_s3_bucket" "selected" {
Member:

@Jamie-BitFlight can you explain the trick you're using?

main.tf Outdated
# It allows for the data source to be called during the destruction process without failing.
# It doesn't get used for anything else.
data "aws_s3_bucket" "selected" {
bucket = "${local.bucket == "" ? "ec2-downloads-windows" : local.bucket}"
Member:

What if we created a new variable called permanent_s3_bucket, with a description of what you have above? Then we can reference permanent_s3_bucket everywhere we need the hack. I think this would make the hack very clear.

Member:

Maybe use the aws-cli bucket? That would raise fewer eyebrows. https://docs.aws.amazon.com/cli/latest/userguide/awscli-install-bundle.html
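Putting the two suggestions together, a hedged sketch of how the hack could be codified. The variable name comes from the comment above, and the aws-cli default assumes the installer bucket linked above is literally named aws-cli:

```hcl
# Illustrative only: lift the "always exists" bucket into a named variable so
# the hack documents itself; default to the aws-cli installer bucket instead
# of ec2-downloads-windows, per the review suggestion.
variable "permanent_s3_bucket" {
  type        = "string"
  description = "A long-lived public AWS bucket, used only so this data source can be evaluated during destroy when local.bucket is empty"
  default     = "aws-cli"
}

data "aws_s3_bucket" "selected" {
  bucket = "${local.bucket == "" ? var.permanent_s3_bucket : local.bucket}"
}
```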

@osterman (Member) left a comment:

I think I'm okay with the hack; I just think we should codify it in a variable name so that it's very clear what the symbolic ec2-downloads-windows bucket is for. Also, maybe use the aws-cli bucket instead.

@osterman osterman dismissed aknysh's stale review June 26, 2018 20:58: CR addressed

@osterman osterman merged commit 3e299e3 into cloudposse:master Jun 26, 2018