
AWS v4 refactor #113

Closed

Conversation

@denissimonovski commented Jul 4, 2022

what

  • Refactor the module to use the S3 bucket module
  • Use AWS provider > v4.9.x
  • Use Terraform version > v1.0

why

  • So that we can continue to build newer features into this module

references

@denissimonovski denissimonovski requested review from a team as code owners July 4, 2022 12:31
@denissimonovski denissimonovski requested review from adamcrews and milldr and removed request for a team July 4, 2022 12:31
@denissimonovski (Author)

@Nuru I saw your comment here, and I refactored the module to use the s3 module as you mentioned. Can you review the PR and help me with the Bridgecrew issue?

@denissimonovski (Author)

@nitrocode can you help me with this PR?

@nitrocode (Member)

/test all

@@ -2,53 +2,44 @@ data "aws_caller_identity" "default" {}

 data "aws_region" "default" {}

-resource "aws_s3_bucket" "cache_bucket" {
+module "cache_bucket" {
Member

Usually when we create a breaking change like this, we release a "minor" release, but we also include with it a "migration" document. We create that document by first applying the current release, then running through the terraform state moves and other API changes, then applying the new version (this version) and making sure it returns "no changes".

Those steps are documented inside of docs/ and added to the README.yaml so other people will know how to safely migrate from version 0.x to version 0.y.
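
A minimal sketch of that verification loop (commands only; the real state addresses come from comparing plans and are omitted here):

# 1. Apply the current (pre-refactor) release.
terraform apply

# 2. Bump the module to the new version and pull it.
terraform init -upgrade

# 3. Move resources to their new addresses (OLD/NEW are placeholders).
terraform state mv 'OLD_ADDRESS' 'NEW_ADDRESS'

# 4. The new version should now report no changes.
terraform plan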

Member

I ran the tests below just to make sure you were on the correct path with the current proposed changes, but I think a migration doc is the right way to go.

cc: @Nuru

Author

I added a migration doc @nitrocode
Can you please review?

@Nuru Nuru added the major Breaking changes (or first stable release) label Jul 13, 2022
bateller previously approved these changes Jul 14, 2022
Nuru previously requested changes Jul 14, 2022
@Nuru left a comment

Thank you for all your work.

I have requested several changes, hopefully self-explanatory. The main questions I have are:

  1. Is AWS provider version 4.22 really required? I think 4.9 should be good enough, and want to allow for older versions where possible.
  2. Am I correct that the S3 bucket is only used for cache and should not contain data that cannot be easily replaced? If so, let's just recommend the easy upgrade path where the S3 bucket gets destroyed and recreated.

@@ -0,0 +1,14 @@
+## Migration Notes for Dynamic Subnets v2.0

Fix title

@@ -0,0 +1,14 @@
+## Migration Notes for Dynamic Subnets v2.0
+
+###   Breaking change: S3 cache bucket

remove the extra spaces throughout

@@ -288,13 +288,13 @@ variable "logs_config" {
 }

 variable "extra_permissions" {
-  type = list
+  type = list(any)

Suggested change:
-  type = list(any)
+  type = list(string)
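
Applied in full, the suggested variable would look something like this (a sketch; the default and description shown are illustrative, not copied from the module):

variable "extra_permissions" {
  type        = list(string) # explicit element type instead of the deprecated bare `list`
  default     = []           # illustrative default
  description = "Additional IAM permission actions to attach to the CodeBuild role" # illustrative
}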


   required_providers {
     aws = {
       source  = "hashicorp/aws"
-      version = ">= 2.0"
+      version = ">= 4.22.0"

What was added after 4.9.0 that requires a later version?

Suggested change:
-      version = ">= 4.22.0"
+      version = ">= 4.9.0"

main.tf (outdated)

  count       = module.this.enabled && local.create_s3_cache_bucket ? 1 : 0
  bucket_name = local.cache_bucket_name_normalised

  acl = true

Suggested change:
-  acl = true
+  acl = "private"

main.tf (outdated)

  enabled                                = true
  id                                     = "codebuildcache"
  abort_incomplete_multipart_upload_days = 1
  prefix                                 = "/"

prefix is not a recognized element of lifecycle_configuration_rules. If needed, add it to filter_and instead; but since the prefix is / for this rule, no filter is needed.

Suggested change:
-  prefix = "/"

versions.tf (outdated)

   required_providers {
     aws = {
       source  = "hashicorp/aws"
-      version = ">= 2.0"
+      version = ">= 4.22.0"

Suggested change:
-      version = ">= 4.22.0"
+      version = ">= 4.9.0"

Comment on lines 6 to 7
* Do not upgrade if using Terraform older than 1.0.
* Do not upgrade if you are using AWS provider older than 4.22.

Why does this require AWS Terraform provider version later than 4.9?

Suggested change:
-* Do not upgrade if using Terraform older than 1.0.
-* Do not upgrade if you are using AWS provider older than 4.22.
+This module now requires Terraform version 1.0 or later and
+AWS Terraform provider version 4.9 or later.

Comment on lines +12 to +14
The upgrade is two faceted.
1. Update terraform to a version equal to or newer than version 1.0. You can follow the **[Upgrading to Terraform v1.0](https://www.terraform.io/language/upgrade-guides/1-0)** guide.
2. Update the AWS provider to the latest version. You can follow the S3 bucket modules **[Upgrading to v2.0](https://github.com/cloudposse/terraform-aws-s3-bucket/wiki/Upgrading-to-v2.0)** guide.

Suggested change:
-The upgrade is two faceted.
-1. Update terraform to a version equal to or newer than version 1.0. You can follow the **[Upgrading to Terraform v1.0](https://www.terraform.io/language/upgrade-guides/1-0)** guide.
-2. Update the AWS provider to the latest version. You can follow the S3 bucket modules **[Upgrading to v2.0](https://github.com/cloudposse/terraform-aws-s3-bucket/wiki/Upgrading-to-v2.0)** guide.
+If you are using Terraform v1.0 or later, upgrading to v2 of this module without taking any other steps will delete your existing S3 bucket and create a new one. Since the S3 bucket is only used for caching data, this may be acceptable, and is certainly the easiest path forward.
+If you want to preserve the existing S3 bucket and its data, you will need to use `terraform state mv` to move the existing resources to their new "addresses" in the Terraform state. You can find the details of how to do this in the S3 bucket module's **[Upgrading to v2.0](https://github.com/cloudposse/terraform-aws-s3-bucket/wiki/Upgrading-to-v2.0)** guide.
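
As a rough illustration of that `terraform state mv` step (the addresses below are hypothetical; read the real ones from the `terraform plan` output):

# Hypothetical addresses only; take the actual ones from `terraform plan`.
terraform state mv \
  'aws_s3_bucket.cache_bucket[0]' \
  'module.cache_bucket.aws_s3_bucket.default[0]'

terraform plan # the bucket should no longer be planned for destroy/create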

Author

Is this really necessary?
If the bucket is not empty, terraform will error out on bucket deletion, so no data will be lost. If it is empty, the bucket will be replaced, but in that state it shouldn't be a problem.

Author

@Nuru , can you take a look?

Member

@denissimonovski I think it is necessary, because both the bucket module and the resource this PR replaces have force_destroy set to true.
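
With force_destroy, Terraform empties and deletes the bucket instead of erroring on a non-empty bucket, so the "terraform will error out" safety net does not apply. A sketch of that setting as exposed by the s3-bucket module (illustrative; the exact wiring in this PR may differ):

module "cache_bucket" {
  source  = "cloudposse/s3-bucket/aws"
  # ...
  force_destroy = true # on destroy/replace, all objects are deleted without an error
}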

@Nuru commented Jul 15, 2022

/test all

@denissimonovski (Author)

@nitrocode can you please run the tests again?

@@ -1,6 +1,7 @@
 variable "region" {
   type        = string
   description = "AWS region"
+  default     = ""
Member

This isn't right.

Please see the s3 bucket example:

https://github.com/cloudposse/terraform-aws-s3-bucket/blob/caf2af9a03f947ae51348e675146f6475ca228a3/examples/complete/main.tf#L1-L9

The awsutils provider needs to be set with a region in examples/complete/main.tf, just like above.

And only the examples/complete/versions.tf file needs to be updated with this:

https://github.com/cloudposse/terraform-aws-s3-bucket/blob/caf2af9a03f947ae51348e675146f6475ca228a3/examples/complete/versions.tf#L13
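
Following that example, the provider wiring in examples/complete/main.tf would look roughly like this (a sketch based on the linked files, not the exact contents of this PR):

provider "aws" {
  region = var.region
}

# Required by the s3-bucket module at the time of this PR (it later became optional; see below).
provider "awsutils" {
  region = var.region
}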

Author

@nitrocode the var.region on the provider is already set, as you can see here, and the region is in the example's .tfvars. Also, the awsutils provider is not required in this module at the moment. If you say I should put it in the example, shouldn't I put it in the module as well, given that the example only deploys this module and nothing else?

Member

By using the latest s3 bucket module, the awsutils provider is needed.

@denissimonovski (Author)

@nitrocode can you please re-run the tests now?

@denissimonovski (Author)

@nitrocode can you please re-run the tests?

@joe-niland (Member)

/test all

@denissimonovski (Author)

@joe-niland can you please re-run the tests?

@joe-niland (Member)

/test all

@denissimonovski (Author)

@joe-niland can you please re-run the tests again 🙏 ❔

@joe-niland (Member)

/test all

@nitrocode (Member)

Looks like the error on the list() function is due to an old module in examples/vpc/main.tf:

module "subnets" {
source = "cloudposse/dynamic-subnets/aws"
version = "0.38.0"

You may need to bump the subnet and vpc modules. I would bump those to the latest and plan locally to ensure you have the correct inputs set (it will be much faster doing that locally than bumping the version and asking someone to rerun the tests).
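
For example, the pin in examples/vpc/main.tf would be bumped along these lines (the version shown is hypothetical; check the module's releases for the actual latest):

module "subnets" {
  source  = "cloudposse/dynamic-subnets/aws"
  version = "2.0.0" # hypothetical; pin the latest release, then run `terraform plan` locally
  # ...update inputs to match the new version's variables as needed...
}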

@denissimonovski (Author)

@nitrocode @joe-niland I made the changes and tested them locally this time, and the plan was successful:
Plan: 42 to add, 0 to change, 0 to destroy. Can you please re-run the tests now?

@joe-niland (Member)

/test all

@denissimonovski (Author)

@nitrocode, @joe-niland it passed all the tests now. I'll be waiting for feedback or for a merge.

@joe-niland joe-niland requested a review from nitrocode August 11, 2022 22:58
@denissimonovski (Author)

@nitrocode when can we expect a review/merge on this?

@nitrocode (Member) left a comment

Looks good to me.

cc: @joe-niland @Nuru in case there is anything I missed. I'll leave the merging up to you folks.

@denissimonovski (Author)

@joe-niland @Nuru when can we expect a review/merge on this?

@denissimonovski (Author)

@nitrocode @joe-niland @Nuru when can we expect a review/merge on this?

@joe-niland (Member) left a comment

@denissimonovski I've tried to address your last comment and also pointed out that we can remove the awsutils dependency now, as it has become optional in the s3-bucket module.

source = "hashicorp/aws"
version = ">= 4.9.0"
}
awsutils = {
Member

@denissimonovski we can remove this (and the provider and other usage) now, as of this PR. Sorry for the run-around, but we may as well fix this now.


@max-lobur (Contributor)

Superseded by AWS V5 Upgrade

@max-lobur max-lobur closed this Nov 10, 2023
@nitrocode (Member)

Related PR: #126

Thanks @denissimonovski @max-lobur and the reviewers for making it possible to migrate to the v4 and v5 provider versions.

Labels
major Breaking changes (or first stable release)

7 participants