Aws v4 refactor #113
Conversation
…m-aws-codebuild into aws-v4-refactor
@nitrocode can you help me with this PR?
/test all
```diff
@@ -2,53 +2,44 @@ data "aws_caller_identity" "default" {}

 data "aws_region" "default" {}

-resource "aws_s3_bucket" "cache_bucket" {
+module "cache_bucket" {
```
Usually when we create a breaking change like this, we release a "minor" release, but we also include with it some kind of "migration" document. We create that document by first applying the current release, then running through the Terraform state moves and other API changes, then applying the new version (this one) and making sure it returns "no changes".
Those steps are documented inside of docs/ and added to the README.yaml so other people will know how to safely migrate from version 0.x to version 0.y.
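As a rough illustration of that verification workflow (the commands below are an assumed outline, not steps taken from this repository, and the state addresses are placeholders):

```sh
# 1. Apply the current (pre-refactor) release of the module
terraform init
terraform apply

# 2. Switch the module source to this branch, re-init, and run the documented state moves
terraform init -upgrade
terraform state mv '<old resource address>' '<new resource address>'  # placeholders; see docs/migration-v1-v2.md

# 3. Plan with the new version and confirm Terraform reports "No changes."
terraform plan
```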
I ran the tests below just to make sure you were on the correct path with the currently proposed changes, but I think a migration doc is the right way to go.
cc: @Nuru
I added a migration doc, @nitrocode. Can you please review?
…ebuild into aws-v4-refactor
Thank you for all your work.
I have requested several changes, hopefully self-explanatory. The main questions I have are:
- Is AWS provider version 4.22 really required? I think 4.9 should be good enough, and want to allow for older versions where possible.
- Am I correct that the S3 bucket is only used for cache and should not contain data that cannot be easily replaced? If so, let's just recommend the easy upgrade path where the S3 bucket gets destroyed and recreated.
docs/migration-v1-v2.md (Outdated)
```diff
@@ -0,0 +1,14 @@
+## Migration Notes for Dynamic Subnets v2.0
```
Fix the title (this is the CodeBuild module, not Dynamic Subnets).
docs/migration-v1-v2.md (Outdated)
```diff
@@ -0,0 +1,14 @@
+## Migration Notes for Dynamic Subnets v2.0
+
+### Breaking change: S3 cache bucket
```
remove (throughout)
examples/bitbucket/variables.tf (Outdated)
```diff
@@ -288,13 +288,13 @@ variable "logs_config" {
 }

 variable "extra_permissions" {
-  type        = list
+  type        = list(any)
```
Suggested change:
```diff
-  type        = list(any)
+  type        = list(string)
```
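For context, the full declaration with the suggested type would read roughly as follows (a sketch; the description and default shown here are illustrative rather than copied from examples/bitbucket/variables.tf):

```hcl
variable "extra_permissions" {
  type        = list(string)
  default     = []
  description = "List of additional IAM permissions to attach to the CodeBuild role (illustrative wording)"
}
```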
examples/complete/versions.tf (Outdated)
```diff
   required_providers {
     aws = {
       source  = "hashicorp/aws"
-      version = ">= 2.0"
+      version = ">= 4.22.0"
```
What was added after 4.9.0 that requires a later version?

Suggested change:
```diff
-      version = ">= 4.22.0"
+      version = ">= 4.9.0"
```
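=">
Applied to examples/complete/versions.tf, the relaxed constraint would look roughly like this (a sketch; the required_version line is taken from the migration notes' Terraform 1.0 requirement, and the awsutils entry discussed later is omitted here):

```hcl
terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.9.0"
    }
  }
}
```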
main.tf (Outdated)
```diff
   count       = module.this.enabled && local.create_s3_cache_bucket ? 1 : 0
   bucket_name = local.cache_bucket_name_normalised

   acl = true
```
Suggested change:
```diff
-  acl = true
+  acl = "private"
```
main.tf (Outdated)
```diff
   enabled                                = true
   id                                     = "codebuildcache"
   abort_incomplete_multipart_upload_days = 1
   prefix                                 = "/"
```
`prefix` is not a recognized element of `lifecycle_configuration_rules`. If needed, add it to `filter_and` instead, but since the prefix is `/` for this rule, no filter is needed.

Suggested change:
```diff
-  prefix                                 = "/"
```
versions.tf (Outdated)
```diff
   required_providers {
     aws = {
       source  = "hashicorp/aws"
-      version = ">= 2.0"
+      version = ">= 4.22.0"
```
= 4.22">
Suggested change:
```diff
-      version = ">= 4.22.0"
+      version = ">= 4.9.0"
```
docs/migration-v1-v2.md (Outdated)
```diff
 * Do not upgrade if using Terraform older than 1.0.
 * Do not upgrade if you are using AWS provider older than 4.22.
```
Why does this require AWS Terraform provider version later than 4.9?

Suggested change:
```diff
-* Do not upgrade if using Terraform older than 1.0.
-* Do not upgrade if you are using AWS provider older than 4.22.
+This module now requires Terraform version 1.0 or later and
+AWS Terraform provider version 4.9 or later.
```
```diff
 The upgrade is two faceted.
 1. Update terraform to a version equal to or newer than version 1.0. You can follow the **[Upgrading to Terraform v1.0](https://www.terraform.io/language/upgrade-guides/1-0)** guide.
 2. Update the AWS provider to the latest version. You can follow the S3 bucket modules **[Upgrading to v2.0](https://github.com/cloudposse/terraform-aws-s3-bucket/wiki/Upgrading-to-v2.0)** guide.
```
Suggested change:
```diff
-The upgrade is two faceted.
-1. Update terraform to a version equal to or newer than version 1.0. You can follow the **[Upgrading to Terraform v1.0](https://www.terraform.io/language/upgrade-guides/1-0)** guide.
-2. Update the AWS provider to the latest version. You can follow the S3 bucket modules **[Upgrading to v2.0](https://github.com/cloudposse/terraform-aws-s3-bucket/wiki/Upgrading-to-v2.0)** guide.
+If you are using Terraform v1.0 or later, upgrading to v2 of this module without taking any other steps will delete your existing S3 bucket and create a new one. Since the S3 bucket is only used for caching data, this may be acceptable, and is certainly the easiest path forward.
+
+If you want to preserve the existing S3 bucket and its data, you will need to use `terraform state mv` to move the existing resources to their new "addresses" in the Terraform state. You can find the details of how to do this in the S3 bucket module's **[Upgrading to v2.0](https://github.com/cloudposse/terraform-aws-s3-bucket/wiki/Upgrading-to-v2.0)** guide.
```
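For illustration, the state move for the cache bucket would look roughly like the following (a sketch: the addresses assume a root-level call of this module named "codebuild" and a particular internal resource name in the s3-bucket module, so confirm the real addresses with `terraform state list` first):

```sh
# Confirm the old and new addresses of the cache bucket in state
terraform state list | grep cache_bucket

# Move the bucket from the old standalone resource to the resource inside the new cache_bucket module
terraform state mv \
  'module.codebuild.aws_s3_bucket.cache_bucket[0]' \
  'module.codebuild.module.cache_bucket.aws_s3_bucket.default[0]'

# A subsequent terraform plan should then show no destroy/create for the bucket itself
```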
Is this really necessary?
If the bucket is not empty, Terraform will error out on bucket deletion, so no data will be lost. If it is empty, the bucket will be replaced, but in that case it shouldn't be a problem.
@Nuru, can you take a look?
@denissimonovski I think it is necessary, because the bucket module and the resource this PR replaces it with had `force_destroy` set to true.
/test all
@nitrocode can you please run the tests again?
examples/complete/variables.tf (Outdated)
```diff
@@ -1,6 +1,7 @@
 variable "region" {
   type        = string
   description = "AWS region"
+  default     = ""
```
This isn't right. Please see the s3 bucket module.
The awsutils provider needs to be set with a region in examples/complete/main.tf, just like above, and only the examples/complete/versions.tf file needs to be updated with this.
@nitrocode the var.region on the provider is already set, as you can see here, and the region is in the example's .tfvars. And the awsutils provider is not required in this module at the moment. If you say I should put it in the example, shouldn't I put it in the module as well, given that the example only deploys this module and nothing else?
When using the latest s3 bucket module, the awsutils provider is needed.
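In practice that means adding a provider block alongside the existing aws one in examples/complete/main.tf, roughly like this (a sketch; the Cloud Posse awsutils provider and its `region` argument are assumed here rather than quoted from the PR):

```hcl
provider "aws" {
  region = var.region
}

# Assumed configuration for the Cloud Posse awsutils provider, mirroring the AWS provider
provider "awsutils" {
  region = var.region
}
```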
@nitrocode can you please re-run the tests now?
@nitrocode can you please re-run the tests?
/test all
…m-aws-codebuild into aws-v4-refactor
@joe-niland can you please re-run the tests?
/test all
@joe-niland can you please re-run the tests again 🙏 ❔
/test all
Looks like the error is in terraform-aws-codebuild/examples/vpc/main.tf (lines 13 to 15 at 6c27093).
You may need to bump the subnet and vpc modules. I would bump those to the latest and plan it locally to ensure you have the correct inputs set (it will be much faster doing that locally than bumping the version and requesting someone to rerun the tests).
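A bump in examples/vpc/main.tf would look roughly like the following (a sketch: the version constraints and the handful of inputs shown are placeholders and assumptions, so check each module's README and plan locally as suggested):

```hcl
module "vpc" {
  source  = "cloudposse/vpc/aws"
  version = ">= 2.0" # placeholder constraint; pin to the actual latest release

  ipv4_primary_cidr_block = "10.0.0.0/16" # illustrative CIDR

  context = module.this.context
}

module "subnets" {
  source  = "cloudposse/dynamic-subnets/aws"
  version = ">= 2.0" # placeholder constraint; pin to the actual latest release

  availability_zones = ["us-east-2a", "us-east-2b"] # illustrative AZs
  vpc_id             = module.vpc.vpc_id
  igw_id             = [module.vpc.igw_id]
  ipv4_cidr_block    = [module.vpc.vpc_cidr_block]

  context = module.this.context
}
```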
@nitrocode @joe-niland I made the changes and tested them locally this time, and the plan was successful.
/test all
@nitrocode, @joe-niland it passed all the tests now. I'll be waiting for feedback or for a merge.
@nitrocode when can we expect a review/merge on this?
Looks good to me.
cc: @joe-niland @Nuru in case there is anything I missed. I'll leave the merging up to you folks.
@joe-niland @Nuru when can we expect a review/merge on this?
@nitrocode @joe-niland @Nuru when can we expect a review/merge on this?
@denissimonovski I've tried to address your last comment and also pointed out that we can remove the awsutils dependency now, as it has become optional in the s3-bucket module.
source = "hashicorp/aws" | ||
version = ">= 4.9.0" | ||
} | ||
awsutils = { |
@denissimonovski we can remove this (and the provider and other usage) now, as of this PR. Sorry for the run-around, but we may as well fix this now.
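After dropping awsutils, the required_providers block would shrink to something like this (a sketch reflecting the request above; only the aws entry remains):

```hcl
terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.9.0"
    }
    # awsutils removed: it is now optional in the s3-bucket module
  }
}
```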
```diff
 The upgrade is two faceted.
 1. Update terraform to a version equal to or newer than version 1.0. You can follow the **[Upgrading to Terraform v1.0](https://www.terraform.io/language/upgrade-guides/1-0)** guide.
 2. Update the AWS provider to the latest version. You can follow the S3 bucket modules **[Upgrading to v2.0](https://github.com/cloudposse/terraform-aws-s3-bucket/wiki/Upgrading-to-v2.0)** guide.
```
@denissimonovski I think it is necessary, because the bucket module and the resource this PR replaces it with had `force_destroy` set to true.
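For reference, the pre-refactor resource already set `force_destroy`, roughly like this (a sketch trimmed to the relevant argument; the bucket name expression is assumed from the surrounding diff):

```hcl
# Old (pre-refactor) definition, trimmed: with force_destroy = true, Terraform will
# delete the bucket together with its cached objects during the upgrade.
resource "aws_s3_bucket" "cache_bucket" {
  count         = module.this.enabled && local.create_s3_cache_bucket ? 1 : 0
  bucket        = local.cache_bucket_name_normalised
  force_destroy = true
}
```

The replacement `module "cache_bucket"` call sets `force_destroy = true` as well (see the earlier sketch), so neither the old nor the new definition protects the cached data from deletion.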
Superseded by the AWS V5 Upgrade.
Related PR #126. Thanks @denissimonovski @max-lobur and the reviewers for making it possible to migrate to the v4 and v5 provider versions.
what
why
references
closes #104