Merge pull request #23232 from hashicorp/b-s3-bucket-lifecycle-configuration-empty-fitler

r/s3_bucket_lifecycle_configuration: update value set in state for an empty `filter` argument
anGie44 authored Feb 17, 2022
2 parents b6c1e18 + 481687f commit 5e0ac8d
Showing 5 changed files with 266 additions and 11 deletions.
3 changes: 3 additions & 0 deletions .changelog/23232.txt
@@ -0,0 +1,3 @@
```release-note:bug
resource/aws_s3_bucket_lifecycle_configuration: Prevent non-empty plans when `filter` is an empty configuration block
```
61 changes: 61 additions & 0 deletions internal/service/s3/bucket_lifecycle_configuration_test.go
@@ -603,6 +603,32 @@ func TestAccS3BucketLifecycleConfiguration_TransitionUpdateBetweenDaysAndDate_in
})
}

// Reference: https://github.com/hashicorp/terraform-provider-aws/issues/23228
func TestAccS3BucketLifecycleConfiguration_EmptyFilter_NonCurrentVersions(t *testing.T) {
	rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
	resourceName := "aws_s3_bucket_lifecycle_configuration.test"

	resource.ParallelTest(t, resource.TestCase{
		PreCheck:     func() { acctest.PreCheck(t) },
		ErrorCheck:   acctest.ErrorCheck(t, s3.EndpointsID),
		Providers:    acctest.Providers,
		CheckDestroy: testAccCheckBucketLifecycleConfigurationDestroy,
		Steps: []resource.TestStep{
			{
				Config: testAccBucketLifecycleConfiguration_EmptyFilter_NonCurrentVersionsConfig(rName),
				Check: resource.ComposeTestCheckFunc(
					testAccCheckBucketLifecycleConfigurationExists(resourceName),
				),
			},
			{
				ResourceName:      resourceName,
				ImportState:       true,
				ImportStateVerify: true,
			},
		},
	})
}

func testAccCheckBucketLifecycleConfigurationDestroy(s *terraform.State) error {
	conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn

@@ -1111,3 +1137,38 @@ resource "aws_s3_bucket_lifecycle_configuration" "test" {
}
`, rName, transitionDate, storageClass)
}

func testAccBucketLifecycleConfiguration_EmptyFilter_NonCurrentVersionsConfig(rName string) string {
	return fmt.Sprintf(`
resource "aws_s3_bucket" "test" {
  bucket = %[1]q
}

resource "aws_s3_bucket_acl" "test" {
  bucket = aws_s3_bucket.test.id
  acl    = "private"
}

resource "aws_s3_bucket_lifecycle_configuration" "test" {
  bucket = aws_s3_bucket.test.bucket

  rule {
    id = %[1]q

    filter {}

    noncurrent_version_expiration {
      newer_noncurrent_versions = 2
      noncurrent_days           = 30
    }

    noncurrent_version_transition {
      noncurrent_days = 30
      storage_class   = "STANDARD_IA"
    }

    status = "Enabled"
  }
}
`, rName)
}
10 changes: 1 addition & 9 deletions internal/service/s3/flex.go
@@ -886,14 +886,6 @@ func FlattenLifecycleRuleFilter(filter *s3.LifecycleRuleFilter) []interface{} {
 		return nil
 	}
 
-	if filter.And == nil &&
-		filter.ObjectSizeGreaterThan == nil &&
-		filter.ObjectSizeLessThan == nil &&
-		(filter.Prefix == nil || aws.StringValue(filter.Prefix) == "") &&
-		filter.Tag == nil {
-		return nil
-	}
-
 	m := make(map[string]interface{})
 
 	if filter.And != nil {
@@ -908,7 +900,7 @@ func FlattenLifecycleRuleFilter(filter *s3.LifecycleRuleFilter) []interface{} {
 		m["object_size_less_than"] = int(aws.Int64Value(filter.ObjectSizeLessThan))
 	}
 
-	if filter.Prefix != nil && aws.StringValue(filter.Prefix) != "" {
+	if filter.Prefix != nil {
		m["prefix"] = aws.StringValue(filter.Prefix)
 	}

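This change removes an early return from `FlattenLifecycleRuleFilter`: previously an effectively empty `Filter` flattened to `nil`, so no `filter` reached state and Terraform kept proposing the configured `filter {}` block on every plan. A standalone sketch of the before/after behavior (simplified stand-in types, not the provider's actual SDK structs):

```go
package main

import "fmt"

// LifecycleRuleFilter is a simplified stand-in for the AWS SDK type;
// the real struct also carries And, Tag, and object-size fields.
type LifecycleRuleFilter struct {
	Prefix *string
}

// flattenOld mimics the removed behavior: an effectively empty filter
// flattened to nil, so it vanished from state and caused a perpetual diff
// against a configured `filter {}` block.
func flattenOld(f *LifecycleRuleFilter) []interface{} {
	if f == nil {
		return nil
	}
	if f.Prefix == nil || *f.Prefix == "" {
		return nil
	}
	return []interface{}{map[string]interface{}{"prefix": *f.Prefix}}
}

// flattenNew mimics the patched behavior: an empty filter flattens to an
// empty map, which matches an empty `filter {}` configuration block.
func flattenNew(f *LifecycleRuleFilter) []interface{} {
	if f == nil {
		return nil
	}
	m := make(map[string]interface{})
	if f.Prefix != nil {
		m["prefix"] = *f.Prefix
	}
	return []interface{}{m}
}

func main() {
	empty := &LifecycleRuleFilter{}
	fmt.Println(flattenOld(empty) == nil) // old: filter dropped from state
	fmt.Println(len(flattenNew(empty)))   // new: one (empty) filter block kept
}
```

With the old flatten, `terraform plan` would always show the `filter` block being added; with the new one, state matches the configuration and the plan stays empty.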
199 changes: 199 additions & 0 deletions website/docs/guides/version-4-upgrade.html.md
@@ -467,6 +467,199 @@ your Terraform state and will henceforth be managed by Terraform.

Switch your Terraform configuration to the [`aws_s3_bucket_lifecycle_configuration` resource](/docs/providers/aws/r/s3_bucket_lifecycle_configuration.html) instead.

#### For Lifecycle Rules with no `prefix` previously configured

For example, given this previous configuration:

```terraform
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket"

  lifecycle_rule {
    id      = "Keep previous version 30 days, then in Glacier another 60"
    enabled = true

    noncurrent_version_transition {
      days          = 30
      storage_class = "GLACIER"
    }

    noncurrent_version_expiration {
      days = 90
    }
  }

  lifecycle_rule {
    id                                     = "Delete old incomplete multi-part uploads"
    enabled                                = true
    abort_incomplete_multipart_upload_days = 7
  }
}
```

After upgrading, this configuration will produce the following error:

```
│ Error: Value for unconfigurable attribute
│ with aws_s3_bucket.example,
│ on main.tf line 1, in resource "aws_s3_bucket" "example":
│ 1: resource "aws_s3_bucket" "example" {
│ Can't configure a value for "lifecycle_rule": its value will be decided automatically based on the result of applying this configuration.
```

Since the `lifecycle_rule` argument is now read-only, update the configuration to use the `aws_s3_bucket_lifecycle_configuration`
resource and remove any references to `lifecycle_rule` and its nested arguments from the `aws_s3_bucket` resource.

~> **Note:** When configuring the `rule.filter` configuration block in the new `aws_s3_bucket_lifecycle_configuration` resource, it is recommended to use the AWS CLI s3api [get-bucket-lifecycle-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-lifecycle-configuration.html) command
to fetch the source bucket's lifecycle configuration and determine whether the `Filter` is configured as `"Filter" : {}` or `"Filter" : { "Prefix": "" }`.
If the former is returned, configure `rule.filter` as `filter {}`. If the latter is returned, configure `rule.filter` as follows.

```terraform
resource "aws_s3_bucket" "example" {
  # ... other configuration ...
}

resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    id     = "Keep previous version 30 days, then in Glacier another 60"
    status = "Enabled"

    filter {
      prefix = ""
    }

    noncurrent_version_transition {
      noncurrent_days = 30
      storage_class   = "GLACIER"
    }

    noncurrent_version_expiration {
      noncurrent_days = 90
    }
  }

  rule {
    id     = "Delete old incomplete multi-part uploads"
    status = "Enabled"

    filter {
      prefix = ""
    }

    abort_incomplete_multipart_upload {
      days_after_initiation = 7
    }
  }
}
```
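The check described in the note above can also be scripted. A hypothetical standalone helper (not part of the provider; `suggestFilter` and its input shape are illustrative) that inspects one rule's `Filter` object from `aws s3api get-bucket-lifecycle-configuration` output and prints the matching `rule.filter` block:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// suggestFilter reads one lifecycle rule's JSON and suggests the matching
// rule.filter block. It handles only the two shapes discussed in the note
// above: "Filter": {} and "Filter": {"Prefix": ...}.
func suggestFilter(ruleJSON []byte) (string, error) {
	var rule struct {
		Filter map[string]json.RawMessage `json:"Filter"`
	}
	if err := json.Unmarshal(ruleJSON, &rule); err != nil {
		return "", err
	}
	if prefix, ok := rule.Filter["Prefix"]; ok {
		var p string
		if err := json.Unmarshal(prefix, &p); err != nil {
			return "", err
		}
		return fmt.Sprintf("filter {\n  prefix = %q\n}", p), nil
	}
	return "filter {}", nil
}

func main() {
	a, _ := suggestFilter([]byte(`{"Filter": {}}`))
	fmt.Println(a) // prints: filter {}

	b, _ := suggestFilter([]byte(`{"Filter": {"Prefix": ""}}`))
	fmt.Println(b)
}
```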

It is then recommended to run `terraform import` on each new resource to prevent data loss, e.g.

```shell
$ terraform import aws_s3_bucket_lifecycle_configuration.example example
aws_s3_bucket_lifecycle_configuration.example: Importing from ID "example"...
aws_s3_bucket_lifecycle_configuration.example: Import prepared!
Prepared aws_s3_bucket_lifecycle_configuration for import
aws_s3_bucket_lifecycle_configuration.example: Refreshing state... [id=example]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
```

#### For Lifecycle Rules with `prefix` previously configured as an empty string

For example, given this configuration:

```terraform
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket"

  lifecycle_rule {
    id      = "log-expiration"
    enabled = true
    prefix  = ""

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 180
      storage_class = "GLACIER"
    }
  }
}
```

After upgrading, this configuration will produce the following error:

```
│ Error: Value for unconfigurable attribute
│ with aws_s3_bucket.example,
│ on main.tf line 1, in resource "aws_s3_bucket" "example":
│ 1: resource "aws_s3_bucket" "example" {
│ Can't configure a value for "lifecycle_rule": its value will be decided automatically based on the result of applying this configuration.
```

Since the `lifecycle_rule` argument is now read-only, update the configuration to use the `aws_s3_bucket_lifecycle_configuration`
resource and remove any references to `lifecycle_rule` and its nested arguments from the `aws_s3_bucket` resource:

```terraform
resource "aws_s3_bucket" "example" {
  # ... other configuration ...
}

resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    id     = "log-expiration"
    status = "Enabled"

    filter {
      prefix = ""
    }

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 180
      storage_class = "GLACIER"
    }
  }
}
```

It is then recommended to run `terraform import` on each new resource to prevent data loss, e.g.

```shell
$ terraform import aws_s3_bucket_lifecycle_configuration.example example
aws_s3_bucket_lifecycle_configuration.example: Importing from ID "example"...
aws_s3_bucket_lifecycle_configuration.example: Import prepared!
Prepared aws_s3_bucket_lifecycle_configuration for import
aws_s3_bucket_lifecycle_configuration.example: Refreshing state... [id=example]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
```

#### For Lifecycle Rules with a `prefix`

For example, given this previous configuration:

```terraform
@@ -476,18 +669,22 @@ resource "aws_s3_bucket" "example" {
id = "log"
enabled = true
prefix = "log/"
tags = {
rule = "log"
autoclean = "true"
}
transition {
days = 30
storage_class = "STANDARD_IA"
}
transition {
days = 60
storage_class = "GLACIER"
}
expiration {
days = 90
}
@@ -497,6 +694,7 @@ resource "aws_s3_bucket" "example" {
id = "tmp"
prefix = "tmp/"
enabled = true
expiration {
date = "2022-12-31"
}
@@ -534,6 +732,7 @@ resource "aws_s3_bucket_lifecycle_configuration" "example" {
filter {
and {
prefix = "log/"
tags = {
rule = "log"
autoclean = "true"
@@ -151,11 +151,11 @@ The `rule` configuration block supports the following arguments:
 
 * `abort_incomplete_multipart_upload` - (Optional) Configuration block that specifies the days since the initiation of an incomplete multipart upload that Amazon S3 will wait before permanently removing all parts of the upload [documented below](#abort_incomplete_multipart_upload).
 * `expiration` - (Optional) Configuration block that specifies the expiration for the lifecycle of the object in the form of date, days and, whether the object has a delete marker [documented below](#expiration).
-* `filter` - (Optional) Configuration block used to identify objects that a Lifecycle Rule applies to [documented below](#filter).
+* `filter` - (Optional) Configuration block used to identify objects that a Lifecycle Rule applies to [documented below](#filter). If not specified, the `rule` will default to using `prefix`.
 * `id` - (Required) Unique identifier for the rule. The value cannot be longer than 255 characters.
 * `noncurrent_version_expiration` - (Optional) Configuration block that specifies when noncurrent object versions expire [documented below](#noncurrent_version_expiration).
 * `noncurrent_version_transition` - (Optional) Set of configuration blocks that specify the transition rule for the lifecycle rule that describes when noncurrent objects transition to a specific storage class [documented below](#noncurrent_version_transition).
-* `prefix` - (Optional) **DEPRECATED** Use `filter` instead. This has been deprecated by Amazon S3. Prefix identifying one or more objects to which the rule applies.
+* `prefix` - (Optional) **DEPRECATED** Use `filter` instead. This has been deprecated by Amazon S3. Prefix identifying one or more objects to which the rule applies. Defaults to an empty string (`""`) if `filter` is not specified.
 * `status` - (Required) Whether the rule is currently being applied. Valid values: `Enabled` or `Disabled`.
 * `transition` - (Optional) Set of configuration blocks that specify when an Amazon S3 object transitions to a specified storage class [documented below](#transition).
