Consider security groups with source security groups when hashing #2376
Conversation
Previously they would conflict when you had multiple security group rules with the same ingress or egress ports but different source security groups, because only the CIDR blocks were considered (and those are empty when using source security groups). Updated to include migrations (from [email protected]). Signed-off-by: Clint Shryock <[email protected]>
This will clean up #2294. It will migrate people when possible, but there may still exist scenarios (like in 2294) where some manual cleanup will be necessary. The reason is that the config given there creates 2 rules, but without the source group ID in the hash, the state file will mistakenly record the same source ID for both rules (clearly not the correct case).

cc @phinze for some review

I should add this: remotely (on AWS) the rules are created correctly. They're simply stored incorrectly in the local state file. This PR will migrate most cases (I believe) and prevent the problem in the future. Users with these conflicts can hit "Duplicate" errors. Manual cleanup is needed, unless we swallow that error in the
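The core of the fix is what goes into the rule's hash. Here is a minimal sketch of the idea with made-up types and field names (this is not the provider's actual code): include the source security group IDs next to the CIDR blocks, so two rules that differ only in their source group no longer hash to the same key.

```go
package main

import (
	"fmt"
	"hash/crc32"
	"sort"
	"strings"
)

// rule is a simplified stand-in for a security group rule; the real
// provider schema has more fields.
type rule struct {
	FromPort, ToPort int
	Protocol         string
	CIDRBlocks       []string
	SourceGroupIDs   []string
}

// hashRule derives a key from every distinguishing attribute. Including
// SourceGroupIDs is the whole point: with only CIDR blocks (which are
// empty for source-group rules), two different rules hash identically.
func hashRule(r rule) uint32 {
	cidrs := append([]string(nil), r.CIDRBlocks...)
	groups := append([]string(nil), r.SourceGroupIDs...)
	sort.Strings(cidrs) // order-independent: these are sets, not lists
	sort.Strings(groups)
	key := fmt.Sprintf("%d-%d-%s-%s-%s",
		r.FromPort, r.ToPort, r.Protocol,
		strings.Join(cidrs, ","), strings.Join(groups, ","))
	return crc32.ChecksumIEEE([]byte(key))
}

func main() {
	a := rule{ToPort: 65535, Protocol: "tcp", SourceGroupIDs: []string{"sg-aaaa"}}
	b := rule{ToPort: 65535, Protocol: "tcp", SourceGroupIDs: []string{"sg-bbbb"}}
	fmt.Println(hashRule(a) != hashRule(b)) // the two rules no longer collide
}
```

With the old scheme (CIDR blocks only), both rules above would reduce to the same key and one would overwrite the other in state, which is exactly the conflict described in #2294.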
```go
return *b[i].GroupName < *b[j].GroupName
}
```

```go
panic("mismatched security group rules, may be a terraform bug")
```
Can a user get to this panic by specifying a config improperly? Or are we protected by earlier validations from that?
Yeah let's just return false here, which will just sort last anything that's wonky looking, allowing later code to fail in a likely more meaningful way.
I'd worry that just returning false might mask an error (though theoretically that line should never be hit). If we don't want to panic, I'd suggest at least sorting the bad data in a deterministic way (otherwise you could end up in a semi-permadiff situation).
As a user of terraform, I'd personally prefer that it blow up than silently swallow this sort of error.
One thing to keep in mind (a positive thing) is that Terraform will recover panics that pop out of resource calls (as long as they're not from a goroutine within that resource call) and still handle cleanup and partial state.
@catsby Sorry, false alarm! I forgot to update the symlink to point to
* master: (23 commits)
  - typo
  - Update CHANGELOG.md
  - provider/aws: Add docs for autoscaling_policy + cloudwatch_metric_alarm
  - provider/aws: Add autoscaling_policy
  - provider/aws: Add cloudwatch_metric_alarm
  - Update CHANGELOG.md
  - Update CHANGELOG.md
  - provider/template: don't error when rendering fails in Exists
  - Update CHANGELOG.md
  - Added Azure SQL server and service support.
  - Update CHANGELOG.md
  - docs: clarify wording around destroy/apply args
  - Getting Started: Added a Next Step upon finishing install.
  - docs: add description of archive format to download page
  - docs: snapshot plugin dependencies when releasing
  - add v0.5.3 transitory deps
  - Fixes support for changing just the read / write capacity of a GSI
  - Change sleep time for DynamoDB table waits from 3 seconds to 5 seconds
  - Remove request for attribute changes
  - Fix AWS SDK imports
  - ...
* master: provider/azure: Fix SQL client name to match upstream
Thanks for checking it out @zxjinn , much appreciated. Sorry for the mess 😄
```go
return fmt.Errorf(`[WARN] A duplicate Security Group rule was found. This may be
a side effect of a now-fixed Terraform issue causing two security groups with
identical attributes but different source_security_group_ids to overwrite each
other in the state. See https://github.com/hashicorp/teraform/pull/2376 for more
```
s/teraform/terraform/ - probably my typo from earlier 😛
IT'S ALL YOUR FAULT
LGTM
…ng-with-source-sg Consider security groups with source security groups when hashing
👏 thanks @catsby !

@jszwedko wrong mention 😉

Oops! @samprakos ^^
I don't believe I'd be able to reproduce it on demand. I fixed the 18 errors by manually deleting the rules from AWS and re-running terraform plan/apply. It's almost as though the security group IDs in AWS suddenly didn't align with the ones in the tfstate file.
@samprakos that is probably different from this issue then.
I recently opened #3019 to do further refactoring of Security Group rules. If you're interested in trying it out, I would recommend a test setup, as there's a schema migration and I don't know how easy that would be to roll back.
Is there a work-around for this issue in the meantime? I get the "duplicate Security Group rule" error when trying to apply the following resources:

```hcl
resource "aws_security_group_rule" "all-master-to-master" {
  type              = "ingress"
  security_group_id = "${aws_security_group.masters-k8s.id}"
  self              = true
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
}

resource "aws_security_group_rule" "all-master-to-node" {
  type                     = "ingress"
  security_group_id        = "${aws_security_group.nodes-k8s.id}"
  source_security_group_id = "${aws_security_group.masters-k8s.id}"
  from_port                = 0
  to_port                  = 0
  protocol                 = "-1"
}
```
Still getting this on Terraform 0.7.5 >.< The configuration was built and first deployed using Terraform 0.7.5 too, so no legacy stuff; something is clearly still broken.
Ah, I tracked it down. In my case I was rewiring two aws_subnets, subnet A and subnet B, and swapping their CIDR blocks. On some terraform applies it would say it couldn't create the subnet because the CIDR already existed (one bug: Terraform isn't intelligent enough to handle this situation); on another run it complained:
Which would be the case, because the subnets are swapping around and order is important. In cases like these, the only appropriate thing would be to delete all subnets and all SGs first, and then recreate them. That's more advanced logic than Terraform currently has, but it would cater for this suite of problems.
I am facing the same error. I already have an SG with a specific CIDR block, but my terraform apply command gives the same duplicate error and quits. I can't delete the existing rule because it was already present and PRD data communication is happening through it, so I need a way to just ignore the rule if it is already present. I used lifecycle { ignore_changed = "subnetname" } but that didn't help me, so why can't it just ignore the rule if the security group CIDR is already present?
Is there a way to migrate from inline security group rules to
@genevieve better late than never, but: you can migrate, kind of.
It's quite tedious if you have a lot of rules. At least it's easy to tell if you're done: you can plan as many times as you want while moving stuff around, and when the plan is empty, you know you've got it right. Also
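That migration flow can be sketched as a dry-run script; the resource addresses and group ID below are placeholders, and the aws_security_group_rule import-ID format shown is the one documented for later AWS provider versions, so check the docs for the version you're on. The script only echoes the commands it would run:

```shell
#!/bin/sh
# Dry run: prints the terraform commands instead of executing them.
run() { echo "$@"; }

# 1. Drop the old inline-rules resource from state (AWS itself is untouched).
run terraform state rm aws_security_group.web

# 2. Re-import the group, now defined in config without inline rules.
run terraform import aws_security_group.web sg-0123456789abcdef0

# 3. Import each standalone rule; the ID encodes the rule's attributes.
run terraform import aws_security_group_rule.web_https \
  sg-0123456789abcdef0_ingress_tcp_443_443_0.0.0.0/0

# 4. Repeat the plan until it comes back empty.
run terraform plan
```

Removing `run` turns the sketch into the real commands; plan between steps so any mistake shows up as a proposed change rather than a destroyed rule.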
Hi @korfuri! I'm currently attempting to use For example, when using
Sorry if I'm just missing something obvious, but where can I find the correct addresses for these inline configs? Thanks in advance!
I think I was confused in my response above, and that's not possible.
YES that worked!!! Have a good one!
We're seeing this issue ( We've commented out all of our Steps to reproduce (let's call the security group
This issue should be revisited, as 1) it's still broken and 2) the URL provided in the error message points the user to this thread for a resolution, which does not clearly exist here.
Following up on the message above... We had the following lifecycle in our security group rule configs:
It seems obvious now, but removing this lifecycle config avoids the duplicate warning. We initially had this lifecycle policy in place as we ran into an issue where a bad

I'm not sure what the resolution is here, or if this issue belongs in this thread or a new one should be opened. We want to continue using the lifecycle for the reason above, but it also prevents us from easily managing our SG rules.
Similar issue as the one posted by @alkalinecoffee; we have configured aws_security_group and aws_security_group_rule to work for external rule declaration. However, on every update to the CIDR block we have a new security group being spun up. One of the workarounds suggested was to write a lifecycle hook.
Using Terraform 0.11.8 and 0.11.10, both give me this error. I am not using create_before_destroy. All rules are separate (not inline) resources. This seems to occur for me every time I use TF to push a change to any security group rule.

I also attempted to use timestamp() in rule descriptions, hoping that this would force attributes to be unique for each application of each rule, but it still reports this error. Which means the description attribute is NOT part of the hash.

I tried deleting all security group rules (both manually and with terraform apply), and that did not fix the problem. I tried terraform state rm for the security group rule resources in state, but that didn't fix it either, because Terraform will no longer update those rules, so the state will never get refreshed with the rule state.

The text of the error message makes no sense: when it applies to a "source security group id", it says the duplicate rule is for a CIDR range, not a source ID (you cannot specify a source security group ID when you're specifying a CIDR range).

The only eventual workaround was to delete all of the existing rules by hand in the AWS console, then re-apply the new Terraform rules (though I suppose this could probably be automated with the AWS CLI).

edit: workaround with terraform taint: . . . it actually took a while to figure out because the taint syntax is different from all other Terraform syntax. For some reason, ONE of my rules can't be found by taint, so I continue to get one "duplicate security group rule" error.
@ndp-stc's suggestion above seemed to be a viable workaround for me, but I needed to tweak the method very slightly. Maybe the taint syntax changed slightly, or my nested modules complicate things; not sure. This seemed to give me all the terraform commands I needed to fix my problem:
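For reference, the pre-0.12 taint syntax took the module as a flag rather than as part of the resource address, which is the quirk mentioned above. A dry-run sketch with made-up module and rule names (it only prints the commands; the loop is just a convenience for tainting several rules at once):

```shell
#!/bin/sh
# Dry run: prints pre-0.12 style taint commands for a set of rules.
# "network" and the rule names are placeholders for your own config.
for rule in allow_https allow_ssh allow_icmp; do
  echo terraform taint -module=network "aws_security_group_rule.${rule}"
done
```

Dropping the `echo` runs the commands for real; a subsequent apply then recreates the tainted rules.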
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Supersedes #2318
Fixes #2294
Original message:
Previously they would conflict when you had multiple security group rules
with the same ingress or egress ports but different source security
groups, because only the CIDR blocks were considered (and those are empty
when using source security groups).
Updates:
Remediation: If you encounter an error regarding duplicate rules, you will need to identify and manually remove the offending/missing security group rules from the Security Group, then re-plan and apply with Terraform.