
Add support for policy_name field in Placement Policy #8475

Merged · 2 commits · Aug 1, 2023

Conversation

morshielt (Contributor)

Custom Placement Policy is a GKE node pool feature that allows users to apply a custom resource placement policy to a node pool.

If this PR is for Terraform, I acknowledge that I have:

  • Searched through the issue tracker for an open issue that this either resolves or contributes to, commented on it to claim it, and written "fixes {url}" or "part of {url}" in this PR description. If there were no relevant open issues, I opened one and commented that I would like to work on it (not necessary for very small changes).
  • Ensured that all new fields I added that can be set by a user appear in at least one example (for generated resources) or third_party test (for handwritten resources or update tests).
  • Generated Terraform providers, and ran make test and make lint in the generated providers to ensure it passes unit and linter tests.
  • Ran relevant acceptance tests using my own Google Cloud project and credentials (If the acceptance tests do not yet pass or you are unable to run them, please let your reviewer know).
  • Read the "Write release notes" guide before writing my release note below.

Release Note Template for Downstream PRs (will be copied)

container: added `placement_policy.policy_name` field to `google_container_node_pool` resource

Note: I've run the added acceptance tests successfully, but `make testacc TEST=./google TESTARGS='-run=TestAccContainerNodePool'` is partially failing. I'm running into some issues related to my project, e.g. `INVALID_ARGUMENT: disabling pod cidr overprovision is not allowed for this project` or `Error: Error waiting for creating GKE cluster: The network "default" does not have available private IP space in 10.0.0.0/8 to reserve a /14 block for containers for cluster...`.
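
For anyone who wants to try the new field locally, here is a minimal configuration sketch. The cluster name, node pool name, and `my-compact-policy` resource policy below are placeholders (the compute resource policy is assumed to already exist), not values taken from this PR:

```hcl
resource "google_container_node_pool" "with_custom_placement" {
  name       = "placement-pool"   # placeholder name
  cluster    = "example-cluster"  # assumes an existing GKE cluster
  location   = "us-central1-a"
  node_count = 2

  node_config {
    # Compact placement generally requires a supported machine family (e.g. C2).
    machine_type = "c2-standard-4"
  }

  placement_policy {
    type = "COMPACT"
    # New in this PR: reference a user-supplied compute resource policy by name.
    policy_name = "my-compact-policy"
  }
}
```

Per the field documentation added in this PR, `policy_name` refers to the name of a custom resource policy supplied by the user.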

@modular-magician (Collaborator)

Hello! I am a robot. It looks like you are a community contributor. @slevenick, a repository maintainer, has been assigned to review your changes. If you have not received review feedback within 2 business days, please leave a comment on this PR asking them to take a look.

You can help make sure that review is quick by doing a self-review and by running impacted tests locally.

@kawych (Member) commented Jul 28, 2023

LGTM

@modular-magician (Collaborator)

Hi there, I'm the Modular magician. I've detected the following information about your changes:

Diff report

Your PR generated some diffs in downstreams - here they are.

Terraform GA: Diff ( 4 files changed, 135 insertions(+), 2 deletions(-))
Terraform Beta: Diff ( 4 files changed, 133 insertions(+))
TF Conversion: Diff ( 2 files changed, 3 insertions(+), 3 deletions(-))

@modular-magician (Collaborator)

Tests analytics

Total tests: 2892
Passed tests: 2587
Skipped tests: 302
Affected tests: 3

Action taken

Found 3 affected test(s) by replaying old test recordings. Starting RECORDING based on the most recent commit. Affected tests:
TestAccContainerAwsNodePool_BetaBasicHandWritten|TestAccContainerCluster_customPlacementPolicy|TestAccContainerNodePool_customPlacementPolicy

Get to know how VCR tests work

@modular-magician (Collaborator)

$\textcolor{green}{\textsf{Tests passed during RECORDING mode:}}$
TestAccContainerCluster_customPlacementPolicy[Debug log]
TestAccContainerNodePool_customPlacementPolicy[Debug log]

Rerun these tests in REPLAYING mode to catch issues

$\textcolor{green}{\textsf{No issues found for passed tests after REPLAYING rerun.}}$


$\textcolor{red}{\textsf{Tests failed during RECORDING mode:}}$
TestAccContainerAwsNodePool_BetaBasicHandWritten[Error message] [Debug log]

$\textcolor{red}{\textsf{Please fix these to complete your PR.}}$
View the build log or the debug log for each test

@slevenick (Contributor) left a comment

Looks good overall and tests pass. Need to test update support

@@ -262,6 +262,10 @@ cluster.
Specifying COMPACT placement policy type places node pool's nodes in a closer
physical proximity in order to reduce network latency between nodes.

* `policy_name` - (Optional) If set, refers to the name of a custom resource policy supplied by the user.

@slevenick (Contributor)

This should also be documented in the container_cluster markdown I believe? Though I'm not seeing the whole placement_policy object in there, maybe it was missed when it was added

@morshielt (Contributor, Author)

The placement_policy is a node pool setting - it can be passed during cluster creation (as documented here), but it only impacts the default node pool created with the cluster. Should it be included in the container_cluster markdown then?

@morshielt force-pushed the placement_policy_name branch from 38be41f to ac799f4 on July 31, 2023 at 11:51
@morshielt (Contributor, Author)

Thank you for the review, PTAL :)

@modular-magician (Collaborator)

Hi there, I'm the Modular magician. I've detected the following information about your changes:

Diff report

Your PR generated some diffs in downstreams - here they are.

Terraform GA: Diff ( 4 files changed, 136 insertions(+), 2 deletions(-))
Terraform Beta: Diff ( 4 files changed, 134 insertions(+))
TF Conversion: Diff ( 2 files changed, 3 insertions(+), 3 deletions(-))

@modular-magician (Collaborator)

Tests analytics

Total tests: 2911
Passed tests: 2607
Skipped tests: 302
Affected tests: 2

Action taken

Found 2 affected test(s) by replaying old test recordings. Starting RECORDING based on the most recent commit. Affected tests:
TestAccSpannerInstanceIamPolicy|TestAccContainerAwsNodePool_BetaBasicHandWritten

Get to know how VCR tests work

@modular-magician (Collaborator)

$\textcolor{green}{\textsf{Tests passed during RECORDING mode:}}$
TestAccSpannerInstanceIamPolicy[Debug log]

Rerun these tests in REPLAYING mode to catch issues

$\textcolor{green}{\textsf{No issues found for passed tests after REPLAYING rerun.}}$


$\textcolor{red}{\textsf{Tests failed during RECORDING mode:}}$
TestAccContainerAwsNodePool_BetaBasicHandWritten[Error message] [Debug log]

$\textcolor{red}{\textsf{Please fix these to complete your PR.}}$
View the build log or the debug log for each test

@slevenick (Contributor)

/gcbrun

I'm seeing an odd failure, maybe retrying will fix it

@modular-magician (Collaborator)

Hi there, I'm the Modular magician. I've detected the following information about your changes:

Diff report

Your PR generated some diffs in downstreams - here they are.

Terraform GA: Diff ( 4 files changed, 136 insertions(+), 2 deletions(-))
Terraform Beta: Diff ( 4 files changed, 134 insertions(+))
TF Conversion: Diff ( 2 files changed, 3 insertions(+), 3 deletions(-))

@modular-magician (Collaborator)

Tests analytics

Total tests: 2911
Passed tests: 2608
Skipped tests: 302
Affected tests: 1

Action taken

Found 1 affected test(s) by replaying old test recordings. Starting RECORDING based on the most recent commit. Affected tests:
TestAccContainerAwsNodePool_BetaBasicHandWritten

Get to know how VCR tests work

@modular-magician (Collaborator)

$\textcolor{red}{\textsf{Tests failed during RECORDING mode:}}$
TestAccContainerAwsNodePool_BetaBasicHandWritten[Error message] [Debug log]

$\textcolor{red}{\textsf{Please fix these to complete your PR.}}$
View the build log or the debug log for each test

@slevenick (Contributor) left a comment

I'm not seeing this in the v1 API at https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.zones.clusters.nodePools

Is this feature available in GA, and maybe the docs aren't updated, or is it only in beta right now?

If it's only beta, all this code will need to be inside the `<% unless version == 'ga' -%>` tags

@morshielt (Contributor, Author)

> I'm not seeing this in the v1 API at https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.zones.clusters.nodePools
>
> Is this feature available in GA, and maybe the docs aren't updated, or is it only in beta right now?
>
> If it's only beta all this code will need to be inside the <% unless version == 'ga' -%> tags

The feature is available in GA, it actually is listed in the v1 API: https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters.nodePools#NodePool.PlacementPolicy :)
Also here's a past PR graduating it to GA: #7039

@slevenick (Contributor)

> > I'm not seeing this in the v1 API at https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.zones.clusters.nodePools
> >
> > Is this feature available in GA, and maybe the docs aren't updated, or is it only in beta right now?
> >
> > If it's only beta all this code will need to be inside the <% unless version == 'ga' -%> tags
>
> The feature is available in GA, it actually is listed in the v1 API: https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters.nodePools#NodePool.PlacementPolicy :) Also here's a past PR graduating it to GA: #7039

Oh, strange it didn't show up on the overview page. Thanks for checking!

ron-gal pushed a commit to ron-gal/magic-modules that referenced this pull request Aug 17, 2023