TEP-0135: Coscheduling PipelineRun pods Implementation #6740
/assign
Part of [tektoncd#6740][tektoncd#6740]. [TEP-0135][tep-0135] introduces a feature that allows a cluster operator to ensure that all of a PipelineRun's pods are scheduled to the same node. Before this commit, the PipelineRun reconciler created a PVC for each `VolumeClaimTemplate`-backed workspace and mounted the PVCs to the Affinity Assistant to avoid PV availability-zone conflicts. This implementation works for `AffinityAssistantPerWorkspace`, but it introduces an availability-zone conflict in the `AffinityAssistantPerPipelineRun` mode, since we cannot enforce that all the PVCs are created in the same availability zone. Instead of directly creating a PVC for each PipelineRun workspace backed by a `VolumeClaimTemplate`, this commit sets one `VolumeClaimTemplate` per PVC workspace in the Affinity Assistant StatefulSet spec, which ensures that all of the StatefulSet's volumes are provisioned on the same node and availability zone. This commit only refactors the current implementation in favor of the `AffinityAssistantPerPipelineRun` feature; there is no functionality change. The `AffinityAssistantPerPipelineRun` feature will be added in follow-up PRs. [tektoncd#6740]: tektoncd#6740 [tep-0135]: https://github.com/tektoncd/community/blob/main/teps/0135-coscheduling-pipelinerun-pods.md
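A minimal sketch of the idea, using the standard Kubernetes API types: rather than pre-creating PVCs, each workspace's claim template is copied into the StatefulSet's `volumeClaimTemplates`, so the StatefulSet controller provisions every volume alongside its single replica. The helper name, labels, and image are illustrative, not Tekton's actual identifiers.

```go
package affinityassistant

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildAffinityAssistantStatefulSet builds a one-replica StatefulSet whose
// volumeClaimTemplates carry one entry per PVC-backed workspace, so all
// volumes are provisioned on the node that hosts the assistant pod.
func buildAffinityAssistantStatefulSet(name string, claimTemplates []corev1.PersistentVolumeClaim) *appsv1.StatefulSet {
	replicas := int32(1)
	labels := map[string]string{
		"app.kubernetes.io/component": "affinity-assistant",
		"app.kubernetes.io/instance":  name,
	}
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: name,
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			// One claim template per PVC-backed workspace, provisioned together.
			VolumeClaimTemplates: claimTemplates,
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "affinity-assistant",
						Image: "registry.example.com/nop:latest", // placeholder image
					}},
				},
			},
		},
	}
}
```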
@QuanZhang-William is this a duplicate of #6543?
Not really. I saw some discussions in #6543, but I plan to use this issue to track only the implementation progress, so that it stays separate from the discussion.
Part of [tektoncd#6740][tektoncd#6740]. [TEP-0135][tep-0135] introduces a feature that allows a cluster operator to ensure that all of a PipelineRun's pods are scheduled to the same node. This commit introduces a new feature flag, `coscheduling`, which works together with the `disable-affinity-assistant` feature flag to determine the Affinity Assistant behavior. The usage of the new feature flag will be added in follow-up PRs. The details of the `coscheduling` feature flag can be found in the [Configuration][configuration] section of TEP-0135; the details of the `disable-affinity-assistant` feature flag can be found in the [Upgrade and Migration Strategy][strategy] section of TEP-0135. NOTE: this feature is WIP, please do not use it yet. /kind feature [tektoncd#6740]: tektoncd#6740 [tep-0135]: https://github.com/tektoncd/community/blob/main/teps/0135-coscheduling-pipelinerun-pods.md [configuration]: https://github.com/tektoncd/community/blob/main/teps/0135-coscheduling-pipelinerun-pods.md#configuration [strategy]: https://github.com/tektoncd/community/blob/main/teps/0135-coscheduling-pipelinerun-pods.md#configuration
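To illustrate how two flags can jointly select a behavior, here is a rough, non-authoritative sketch. The behavior names come from the commits above, but the exact flag values and the mapping are assumptions; the authoritative table is in TEP-0135's Configuration and Upgrade and Migration Strategy sections.

```go
package affinityassistant

// AffinityAssistantBehavior names mirror those mentioned in the linked commits.
type AffinityAssistantBehavior string

const (
	AffinityAssistantPerWorkspace   AffinityAssistantBehavior = "AffinityAssistantPerWorkspace"
	AffinityAssistantPerPipelineRun AffinityAssistantBehavior = "AffinityAssistantPerPipelineRun"
	AffinityAssistantDisabled       AffinityAssistantBehavior = "AffinityAssistantDisabled"
)

// getBehavior sketches one plausible way the legacy disable-affinity-assistant
// flag and the new coschedule/coscheduling flag could be combined during the
// migration window. Assumed flag values: "workspaces", "pipelineruns",
// "isolate-pipelinerun", "disabled".
func getBehavior(disableAffinityAssistant bool, coschedule string) AffinityAssistantBehavior {
	if !disableAffinityAssistant {
		// Legacy flag still honoured: keep the per-workspace assistant.
		return AffinityAssistantPerWorkspace
	}
	switch coschedule {
	case "pipelineruns", "isolate-pipelinerun":
		return AffinityAssistantPerPipelineRun
	default: // "disabled" or unset
		return AffinityAssistantDisabled
	}
}
```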
Part of [tektoncd#6740][tektoncd#6740]. [TEP-0135][tep-0135] introduces a feature that allows a cluster operator to ensure that all of a PipelineRun's pods are scheduled to the same node. This commit implements the `coschedule-pipelineruns` scheduling mode, where all the pods of a `PipelineRun` are scheduled to the same node. It renames the current `createOrUpdateAffinityAssistants` function to `createOrUpdateAffinityAssistantsPerWorkspace` and adds a new function, `createOrUpdateAffinityAssistantsPerPipelineRun`, for the `coschedule-pipelineruns` scheduling mode (with some refactoring). There is no functionality change to the existing `createOrUpdateAffinityAssistants` function. The `createOrUpdateAffinityAssistantsPerPipelineRun` function is implemented but not yet used; its usage will be added in follow-up PRs. /kind feature [tektoncd#6740]: tektoncd#6740 [tep-0135]: https://github.com/tektoncd/community/blob/main/teps/0135-coscheduling-pipelinerun-pods.md
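A simplified sketch of the structural difference between the two functions, under the assumption that the real code builds and submits StatefulSets through a creation helper (stubbed here as `createStatefulSetFn`); the naming scheme is illustrative only.

```go
package affinityassistant

import (
	"context"
	"fmt"
)

// createStatefulSetFn stands in for the code that builds and submits an
// affinity assistant StatefulSet for the given claim-template workspaces.
type createStatefulSetFn func(ctx context.Context, name string, claimTemplateWorkspaces []string) error

// createOrUpdatePerWorkspace mirrors the pre-existing behaviour:
// one affinity assistant StatefulSet per PVC-backed workspace.
func createOrUpdatePerWorkspace(ctx context.Context, pipelineRunName string, workspaces []string, create createStatefulSetFn) error {
	for _, w := range workspaces {
		name := fmt.Sprintf("affinity-assistant-%s-%s", pipelineRunName, w)
		if err := create(ctx, name, []string{w}); err != nil {
			return err
		}
	}
	return nil
}

// createOrUpdatePerPipelineRun sketches the coschedule-pipelineruns mode: a
// single StatefulSet carries every workspace's claim template, so all of the
// PipelineRun's pods can share one node via affinity to that one assistant.
func createOrUpdatePerPipelineRun(ctx context.Context, pipelineRunName string, workspaces []string, create createStatefulSetFn) error {
	return create(ctx, "affinity-assistant-"+pipelineRunName, workspaces)
}
```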
Part of [tektoncd#6740]. [TEP-0135][tep-0135] introduces a feature that allows a cluster operator to ensure that all of a PipelineRun's pods are scheduled to the same node. This commit consumes the functions added in [tektoncd#6819] and implements end-to-end support for the `Coschedule:PipelineRuns` coschedule mode, where all of a PipelineRun's pods are scheduled to the same node. /kind feature [tektoncd#6819]: tektoncd#6819 [tektoncd#6740]: tektoncd#6740 [tep-0135]: https://github.com/tektoncd/community/blob/main/teps/0135-coscheduling-pipelinerun-pods.md
Part of [tektoncd#6740][tektoncd#6740], developed based on Priti's [prototype][prototype]; it partially completes the PVC deletion behavior [discussion][discussion]. Prior to this commit, the PVCs created from a PipelineRun's `VolumeClaimTemplate` were not automatically deleted when the owning `PipelineRun` completed. This commit updates the `cleanupAffinityAssistantsAndPVCs` function to remove the `kubernetes.io/pvc-protection` finalizer (so that the PVC is allowed to be deleted even though the pod consuming it has not been deleted yet); the function then explicitly deletes such PVCs when cleaning up the Affinity Assistants at PipelineRun completion time. This change is NOT applied to the `coschedule: workspaces` mode because of backward-compatibility concerns; see this [discussion][discussion] for more details. [tektoncd#6740]: tektoncd#6740 [prototype]: tektoncd#6635 [discussion]: tektoncd#6741 (comment)
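A rough client-go sketch of that cleanup step, not the actual Tekton code: clear the PVC's finalizers so volume protection no longer blocks deletion, then delete the claim. It assumes the only finalizer present is the protection one, and it omits the check that the owning PipelineRun has really completed.

```go
package affinityassistant

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// deletePipelineRunPVC removes the PVC's finalizers (which include
// kubernetes.io/pvc-protection) via a merge patch, then deletes the PVC.
func deletePipelineRunPVC(ctx context.Context, kubeClient kubernetes.Interface, namespace, pvcName string) error {
	removeFinalizers := []byte(`{"metadata":{"finalizers":null}}`)
	if _, err := kubeClient.CoreV1().PersistentVolumeClaims(namespace).Patch(
		ctx, pvcName, types.MergePatchType, removeFinalizers, metav1.PatchOptions{}); err != nil {
		return err
	}
	return kubeClient.CoreV1().PersistentVolumeClaims(namespace).Delete(ctx, pvcName, metav1.DeleteOptions{})
}
```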
Part of [tektoncd#6740]. [TEP-0135][tep-0135] introduces a feature that allows a cluster operator to ensure that all of a PipelineRun's pods are scheduled to the same node. This commit implements the `coschedule: isolate-pipelinerun` coschedule mode by modifying the [PodAntiAffinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) terms in the Affinity Assistant StatefulSets, which enforces that only one PipelineRun runs on a node at a time. /kind feature [tektoncd#6740]: tektoncd#6740 [tep-0135]: https://github.com/tektoncd/community/blob/main/teps/0135-coscheduling-pipelinerun-pods.md
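For illustration, an anti-affinity term of roughly this shape on the assistant pod keeps two affinity assistants (and therefore two PipelineRuns) off the same node. The label key/value are assumptions; Tekton's real assistant labels may differ.

```go
package affinityassistant

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isolatePipelineRunAffinity returns a required anti-affinity term that
// prevents this affinity assistant pod from sharing a node with any other
// affinity assistant pod, so at most one PipelineRun runs per node.
func isolatePipelineRunAffinity() *corev1.Affinity {
	return &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{
						"app.kubernetes.io/component": "affinity-assistant",
					},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
}
```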
Part of [tektoncd#6740]. [TEP-0135][tep-0135] introduces a feature that allows a cluster operator to ensure that all of a PipelineRun's pods are scheduled to the same node. This commit consumes the functions added in [tektoncd#6819] to implement end-to-end support for the `Coschedule:PipelineRuns` mode, where all of a PipelineRun's pods are scheduled to the same node, and the `Coschedule:isolate-pipelinerun` mode, where only one PipelineRun is allowed to run on a node at a time. /kind feature [tektoncd#6819]: tektoncd#6819 [tektoncd#6740]: tektoncd#6740 [tep-0135]: https://github.com/tektoncd/community/blob/main/teps/0135-coscheduling-pipelinerun-pods.md
Part of [tektoncd#6740][tektoncd#6740]. This commit updates the Affinity Assistant-related documentation to reflect recent changes to the Affinity Assistant modes. It also adds a chart summarizing how to configure the Affinity Assistant modes using the `coschedule` and `disable-affinity-assistant` feature flags. /kind documentation [tektoncd#6740]: tektoncd#6740
Part of [tektoncd#6740][tektoncd#6740] and closes [tektoncd#6915]. Prior to this commit, the `createOrUpdateAffinityAssistantsAndPVCs` function attempted to create all Affinity Assistant StatefulSets and returned aggregated errors. This could waste time and resources when executing a PipelineRun that is going to fail anyway. This commit switches it to a "fail fast" strategy, where the function returns as soon as the first error is encountered. This commit also refactors the original `CreatePVCsForWorkspacesWithoutAffinityAssistant` function (renamed to `CreatePVCFromVolumeClaimTemplate`) and its usages to improve readability, since the PVC creation logic now depends on the `AffinityAssistantBehavior`. /kind feature [tektoncd#6740]: tektoncd#6740 [tektoncd#6915]: tektoncd#6915
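The "fail fast" change boils down to a loop pattern like the following sketch (the creation call is stubbed; names are illustrative): return on the first failure instead of collecting every error before giving up.

```go
package affinityassistant

import "context"

// createAll attempts to create each affinity assistant StatefulSet in order
// and returns immediately on the first failure, so a PipelineRun that is
// going to fail does not keep consuming time and cluster resources.
func createAll(ctx context.Context, names []string, createOne func(context.Context, string) error) error {
	for _, name := range names {
		if err := createOne(ctx, name); err != nil {
			return err // fail fast: no point creating the remaining StatefulSets
		}
	}
	return nil
}
```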
TEP-0135 is now implemented 😄 /close
@QuanZhang-William: Closing this issue. In response to this:
Nominated by @pritidesai, @dibyom, @vdemeester and @afrittoli (Thanks!!). I have been focusing on Tekton Pipeline development since March 2023 and have been involved in several projects: - [TEP-0135: Coschedule PipelineRun Pods](https://github.com/tektoncd/community/blob/main/teps/0135-coscheduling-pipelinerun-pods.md) - [TEP-0115: Catalog git-based versioning](https://github.com/tektoncd/community/blob/main/teps/0115-tekton-catalog-git-based-versioning.md) - [TEP-0133: Configure default resolver](https://github.com/tektoncd/community/blob/main/teps/0133-configure-default-resolver.md) Here is the list of my PRs: - https://github.com/tektoncd/pipeline/pulls/QuanZhang-William Here is the list of PRs that I have reviewed: - https://github.com/tektoncd/pipeline/pulls?q=is%3Aopen+is%3Apr+reviewed-by%3AQuanZhang-William++-author%3AQuanZhang-William In-depth knowledge of the specific area is demonstrated in the Affinity Assistant and the Coschedule PipelineRun Pods feature: - tektoncd/pipeline#6740
In TEP-0135, we introduced a feature which allows a cluster operator to ensure that all of a PipelineRun's pods will be scheduled to the same node.
This issue tracks the implementation for this TEP.
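The core mechanism behind the coscheduling is pod affinity to the PipelineRun's Affinity Assistant. As a hedged sketch (label key/value and function name are assumptions, not Tekton's exact identifiers), each TaskRun pod carries a required affinity term like the one below, so the scheduler places all of the PipelineRun's pods on the node that already hosts the assistant and its volumes.

```go
package affinityassistant

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podAffinityToAssistant returns a required pod affinity to the named
// affinity assistant, pinning the pod to the assistant's node.
func podAffinityToAssistant(assistantName string) *corev1.Affinity {
	return &corev1.Affinity{
		PodAffinity: &corev1.PodAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{
						"app.kubernetes.io/instance": assistantName,
					},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
}
```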
The implementation PRs:
Related issues:
CreatePVCsForWorkspacesWithoutAffinityAssistant #6915
Misc: