Add volumeClaimTemplate as a Workspace volume source #2326
Conversation
Hi @jlpettersson. Thanks for your PR. I'm waiting for a tektoncd member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
/hold
Thanks very much for the PR, it's exciting to see this come together! I'm adding the hold just so that we can raise this with the Working Group on Wednesday and share it more widely for any feedback.
There were a few lint errors, which is why the build tests failed.
The integration tests look like they timed out. Re-running to check if this reproduces.
/test pull-tekton-pipeline-integration-tests
/test pull-tekton-pipeline-build-tests
Thank you very much for adding this! I've added some comments after a quick read-through. I plan to check out this PR locally and give it a try over the next few days as well.

It would be useful to include some examples in the examples/v1beta1/taskruns and examples/v1beta1/pipelineruns directories (a sketch of such an example follows below). These YAML files are executed as part of our E2E suite, so they're a good way to regression-test behaviour, and they function as useful documentation too!

It would also be great to introduce explicit E2E testing to our test directory that checks scenarios like:
- what happens when multiple tasks in a single pipeline share a claim name + workspace binding name + owner name (but only if this is a situation that can actually happen, of course)
- how a PipelineRun / TaskRun fails when a volume claim template cannot be created
// GetPersistentVolumeClaimName gets the name of the PersistentVolumeClaim for a Workspace and PipelineRun or TaskRun. claim
// must be a PersistentVolumeClaim from a WorkspaceBinding's volumeClaimTemplate.
func GetPersistentVolumeClaimName(claim *corev1.PersistentVolumeClaim, wb v1alpha1.WorkspaceBinding, owner metav1.OwnerReference) string {
Is this guaranteed to be unique? I'm trying to figure out the possible combinations and I have a hunch that there could be two Tasks in a Pipeline that share a PVC Claim Name, WorkspaceBinding Name, and PipelineRun (owner) Name. In this case would there be potential for collision? And is that problematic?
Thanks for reviewing! There are two cases; I tried to explain them in my PR description under "How it works". When the volumeClaimTemplate is declared for a PipelineRun (owner), it is seen as a "PersistentVolumeClaim" workspace by all TaskRuns in the Pipeline. The function taskWorkspaceByWorkspaceVolumeSource() in pipelinerun.go applies the template and passes it as a PersistentVolumeClaim to the TaskRuns; this happens after the PVC is created from the template, after the IsBeforeFirstTaskRun() check. I can explain this more in person if you want.
A PipelineRun has a unique name, which will be owner.Name here. There can also be multiple workspaces per PipelineRun, but they must have unique names. Hence the PVC gets a unique name here.
/retest
I think the integration tests are failing because of #2337 at the moment. This is something I'd like to fix for the next point release, which is supposed to happen tomorrow. Will try to get this resolved asap.
/hold cancel
Thanks for presenting to the WG! Am going to work on fixing these integration tests and then will get back to this review.
Looking good, only a few small comments (on naming).
`PersistentVolumeClaim` volumes are a good choice for sharing data among `Tasks` within a `Pipeline`.

#### `volumeClaimTemplate`
Any reason to name it volumeClaimTemplate and not persistentVolumeClaim? (The main reason for this question is whether we should be consistent with the persistentVolumeClaim field.)
I borrowed the whole syntax and idea with volumeClaimTemplate from StatefulSet (https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#components) and wanted to be consistent with that terminology. The persistentVolumeClaim is still there for workspaces and can be used if you want to use an existing PVC.
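For context, the two fields bind a workspace differently; a minimal sketch (workspace and claim names are illustrative):

```yaml
workspaces:
  - name: shared-data
    persistentVolumeClaim:      # reference an existing PVC by name
      claimName: my-existing-pvc
  - name: scratch
    volumeClaimTemplate:        # create a new PVC from this spec for each run
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
```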
Fair enough. I don't have a strong opinion on this, although I would tend to prefer being consistent in naming with our own API rather than with the k8s one 😛
agree with the persistence part, makes lots of sense...
// HasVolumeClaimTemplate returns true if the TaskRun contains a volumeClaimTemplate that is
// used for creating a PersistentVolumeClaim with an OwnerReference for each run
func (tr *TaskRun) HasVolumeClaimTemplate() bool {
Shouldn't those changes also be on v1beta1? (even though v1alpha1 is the one used by the controller)
Good catch! I should add it to v1beta1 as well, yes.
I added the changes I made on v1alpha1 to v1beta1 and also squashed my commits. Thanks for reviewing.
Btw, not sure where to ask this, but can anyone help with how you would use this in the CLI with tkn, passing the name of a template which already exists in the system (not a template file)? E.g., I have tried this, but it continuously says it cannot find the PVC (note: "default-netapp-disk" exists in OpenShift as a VolumeClaimTemplate, and it is the default one as well):

tkn pipeline start pipeline-name -w name=shared-data,claimName=VolumeClaimTemplate,storageClass=default-netapp-disk,accessModes=ReadWriteOnce,storage=1Gi,volumeMode=Filesystem ....

OUTPUT:
message: >-
  pod status "PodScheduled":"False"; message: "0/12 nodes are available:
  12 persistentvolumeclaim "VolumeClaimTemplate" not found. preemption:
  0/12 nodes are available: 12 Preemption is not helpful for scheduling."

I've also tried using claimName=default-netapp-disk, and many other things, though nothing works when it comes to sending the VolumeClaimTemplate and its details through the tkn CLI. This is very critical to me, and I have not really found any useful info during this last week while working on it...
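For reference (this is not an answer from the thread): a workspace's volumeClaimTemplate is embedded in the PipelineRun itself rather than referenced by name the way claimName references an existing PVC, so claimName cannot point at a template. A minimal sketch of the equivalent PipelineRun YAML, with names assumed from the comment above:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: pipeline-name-run-
spec:
  pipelineRef:
    name: pipeline-name
  workspaces:
    - name: shared-data
      volumeClaimTemplate:      # embedded template; no pre-existing PVC is referenced
        spec:
          storageClassName: default-netapp-disk
          accessModes:
            - ReadWriteOnce
          volumeMode: Filesystem
          resources:
            requests:
              storage: 1Gi
```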
jlpettersson force-pushed from e7cdea6 to c7ce3da
/lgtm
/meow
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: sbwsg

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.
Tried everything out locally and it worked like a charm!
/test pull-tekton-pipeline-integration-tests
Thanks for adding this @jlpettersson!! I'm really excited to try it out :D
@jlpettersson any chance you'd be up for adding a bit to the docs about how to use the template, e.g. what the syntax is? Even just a brief example would be great! Lemme know if not and I'll either do it or open an issue :)
@bobcatfish yes, I could improve the docs. I'll have a look tomorrow. In addition to the example in the PR description, there is one example in the v1beta1 pipelineruns: https://github.com/tektoncd/pipeline/blob/master/examples/v1beta1/pipelineruns/workspace-from-volumeclaimtemplate.yaml
Add a short example and link to a full example of using volumeClaimTemplate as a volume source in a workspace. Requested in a comment on PR #2326 (comment) that fixes #1986
Jonas has recently become a regular contributor. He started with adding a minor [_missing_ `omitempty`](tektoncd/pipeline#2301) and then [proposed some ideas](tektoncd/pipeline#1986 (comment)) around workspaces and PersistentVolumeClaim creation, and continued to [elaborate on those ideas](tektoncd/pipeline#1986 (comment)). A sunny day a few days later, he also submitted an [extensive implementation of volumeClaimTemplate](tektoncd/pipeline#2326), corresponding to the idea discussions. A few days later he submitted a [small refactoring PR](tektoncd/pipeline#2392), and he also listened to community members who [proposed changes](tektoncd/pipeline#2450) to his volumeClaimTemplate implementation and did an [implementation of that proposal](tektoncd/pipeline#2453). A rainy day, he also wrote [technical documentation about PVCs](tektoncd/pipeline#2521), including adding an example that caused _flaky_ integration tests for the whole community during multiple days. When he understood his mistake, he submitted a [removal of the example](tektoncd/pipeline#2546) that caused the flaky tests. He has also put his toe into the Tekton Catalog and [contributed to the buildah task](tektoncd/pipeline#2546). This has been followed by more PRs, mostly to the Pipeline project:
- tektoncd/pipeline#2460
- tektoncd/pipeline#2491
- tektoncd/pipeline#2502
- tektoncd/pipeline#2506
- tektoncd/pipeline#2632
- tektoncd/pipeline#2633
- tektoncd/pipeline#2634
- tektoncd/pipeline#2636
- tektoncd/pipeline#2601
- tektoncd/pipeline#2630
Jonas is excited about the great community around Tekton and the project! He would now like to join the org.
404 Page not found...
Changes
An existing PersistentVolumeClaim can currently be used as a Workspace volume source. There are two ways of using an existing PVC as a volume:

- Reuse an existing PVC
- Create a new PVC before each PipelineRun

There are disadvantages to reusing the same PVC for every PipelineRun:

- You need to clean the PVC at the end of the Pipeline
- All Tasks using the workspace will be scheduled to the node where the PV is bound
- Concurrent PipelineRuns may interfere; an artifact or file from one PipelineRun may slip into or be used in another PipelineRun, with very few audit tracks

There are also disadvantages to manually creating a new PVC before each PipelineRun:

- This can not (easily) be done declaratively
- This is hard to do programmatically, because it is hard to know when to delete the PVC. The PipelineRun can not be set as OwnerReference since the PVC must be created first

This PR adds volumeClaimTemplate as a volume source for workspaces. This has several advantages:

- The syntax is used in the k8s StatefulSet and other k8s projects, so it is familiar in the kubernetes ecosystem
- It is possible to declare declaratively that a PVC should be created for each PipelineRun, e.g. from a TriggerTemplate
- The user can choose a storageClass (or omit it to get the cluster default) to e.g. get a faster SSD volume, or a volume compatible with e.g. Windows
- The user can adapt the size to the job, e.g. use 5Gi for apps that contain machine learning models, or 1Gi for microservice apps. It can be changed on demand in a configuration that lives in the user's namespace, e.g. in a TriggerTemplate
- The size affects the storage quota that is set on the namespace, and it may affect billing and cost depending on the cluster environment
- The PipelineRun or TaskRun with the template is created first and is used as the OwnerReference on the PVC. That means the PVC will have the same lifecycle as the PipelineRun
How it works
When using a volumeClaimTemplate workspace on a PipelineRun:

- Before the first TaskRun is created, a PVC from the template is created - with the PipelineRun as OwnerReference.
- The volumeClaimTemplate workspace is translated to a PersistentVolumeClaim workspace when creating TaskRuns.
- When the PipelineRun is deleted, the PVC is also deleted.

When using a volumeClaimTemplate workspace on a TaskRun not initiated from a PipelineRun:

- Before the Pod is created, a PVC from the template is created - with the TaskRun as OwnerReference.
- The volumeClaimTemplate workspace is translated to a PersistentVolumeClaim workspace when creating the Pod.
- When the TaskRun is deleted, the PVC is also deleted.

Example usage:
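The example YAML was not captured in this export; a minimal sketch of a PipelineRun using a volumeClaimTemplate workspace (the pipeline and workspace names are illustrative, not from the original example):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: my-pipelinerun-
spec:
  pipelineRef:
    name: my-pipeline
  workspaces:
    - name: shared-data           # must match a workspace declared in the Pipeline
      volumeClaimTemplate:        # one PVC is created per run, owned by the PipelineRun
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
```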
Related to #1986
See also:
- #2174
- #2218
- tektoncd/triggers#476
- tektoncd/triggers#482
- kubeflow/kfp-tekton#51
Submitter Checklist
These are the criteria that every PR should meet; please check them off as you review them:
Reviewer Notes
If API changes are included, additive changes must be approved by at least two OWNERS and backwards incompatible changes must be approved by more than 50% of the OWNERS, and they must first be added in a backwards compatible way.
Release Notes