task has failed: more than one PersistentVolumeClaim is bound #3480
Comments
Could these two workspaces use the same PersistentVolumeClaim? I see one of them is using a …
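For illustration, a minimal sketch of what sharing a single PersistentVolumeClaim between two workspaces could look like, using a subPath per workspace so the TaskRun only binds one claim. The claim, pipeline, and workspace names below are hypothetical, not taken from this issue:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: prometheus-snapshot-run
spec:
  pipelineRef:
    name: prometheus-snapshot
  workspaces:
    # Both bindings point at the same claim, separated by subPath,
    # so the task only mounts one PersistentVolumeClaim.
    - name: source
      persistentVolumeClaim:
        claimName: snapshot-pvc
      subPath: source
    - name: backup
      persistentVolumeClaim:
        claimName: snapshot-pvc
      subPath: backup
```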
The requirement for the Task itself is to save the snapshot data under …
I think your use case is interesting, but also challenging. In this case you need to disable the affinity assistant, and you also need to make sure that your PVCs are in the same Availability Zone; how this can be done depends on your storage system. Is there a way to set zones for your PVs, or at least to enforce that all Pods that use those PVCs are always scheduled to the same zone? The VolumeBindingMode for your StorageClass may also affect whether this is possible or how to do it, especially the field … For some storage systems and volume binding modes, the volume is first scheduled to a zone and the Pod then follows to that zone - as I have understood it - at least with …
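A rough sketch of the zone idea above, assuming a GCE PD provisioner (the provisioner, zone value, and class name are illustrative and depend on your storage system): a StorageClass can be restricted to a single zone with `allowedTopologies`, while `WaitForFirstConsumer` delays volume binding until a Pod is scheduled:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: single-zone-ssd
provisioner: kubernetes.io/gce-pd        # illustrative; use your cluster's provisioner
volumeBindingMode: WaitForFirstConsumer  # bind the volume where the Pod lands
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values:
          - europe-west1-b               # pin every PV from this class to one zone
```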
@jlpettersson I want to thank you for pointing out the affinity assistant; after disabling it on the initial cluster where I started testing, everything worked. The backup storage that I use is zone independent and does not pose any problem, as it allows the RWX access mode. I'm closing this as it's not really a bug but a lack of understanding on my side.
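For reference, the affinity assistant is toggled through the `feature-flags` ConfigMap in the `tekton-pipelines` namespace; a minimal sketch of the relevant key (other keys in that ConfigMap omitted):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines
data:
  # Lets a TaskRun bind more than one PVC, at the cost of the
  # node co-scheduling guarantees the assistant normally provides.
  disable-affinity-assistant: "true"
```

The same change can be applied with something like `kubectl patch configmap feature-flags -n tekton-pipelines -p '{"data":{"disable-affinity-assistant":"true"}}'`.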
because trying to mount 2 PVCs results in an error: tektoncd/pipeline#3480 Now we have another problem because the subpath for the source is not automatically cleaned up. The pipeline fails with: "remote origin already exists"
because trying to mount 2 PVCs results in an error: tektoncd/pipeline#3480 Since the PVC is re-used, it means we now have to remove the old code otherwise the git clone will fail. We do that by adding a new "cleanup" task.
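The referenced change adds a cleanup task; it is not reproduced here, but a minimal sketch of what such a task could look like (workspace name and image are illustrative):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: cleanup
spec:
  workspaces:
    - name: source
      description: Workspace left over from a previous run on the re-used PVC
  steps:
    - name: clean
      image: busybox
      script: |
        #!/bin/sh
        # Wipe the workspace so a fresh `git clone` does not fail with
        # "remote origin already exists".
        rm -rf "$(workspaces.source.path)"/* "$(workspaces.source.path)"/.[!.]* || true
```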
…kton reports such an issue: tektoncd/pipeline#3480 Signed-off-by: Charles Moulliard <[email protected]>
Expected Behavior
The task executes from a PipelineRun the same way it executes from a standalone TaskRun.
Actual Behavior
task backup-prometheus-snapshot has failed: more than one PersistentVolumeClaim is bound
pod for taskrun prometheus-snapshot-run-8cdnx-backup-prometheus-snapshot-fpcgx not available yet
Tasks Completed: 2 (Failed: 1, Cancelled 0), Skipped: 0
Steps to Reproduce the Problem
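No reproduction steps were included; a hypothetical minimal trigger, assuming the affinity assistant is enabled, is a pipeline whose task binds two different PVC-backed workspaces (all names illustrative):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: two-pvc-run
spec:
  pipelineRef:
    name: backup-pipeline
  workspaces:
    # Two separate claims reaching the same task make the TaskRun bind
    # two PVCs, which the affinity assistant rejects with
    # "more than one PersistentVolumeClaim is bound".
    - name: snapshot-data
      persistentVolumeClaim:
        claimName: prometheus-data-pvc
    - name: backup-storage
      persistentVolumeClaim:
        claimName: backup-pvc
```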
Additional Info
Kubernetes version: v1.18.3
Tekton Pipeline version:
Client version: 0.12.1
Pipeline version: v0.17.2
Triggers version: unknown