Able to start pipelinerun from pipelinerun spec without providing value for workspace and the pipelinerun stays in Running(Started) state #3089
Comments
Hey @VeereshAradhya, we recently added a feature where you can specify a default that is controller wide (https://github.com/tektoncd/pipeline/blob/master/docs/workspaces.md#using-workspaces-in-tasks). (I do wonder @sbwsg @jerop if it would make sense to make it more obvious when the default is being used, e.g. require a "default" is provided for a workspace vs just allowing it.) @VeereshAradhya, you might get more info about what's going on if you look at the pod.
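For reference, that controller-wide default lives in the config-defaults ConfigMap shipped with the controller; a minimal sketch of what enabling it can look like (the emptyDir source here is only an illustration, check the linked doc for the exact key and supported sources):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
  namespace: tekton-pipelines
data:
  # Workspace binding applied to any Task workspace that a TaskRun
  # does not bind explicitly. emptyDir is illustrative; any valid
  # workspace binding (PVC, configMap, secret, ...) can be used.
  default-task-run-workspace-binding: |
    emptyDir: {}
```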
Hm. This is odd. I definitely see a problem in the syntax of the PipelineRun: here the PipelineRun is "binding" a Workspace by name without providing a volume source.
Our code does check the number of volume configurations in the binding, so I am a bit surprised this isn't erroring out here. Agree with @bobcatfish that it would be useful to see the pod that was created here. It would also be useful to know the contents of the default configmap.
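To make the discussion concrete, here is a hedged sketch of the kind of binding in question (names are hypothetical, not taken from the original report): the PipelineRun names a workspace but supplies none of the volume sources the validation is meant to count.

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: example-pipelinerun   # hypothetical name
spec:
  pipelineRef:
    name: example-pipeline    # hypothetical name
  workspaces:
    # The binding names the workspace but carries no volume source
    # (no emptyDir, persistentVolumeClaim, configMap, or secret),
    # which is what validation should reject.
    - name: shared-data
```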
I'd like to discuss this a bit more before forming an opinion on it. It seems, at least initially to me, to be a bit counter to the purpose of the feature to require it be opt-in at the Task/TaskRun level. Sneaky edit to add: the Optional Workspaces TEP calls out that any Workspace marked optional will not receive the default taskrun workspace. That's mentioned in this section here.
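For context, the TEP proposes declaring optionality on the Task's workspace declaration; a rough sketch of the shape it describes (task and workspace names are made up here, and the feature was still a proposal at the time of this thread):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: example-task            # hypothetical name
spec:
  workspaces:
    - name: cache
      # Marked optional: per the TEP, this workspace would NOT receive
      # the controller-wide default binding when left unbound.
      optional: true
  steps:
    - name: report
      image: alpine
      script: |
        echo "cache workspace bound: $(workspaces.cache.bound)"
```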
@sbwsg @bobcatfish I think I have not written the proper heading for the issue. The issue is that I am able to create a PipelineRun without providing a value for its workspace, and instead of failing validation it stays in Running(Started) state.
Got it, thanks for clarifying - that'll teach me to rush through reading the original issue >.< This does indeed appear to be a bug in the validation of the PipelineRun. I'll work on reproducing my side and adding a fix.
It appears that there are two discrete bugs here. The first is that we're not validating the PipelineRun workspaces. The second is that somehow the PipelineRun reconciler is submitting an invalid TaskRun spec but then not putting the PipelineRun into a failed state when the TaskRun fails due to a workspace validation error. A fix for the PipelineRun validation is here: #3096
Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing.
Stale issues rot after 30d of inactivity. /lifecycle rotten Send feedback to tektoncd/plumbing.
@sbwsg @VeereshAradhya should we close this one?
sgtm /close |
@sbwsg: Closing this issue. In response to this:
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Expected Behavior
The PipelineRun should fail with a validation error.
Actual Behavior
The PipelineRun starts and stays in Running(Started) state.
Steps to Reproduce the Problem
kubectl apply -f
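Since the spec files did not survive extraction, here is a hedged reconstruction of the kind of resources that reproduce the symptom (all names and the workspace layout are assumptions, not the original files): a Pipeline that declares a workspace and a PipelineRun that names it without binding a volume source.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: repro-pipeline            # hypothetical name
spec:
  workspaces:
    - name: shared-data
  tasks:
    - name: write-file
      workspaces:
        - name: shared-data
          workspace: shared-data
      taskSpec:
        workspaces:
          - name: shared-data
        steps:
          - name: write
            image: alpine
            script: echo hello > $(workspaces.shared-data.path)/hello.txt
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: repro-pipelinerun         # hypothetical name
spec:
  pipelineRef:
    name: repro-pipeline
  workspaces:
    - name: shared-data           # no volume source given; expected to be rejected by validation
```

After applying resources of this shape, the reported behavior is that the PipelineRun sits in Running(Started) rather than failing validation.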
Additional Info
Kubernetes version:
Output of kubectl version:
Tekton Pipeline version:
Output of tkn version or kubectl get pods -n tekton-pipelines -l app=tekton-pipelines-controller -o=jsonpath='{.items[0].metadata.labels.version}':
Command logs
Spec files