Design Concern - PipelineRuns #2372
Comments
Thanks for raising this concern. I agree it's a bit odd, and it likely stems from the idea that most Run configs would be created from YAML in a git repo, or bound with variables populated from trigger configs. Rather than define another CRD and require users to create and manage a new resource, if they want this behavior without a git repo or trigger, it can be simulated today with ConfigMaps backing env vars. If that's not suitable (it doesn't cover every case), we could possibly support this in a more first-class way, but we'd need to see user feedback and do some design work.
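For illustration, a minimal sketch of the ConfigMap-backed approach mentioned above (the resource names, keys, and Task are made up for this example):

```yaml
# A ConfigMap holding settings shared across runs (hypothetical names).
apiVersion: v1
kind: ConfigMap
metadata:
  name: shared-run-config
data:
  REGISTRY: registry.example.com
---
# Tekton Task steps accept the usual container env fields, so the
# shared values can be pulled in as env vars via envFrom.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build
spec:
  steps:
    - name: build
      image: alpine
      envFrom:
        - configMapRef:
            name: shared-run-config
      script: |
        echo "pushing to $REGISTRY"
```

Changing the ConfigMap then changes the behavior of every subsequent run without touching any Run definitions.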
I think triggers are great, but I think it would be a mistake to be completely reliant on them. In the event that GitHub's webhook service goes down, or an issue crops up with that specific communication channel, there should be a clear path for people to continue to manage their builds. Having to connect to the CI/CD cluster and issue a build by manually creating a Kubernetes resource from a YAML file works, but I think we could certainly do better than that. And once you have both YAML files per repo and Triggers, you end up duplicating configuration. At scale, that could be a pretty big pain to manage and maintain.
I would be interested in hearing more about this. We have 30-40 repositories that will use roughly the same workspace binding specifications. If we need to adjust our workspace strategy, ideally we wouldn't need to update every trigger along with every YAML file across each of our repos. For reference, I started thinking more about this when I discovered that there was no way to bind workspaces/resources when creating PipelineRuns in the Tekton Dashboard.
Also maybe relevant to your interests: tektoncd/triggers#200 (being able to use TriggerTemplates outside of triggers). PipelineResources were also intended to be reusable groupings of configuration, and in a very early iteration of Tekton we had a type called "PipelineParams" which contained reusable bindings. Is it possible to share some examples of the kinds of bindings you are defining and re-using, @davissp14? I'm wondering about the use cases that lead to these values being parameterized yet so regularly the same between Runs.
The majority of our projects and pipelines would follow more or less the following workspace binding specification:
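(The snippet below is a minimal sketch of what such a shared binding might look like; the workspace name and storage size are placeholders, not the author's actual values.)

```yaml
workspaces:
  - name: source
    volumeClaimTemplate:
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
```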
Assuming …
I tend to agree with this. A "RunConfiguration" or something similar may also be a place to put the "garbage collection" configuration that I elaborated on in #2332. But in the case of … I think this is mostly useful in a …
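Purely as a sketch of the idea being floated here (no such CRD exists in Tekton; the kind, fields, and GC setting below are all invented for illustration):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: RunConfiguration        # hypothetical resource, not a real Tekton CRD
metadata:
  name: default-run-config
spec:
  workspaces:                 # reusable bindings, referenced by many runs
    - name: source
      volumeClaimTemplate:
        spec:
          accessModes: [ReadWriteOnce]
          resources:
            requests:
              storage: 1Gi
  ttlAfterFinished: 72h       # hypothetical garbage-collection knob (#2332)
```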
Does this mean that the dashboard is creating PipelineRuns/TaskRuns with a request directly to the ApiServer, instead of making the request via a Trigger that has a …?
Yeah, it looks that way.
Also, correct me if I'm off base here, but Triggers seem analogous to defining service API endpoints. That makes a lot of sense for handling incoming events from external sources, but requiring all builds to be initiated this way doesn't seem quite right. If the runtime configuration were defined separately from these resources, it would provide greater flexibility both in how configuration is managed and in how builds are initiated. E.g. Triggers, Dashboards, YAML files, etc. would all initiate a build the same way: by simply creating a PipelineRun resource that targets a pipeline and a runtime configuration.
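To illustrate, every initiator would then converge on creating something like the following (pipeline, param, and PVC names are placeholders); today the workspace bindings still have to be restated inline on each run:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: build-run-
spec:
  pipelineRef:
    name: build-pipeline
  params:
    - name: revision
      value: main
  workspaces:                 # would instead point at a shared runtime config
    - name: source
      persistentVolumeClaim:
        claimName: shared-source-pvc
```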
I must admit that I know very little about the dashboard yet; I will try it out this week. But I wonder how authentication, authorization and respect of namespaces is handled here. In a web application I would expect that to be handled by OIDC. But I am drifting a bit off-topic here.
Yes, I agree with that. I see Triggers as the "entrypoint" for the PipelineRuns/TaskRuns that the namespace owner has defined. I also expect authentication and authorization to happen here, e.g. validating the request. A request from another namespace may be authorized with a ServiceAccount/OIDC token, as may OIDC tokens from webapps. Authorization could be done with Rego, see tektoncd/triggers#484. This matters especially since Runs allocate resources (CPU, memory and storage) that belong to (and are billed to?) the namespace owner.
This one is harder for me to say. If I have a CI pipeline, I imagine that I can send a request/cloud event to the local trigger, or to a trigger in another cluster or namespace, to initiate a CD pipeline. But exploratory work by members of the namespace could start things imperatively using the CLI, YAML or other means.
I feel that is the case now? Each new PipelineRun/TaskRun can e.g. specify the volume source it wants for the workspace: emptyDir, an existing PVC, or a PVC created just for the run. But I have not been involved with Tekton that long; there are members here who know the reasons behind this design better than I do :)
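For reference, the three volume sources a run can bind today look like this (resource names are placeholders):

```yaml
workspaces:
  - name: scratch
    emptyDir: {}              # throwaway storage, discarded after the run
  - name: source
    persistentVolumeClaim:    # an existing PVC
      claimName: my-source-pvc
  - name: cache
    volumeClaimTemplate:      # a PVC created just for this run
      spec:
        accessModes: [ReadWriteOnce]
        resources:
          requests:
            storage: 1Gi
```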
Just like the CLI, a user can have Pipelines installed without Triggers, so the dashboard (and the CLI) need to be able to create PipelineRuns directly.
Yeah, I would agree with that. Making Triggers a hard dependency for users who are not hooking into external service events doesn't make a lot of sense to me.
Well, yes and no. As far as I can tell there are 3 different ways to trigger a build:

- Triggers, which currently embed a PipelineRun resource
- Manually, by creating a PipelineRun resource from a YAML file
- The API, e.g. a Dashboard creating the resource through the ApiServer

We currently have 3 different ways to create PipelineRun resources, which means we have 3 different places where we need to manage our bindings/configuration specifications. Ideally, we would be able to define our runtime configuration once and not have to worry about keeping bindings/configuration consistent across the various execution paths. Does that make sense?
I agree with most of that, though I would say Manual and API are the same: both go via the ApiServer, just imperatively or declaratively. My view is that a Run is initiated either via the ApiServer, which needs cluster authentication + RBAC and can be done imperatively or declaratively, or via a Trigger. Also, the dashboard is not the only webapp: OpenShift Pipelines has a web GUI too, and larger organizations may build their own custom webapps that integrate with Tekton.
Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing.
Stale issues rot after 30d of inactivity. /lifecycle rotten Send feedback to tektoncd/plumbing.
Rotten issues close after 30d of inactivity. /close Send feedback to tektoncd/plumbing.
@tekton-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Expected Behavior
PipelineRun Resources should not be responsible for defining configuration/bindings.
By definition, PipelineRuns/TaskRuns are not reusable resources. That being said, it seems strange that we define our bindings there. I would expect these definitions to live in a "RunConfiguration" resource or something similar that would provide reusable configuration and binding definitions across runs.
When we think about moving this functionality into a UI, it seems like an additional level of abstraction is needed. Otherwise, users would be expected to correctly redefine their bindings on every manually issued run.
Actual Behavior
Right now, configuration/bindings are required to happen on the PipelineRun/TaskRun. This requirement makes it pretty difficult to expose manually issued builds in a Dashboard in a sane way.