
Support controller leader election in tekton-pipeline #2735

Closed
xiujuan95 opened this issue Jun 3, 2020 · 57 comments
Labels: kind/feature (Categorizes issue or PR as related to a new feature.), lifecycle/frozen (Indicates that an issue or PR should not be auto-closed due to staleness.)

@xiujuan95
Contributor

Expected Behavior

I want high availability (HA) for tekton-pipeline, and I'm not sure whether tekton-pipeline supports it yet.

By HA I mean something like leader election, not Kubernetes' native self-healing.

If I depend only on the self-healing properties of Kubernetes, my services will see some downtime, which is not acceptable.

So if the answer to the above is yes, I'd like to know which HA mode is supported; if no, do you have any plan for it? Thanks in advance!

Actual Behavior

Steps to Reproduce the Problem

Additional Info

  • Kubernetes version:

    Output of kubectl version:

    (paste your output here)
    
  • Tekton Pipeline version:

    Output of tkn version or kubectl get pods -n tekton-pipelines -l app=tekton-pipelines-controller -o=jsonpath='{.items[0].metadata.labels.version}'

@xiujuan95 xiujuan95 changed the title Tekton-pipeline supports High availability Does tekton-pipeline support high availability now? Jun 3, 2020
@xiujuan95
Contributor Author

Just now, I did an experiment. I set up 3 Tekton controllers:

xiangxiulis-MacBook-Pro:samples xiangxiuli$ k get pod -n tekton-pipelines
NAME                                          READY   STATUS    RESTARTS   AGE
tekton-pipelines-controller-c5db6cb65-5mkgz   1/1     Running   0          31m
tekton-pipelines-controller-c5db6cb65-mc2w7   1/1     Running   0          31m
tekton-pipelines-controller-c5db6cb65-pzfzx   1/1     Running   0          20h

And applied 1 Task and 1 TaskRun using your sample:

xiangxiulis-MacBook-Pro:buildpack-sample xiangxiuli$ k get task
NAME                 AGE
buildpacks-v3-test   44m
xiangxiulis-MacBook-Pro:buildpack-sample xiangxiuli$ k get tr
NAME                                   SUCCEEDED   REASON      STARTTIME   COMPLETIONTIME
buildpacks-run                         False       Failed      40m         39m

I found that 3 TaskRun pods come up, but only one does the work. The other two are always Running:

xiangxiulis-MacBook-Pro:buildpack-sample xiangxiuli$ k get pod
NAME                                             READY   STATUS             RESTARTS   AGE
buildpacks-run-pod-q685f                         0/9     Completed          0          40m
buildpacks-run-pod-qp4hv                         9/9     Running            0          40m
buildpacks-run-pod-wq64j                         9/9     Running            0          40m

So I think the current tekton-pipeline doesn't support HA. If my understanding is wrong, please correct me, thanks a lot!

@eddycharly
Member

I tried the same setup a while ago and observed the same behaviour.

I don't think it's supported yet. There is a leader election config map, though I'm not sure whether this is possible at all.

@xiujuan95
Contributor Author

@eddycharly Is there a leader election configMap? Which one?

xiangxiulis-MacBook-Pro:buildpack-sample xiangxiuli$ k get cm -n tekton-pipelines
NAME                            DATA   AGE
config-artifact-bucket          0      75d
config-artifact-pvc             0      75d
config-defaults                 1      75d
config-logging                  3      75d
config-logging-triggers         4      75d
config-observability            1      75d
config-observability-triggers   1      75d
feature-flags                   2      75d

Could you tell me which one is for leader election, please? Thanks!

@eddycharly
Member

Also, the controller being down doesn't mean downtime.
As long as your etcd is up you can still register new jobs; they just won't be processed while the controller is down, but requests won't be rejected.

@xiujuan95
Contributor Author

xiujuan95 commented Jun 3, 2020

@eddycharly Thanks a lot! If I apply the leader election configmap mentioned above, will leader election then take effect?

@eddycharly
Member

Hopefully... worth a try.
Let me know, I’m interested to know the answer ;)

@xiujuan95
Contributor Author

@eddycharly I created the above election configMap and added it to the tekton controller deployment, but it still doesn't work as expected. Namely, it still starts three TaskRun pods.

@eddycharly
Member

@xiujuan95 do your RBAC rules allow access to the config map?

@xiujuan95
Contributor Author

@eddycharly Maybe not. I am trying to modify my RBAC, and I also wanted to use the image github.com/tektoncd/pipeline/cmd/controller:latest, but pulling it failed:

 Normal   Pulling    51s (x4 over 2m18s)  kubelet, 10.93.177.70  Pulling image "github.com/tektoncd/pipeline/cmd/controller"
  Warning  Failed     50s (x4 over 2m17s)  kubelet, 10.93.177.70  Failed to pull image "github.com/tektoncd/pipeline/cmd/controller": rpc error: code = NotFound desc = failed to pull and unpack image "github.com/tektoncd/pipeline/cmd/controller:latest": failed to resolve reference "github.com/tektoncd/pipeline/cmd/controller:latest": github.com/tektoncd/pipeline/cmd/controller:latest: not found
  Warning  Failed     50s (x4 over 2m17s)  kubelet, 10.93.177.70  Error: ErrImagePull
  Warning  Failed     36s (x6 over 2m17s)  kubelet, 10.93.177.70  Error: ImagePullBackOff
  Normal   BackOff    21s (x7 over 2m17s)  kubelet, 10.93.177.70  Back-off pulling image "github.com/tektoncd/pipeline/cmd/controller"

@eddycharly
Member

github.com/tektoncd/pipeline/cmd/controller is not a docker image.
It's the Go import path in the GitHub repository; the image reference is substituted by ko when ko apply is invoked.
I don't think you need to change the image; leader election should come with knative. I suspect that if the leader election config map is set up correctly and accessible, it should work.
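
For reference, the config map knative/pkg looks for is named config-leader-election, in the controller's namespace (it appears in the RBAC rules later in this thread). A minimal sketch is below; the exact data keys have changed across knative/pkg versions and the values here are purely illustrative, so check the config-leader-election.yaml shipped with your release:

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-leader-election
  namespace: tekton-pipelines
data:
  # Illustrative values; key names vary by knative/pkg version.
  lease-duration: "15s"
  renew-deadline: "10s"
  retry-period: "2s"
  # Number of buckets the reconciler key space is sharded into.
  buckets: "1"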

@xiujuan95
Contributor Author

@eddycharly I think my RBAC rules now allow access to the election config map.

xiangxiulis-mbp:buildpack-sample xiangxiuli$ k get role -n tekton-pipelines -o yaml
apiVersion: v1
items:
- apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"default","app.kubernetes.io/part-of":"tekton-pipelines"},"name":"tekton-pipelines-controller","namespace":"tekton-pipelines"},"rules":[{"apiGroups":[""],"resources":["configmaps"],"verbs":["list","watch"]},{"apiGroups":[""],"resourceNames":["config-logging","config-observability","config-artifact-bucket","config-artifact-pvc","feature-flags","config-leader-election"],"resources":["configmaps"],"verbs":["get"]}]}
    creationTimestamp: "2020-06-03T08:39:14Z"
    labels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: default
      app.kubernetes.io/part-of: tekton-pipelines
    name: tekton-pipelines-controller
    namespace: tekton-pipelines
    resourceVersion: "4434681"
    selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/tekton-pipelines/roles/tekton-pipelines-controller
    uid: 490b322e-4f5e-43e4-8f03-e23bc1cd339f
  rules:
  - apiGroups:
    - ""
    resources:
    - configmaps
    verbs:
    - list
    - watch
  - apiGroups:
    - ""
    resourceNames:
    - config-logging
    - config-observability
    - config-artifact-bucket
    - config-artifact-pvc
    - feature-flags
    - config-leader-election
    resources:
    - configmaps
    verbs:
    - get
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

But it still doesn't work.
Could you try it on your side, please?

@afrittoli
Member

Leader election is something that was recently added by the knative team in knative/pkg.
We inherited the configmap, but we did not really do any integration work on the Tekton side.

Background: knative/pkg#1181.
@mattmoor may have more context to share.

@xiujuan95 @eddycharly looking forward to the results of your experiments.

@afrittoli afrittoli added the kind/feature Categorizes issue or PR as related to a new feature. label Jun 3, 2020
@afrittoli afrittoli changed the title Does tekton-pipeline support high availability now? Support controller leader election in tekton-pipeline Jun 3, 2020
@mattmoor
Member

mattmoor commented Jun 4, 2020

I have some changes brewing that will change how this works a bit. Those changes will enable reconcilers to be sharded across replicas and should help us scale reconcilers horizontally. However, I want to get y'all onto // +genreconciler first so it's ~free.

@tekton-robot
Collaborator

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

Send feedback to tektoncd/plumbing.

@tekton-robot
Collaborator

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.

/lifecycle rotten

Send feedback to tektoncd/plumbing.

@tekton-robot
Collaborator

@tekton-robot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

Send feedback to tektoncd/plumbing.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@tekton-robot tekton-robot added the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Aug 14, 2020
@xiujuan95
Contributor Author

Any updates on HA?

@vdemeester
Member

/remove-lifecycle rotten
/remove-lifecycle stale
/reopen
/lifecycle frozen

@tekton-robot
Collaborator

@vdemeester: Reopened this issue.

In response to this:

/remove-lifecycle rotten
/remove-lifecycle stale
/reopen
/lifecycle frozen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@tekton-robot tekton-robot reopened this Aug 17, 2020
@tekton-robot tekton-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Aug 17, 2020
@xiujuan95
Contributor Author

@mattmoor

Just now, I did some experiments: I enabled leader election for the tekton-pipelines-controller. Yes, it works.

kubectl logs tekton-pipelines-controller-6564ff7589-cjdqw -n tekton-pipelines
2020/09/14 05:03:16 Registering 4 clients
2020/09/14 05:03:16 Registering 4 informer factories
2020/09/14 05:03:16 Registering 8 informers
2020/09/14 05:03:16 Registering 2 controllers
{"level":"info","caller":"logging/config.go:111","msg":"Successfully created the logger."}
{"level":"info","caller":"logging/config.go:112","msg":"Logging level set to info"}
{"level":"info","logger":"tekton","caller":"profiling/server.go:59","msg":"Profiling enabled: false","commit":"20da4cf"}
I0914 05:03:16.819066       1 leaderelection.go:241] attempting to acquire leader lease  tekton-pipelines/tekton...
{"level":"info","logger":"tekton","caller":"sharedmain/main.go:447","msg":"tekton will run in leader-elected mode with id tekton-pipelines-controller-6564ff7589-cjdqw_d768ec9f-8ee3-400b-b3fa-2c023caa6e0f","commit":"20da4cf"}
I0914 05:03:40.146129       1 leaderelection.go:251] successfully acquired lease tekton-pipelines/tekton
{"level":"info","logger":"tekton.event-broadcaster","caller":"record/event.go:274","msg":"Event(v1.ObjectReference{Kind:\"Lease\", Namespace:\"tekton-pipelines\", Name:\"tekton\", UID:\"d7c8536f-802f-4633-bc71-266711a43958\", APIVersion:\"coordination.k8s.io/v1\", ResourceVersion:\"90631532\", FieldPath:\"\"}): type: 'Normal' reason: 'LeaderElection' tekton-pipelines-controller-6564ff7589-cjdqw_d768ec9f-8ee3-400b-b3fa-2c023caa6e0f became leader","commit":"20da4cf"}

But when I exec into each pod and curl http://localhost:9090/metrics, only the active pod returns values. The other, passive pods fail with the error below:

sh-4.4# curl http://localhost:9090/metrics
curl: (7) Failed to connect to localhost port 9090: Connection refused

Is it only the active one that opens the Prometheus metrics port? I went through the code but couldn't find where to confirm this. Could you please help me with this? Thanks in advance!

@xiujuan95
Contributor Author

xiujuan95 commented Sep 14, 2020

With the help of my colleague @qu1queee (thanks!) I confirmed it: Prometheus metrics only work on the active pod.

This then causes another problem for me. If I configure the tekton controller deployment to do liveness and readiness checks like below:

livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /metrics
    port: 9090
    scheme: HTTP
  initialDelaySeconds: 5
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /metrics
    port: 9090
    scheme: HTTP
  initialDelaySeconds: 5
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1

In this situation, only the active pod stays Running; all passive pods go into CrashLoopBackOff because:

 Warning  Unhealthy  117s (x6 over 2m47s)  kubelet, 10.242.64.5  Readiness probe failed: Get http://172.30.83.152:9090/metrics: dial tcp 172.30.83.152:9090: connect: connection refused
  Warning  Unhealthy  117s (x6 over 2m47s)  kubelet, 10.242.64.5  Liveness probe failed: Get http://172.30.83.152:9090/metrics: dial tcp 172.30.83.152:9090: connect: connection refused

That's bad for me. Can you provide a better way to do liveness and readiness probes? Maybe it's related to issue #3111.
Thanks in advance!
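
For reference: later Tekton releases expose dedicated health endpoints that every replica serves regardless of leadership, so probes no longer have to go through the metrics port. A sketch assuming such a release — the /health and /readiness paths and the probes port follow the shape of newer config/controller.yaml manifests and may not exist on v0.17.x:

ports:
- name: probes
  containerPort: 8080
livenessProbe:
  httpGet:
    path: /health
    port: probes
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /readiness
    port: probes
  initialDelaySeconds: 5
  periodSeconds: 10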

@gejunqiang

@xiujuan95 Oh, it's so weird! I don't know what's wrong, but it works well for me.

@afrittoli
Member

@gejunqiang I first set them like below:

spec:
  containers:
  - args:
    - (...omit other args)
    - -disable-ha
    - true
    name: tekton-pipelines-controller

I didn't add the double quotation marks.

You need the double quotation marks.

Now, although I can configure it like your method, my tekton-pipelines-controller pod can't come up and always crashes:

xiangxiulis-MacBook-Pro:spark xiangxiuli$ k logs tekton-pipelines-controller-74d8c98c44-hnbqc -n tekton-pipelines
2020/10/13 07:55:44 maxprocs: Leaving GOMAXPROCS=4: CPU quota undefined
2020/10/13 07:55:44 found unset image flags: [build-gcs-fetcher creds entrypoint git gsutil imagedigest-exporter kubeconfig-writer nop pr shell]
xiangxiulis-MacBook-Pro:spark xiangxiuli$ k get pod -n tekton-pipelines
NAME                                           READY   STATUS             RESTARTS   AGE
tekton-pipelines-controller-74d8c98c44-hnbqc   0/1     CrashLoopBackOff   4          2m28s
tekton-pipelines-webhook-8b4798587-wh89z       1/1     Running            0          22h

Do you have this issue?

This happens because you have disable-ha set to true with no quotation marks at the top, so the rest of the args are not passed to the controller, which then fails on the missing flags.

@xiujuan95
Contributor Author

@afrittoli I do have the double quotation marks, but the pod is still in crash status!

@xiujuan95
Contributor Author

I found out why the pod can't come up: 'disable-ha' should be added at the end rather than at the beginning:

spec:
  containers:
  - args:
    - (...omit other args)
    - -disable-ha
    - "true"
    name: tekton-pipelines-controller

If I change it to the below, it fails:

spec:
  containers:
  - args:
    - -disable-ha
    - "true"
    - (...omit other args)
    name: tekton-pipelines-controller

This behavior seems weird to me! Why?

@afrittoli
Member

Uhm, this sounds like a bug to me

@xiujuan95
Contributor Author

@afrittoli I found that the following setting for the disable-ha flag is incorrect:

spec:
  containers:
  - args:
    - (...omit other args)
    - -disable-ha
    - "false"
    name: tekton-pipelines-controller

We can't set it like the above, because in that form the value of disable-ha is passed as a separate string argument. The check at https://github.com/tektoncd/pipeline/blob/v0.17.1/cmd/controller/main.go#L87 is then always true, whether we pass "true" or "false": as soon as the flag is present at all, it evaluates to true.

The correct setting is:

spec:
  containers:
  - args:
    - (...omit other args)
    - -disable-ha=false
    name: tekton-pipelines-controller

I am not sure whether this is a bug on your side. If it is, please help fix it. Thanks in advance!

BTW, I think there should be a doc explaining how to set these flags: https://github.com/tektoncd/pipeline/blob/v0.17.1/cmd/controller/main.go#L40-L56. Otherwise users may copy the style of existing parameters, such as version or kubeconfig-writer-image, to set the flags they need, which will cause errors like the ones I mentioned. FYI, @gejunqiang ^^
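
This behaviour follows from how Go's standard flag package parses boolean flags: a boolean flag does not consume the argument that follows it, and parsing stops at the first non-flag argument. A minimal standalone sketch (not Tekton code) that reproduces it:

package main

import (
	"flag"
	"fmt"
)

func main() {
	disableHA := flag.Bool("disable-ha", false, "disable leader election")
	version := flag.String("version", "", "some other flag")
	flag.Parse()

	// With args `-disable-ha true -version v1`: "-disable-ha" alone sets
	// disableHA to true, then "true" is a non-flag argument, so parsing
	// stops and -version is never seen (it ends up in flag.Args()).
	// With args `-disable-ha=false -version v1`: both flags parse as expected.
	fmt.Printf("disable-ha=%v version=%q leftover=%v\n", *disableHA, *version, flag.Args())
}

This is why -disable-ha had to come last in the args list, and why -disable-ha=false is the only reliable way to pass a false value.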

@afrittoli afrittoli added this to the Pipelines v0.18 milestone Oct 19, 2020
@afrittoli
Member

I think the only bits left on this are documentation and perhaps testing.
I targeted this at v0.18.0 because the feature was in fact already in v0.17.1, but there were no docs about it.
/cc @qu1queee

@xiujuan95
Contributor Author

@afrittoli I am now using the 0.17.1 Tekton release. Yes, it supports leader election, but I found another problem.
I deployed three tekton-pipelines-controller pods:

xiangxiulis-mbp:~ xiangxiuli$ k get pod -n tekton-pipelines
NAME                                           READY   STATUS    RESTARTS   AGE
tekton-pipelines-controller-58c8868475-bffgb   1/1     Running   0          133m
tekton-pipelines-controller-58c8868475-x8bh9   1/1     Running   0          133m
tekton-pipelines-controller-58c8868475-t2mcv   1/1     Running   0          133m

I checked the logs of each pod and found that tekton-pipelines-controller-58c8868475-t2mcv is the leader.

Next, to verify that leader election works well, I deleted the leader tekton-pipelines-controller-58c8868475-t2mcv:

kubectl delete pod tekton-pipelines-controller-58c8868475-t2mcv -n tekton-pipelines

Then I checked the logs of the remaining two pods and found that both of them have logs like the below:

 kubectl logs tekton-pipelines-controller-58c8868475-bffgb -n tekton-pipelines
... ...
{"level":"info","logger":"tekton.github.com-tektoncd-pipeline-pkg-reconciler-pipelinerun.Reconciler","caller":"controller/controller.go:520","msg":"Reconcile succeeded. Time taken: 23.796µs","commit":"03b239e","knative.dev/traceid":"07062ee9-ceee-442f-8f14-ea7f98e825d1","knative.dev/key":"1cfi03u0j5h/buildpacks-v3-7vc5n"}
I1102 09:21:38.110435       1 leaderelection.go:252] successfully acquired lease tekton-pipelines/tekton.github.com-tektoncd-pipeline-pkg-reconciler-pipelinerun.reconciler.00-of-01
{"level":"info","logger":"tekton","caller":"leaderelection/context.go:143","msg":"\"tekton-pipelines-controller-58c8868475-bffgb_8be40b15-2a57-4687-adfd-4460b5fde513\" has started leading \"tekton.github.com-tektoncd-pipeline-pkg-reconciler-pipelinerun.reconciler.00-of-01\"","commit":"03b239e"}


kubectl logs tekton-pipelines-controller-58c8868475-x8bh9 -n tekton-pipelines
... ....
{"level":"info","logger":"tekton.github.com-tektoncd-pipeline-pkg-reconciler-taskrun.Reconciler","caller":"controller/controller.go:520","msg":"Reconcile succeeded. Time taken: 18.002µs","commit":"03b239e","knative.dev/traceid":"8b6dfd33-65b2-4a9f-b034-9802defaaf6a","knative.dev/key":"1cfi03u0j5h/buildpacks-v3-7vc5n-rkq5q"}
I1102 09:21:36.771499       1 leaderelection.go:252] successfully acquired lease tekton-pipelines/tekton.github.com-tektoncd-pipeline-pkg-reconciler-taskrun.reconciler.00-of-01
{"level":"info","logger":"tekton","caller":"leaderelection/context.go:143","msg":"\"tekton-pipelines-controller-58c8868475-x8bh9_45729873-a06a-41eb-ae3e-ef4b5095789b\" has started leading \"tekton.github.com-tektoncd-pipeline-pkg-reconciler-taskrun.reconciler.00-of-01\"","commit":"03b239e"}

This is strange to me. Why are both of them starting to lead? Which one is the leader? From the logs, I can't tell.

So could you please help me distinguish which one is the leader, or tell me which log line is the sign of leadership? Thanks in advance!

@afrittoli
Member

The HA setup is an active/active one:

  • the reconcile queue is sharded into multiple buckets
  • each reconciler is leader for one bucket only
  • if a reconciler fails, the contents of its bucket are divided across the remaining reconcilers (see the example below for how to inspect which replica leads which bucket)
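
One way to see who leads which bucket is to look at the coordination.k8s.io Leases that knative/pkg creates, one per reconciler bucket. A sketch (the lease names depend on the release; spec.holderIdentity is a standard Lease field):

kubectl get leases -n tekton-pipelines \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.holderIdentity}{"\n"}{end}'

Each line pairs a bucket lease, e.g. tekton.github.com-tektoncd-pipeline-pkg-reconciler-taskrun.reconciler.00-of-01 from the logs above, with the pod identity that currently holds it.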

@xiujuan95
Contributor Author

@afrittoli I see. So at the same time, it's possible that two leaders, each responsible for a different bucket, are handling two different reconcilers, right?

@afrittoli
Member

Yep, exactly.
@mattmoor in case you want to add something

@afrittoli
Member

@xiujuan95 would you be happy to close this one now?

@qu1queee
Contributor

qu1queee commented Nov 2, 2020

@xiujuan95 I will give you more insight on the behaviour internally at IBM; there is a document from Knative around this. Sorry for not sharing it earlier.

@xiujuan95
Contributor Author

@afrittoli Yes, I think we can close this issue now. Thanks!

@xiujuan95
Contributor Author

@afrittoli Sorry, just for confirming.

I went through the tekton-pipeline webhook's code and found that the webhook executes the same logic as the controller:
the webhook calls WebhookMainWithConfig (https://github.com/tektoncd/pipeline/blob/v0.17.1/cmd/webhook/main.go#L216), and WebhookMainWithConfig is assigned from MainWithConfig (https://github.com/tektoncd/pipeline/blob/v0.17.1/vendor/knative.dev/pkg/injection/sharedmain/main.go#L140).

So the controller and the webhook go through the same MainWithConfig logic: https://github.com/tektoncd/pipeline/blob/v0.17.1/vendor/knative.dev/pkg/injection/sharedmain/main.go#L177

Does this mean the webhook also supports leader election, and that its leader election mode is the same as the controller's?

And if so, can I also set the disable-ha flag in the webhook deployment?

Please help me with this, thanks in advance!

@pritidesai
Member

@afrittoli please help verify this, thanks 🙏

tekton-robot pushed a commit that referenced this issue Nov 5, 2020
This adds documentation around HA support for the tekton pipeline controller.
HA is enabled by default, therefore adding more information on the behaviour
and how could devs/maintainers use it.
@xiujuan95
Contributor Author

Any updates on this comment: #2735 (comment)?

@afrittoli afrittoli self-assigned this Nov 30, 2020
@afrittoli
Member

The Tekton webhook controller includes five different controllers:

  • Certificate controller: it uses leader election to select only one controller that is responsible for provisioning and maintaining the certificate secret
  • DefaultAdmission controller: the Admit function does not rely on leader election. Leader election is used by the Reconcile function, which is responsible for configuration of the controller
  • ValidationAdmission controller: the Admit function does not rely on leader election. Leader election is used by the Reconcile function, which is responsible for configuration of the controller
  • ConfigValidation controller: the Admit function does not rely on leader election. Leader election is used by the Reconcile function, which is responsible for configuration of the controller
  • Conversion controller: the Convert function does not rely on leader election. Leader election is used by Reconcile and reconcileCRD

According to the knative docs, the disable-ha flag is only meant to be used if a specific issue is found in the HA behaviour; otherwise it's recommended to keep it on.
In the case of the Tekton webhook specifically, some parts of it rely on leader election, so HA should not be disabled unless only a single copy of the webhook is running.

@afrittoli
Member

@xiujuan95 let me know if this answer is satisfactory. I will close the issue now but feel free to re-open (or open a new one) should anything be missing.

@xiujuan95
Contributor Author

@afrittoli Thanks for your kind reply! The above answer makes sense to me.
