Replace operator-framework/operator-sdk/pkg/ready #612
Conversation
As part of the decoupling from the `operator-sdk` code base, there is the need to introduce a replacement for their convenience `ready` package.

- Add a new `ready` package within `build` that has a comparable usage style.
- Add code to make the `ready` filename configurable.
- Change the default `ready` filename to remove the `operator` reference.
Thanks @HeavyWombat , nice.
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: qu1queee

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
The branch is missing Improve test stability #614, which is causing some tests to fail. But there is a green integration and e2e test run in at least one K8s version; therefore, manually merging.
I stumbled across this new `ready` package. It seems to only write the file on startup and then delete it on shutdown. Even if it signaled actual readiness, I wonder if we need a readinessProbe at all. Readiness probes are intended to tell a Service that traffic should be routed to the Pod, but in our case, AFAIK, there are no Services that read the status of the readiness probe.
Good question. At the time, I just decided to take the operator-SDK-based approach as it was. I agree that it does not do very much for us. Correct me if I am wrong, but we could do with just a liveness probe configured against the same file-based check.
Yes, Jason is right, readiness checks only make sense for Services, so that Kube can rule out Pods that currently cannot accept traffic. For the liveness check (which uses the same logic), the file-based approach is imo reasonable. For the readiness check, we discussed making it depend on the Pod having leadership. This can use the same file-based approach, just that the file would only get written once the instance became leader. Then one can have a Service for the metrics endpoint and requests would only go to this leader.
I don't know if I agree, unless there's some code to delete the file when the container isn't live anymore. IMO a liveness check that hits an HTTP endpoint would more easily and accurately determine whether the container is running and healthy than signaling that with the presence/absence of a file. Tekton does this with a simple HTTP handler that always returns HTTP 200.
@imjasonh yep, agree, you are bringing up questions I also had. So far, the issues we have seen where the controller was no longer in good shape always occurred when it either had a panic, or when it had connectivity issues with the kube-api-server and could not renew the lease. In case of a panic, the process died and Kube would restart the Pod. In case of the lease-renewal problem, I assume that it also terminates, but I am not sure.
Changes

This addresses part of #584.

See operator-framework/operator-sdk#3476 for details regarding the recommendation to create your own `ready` helper package if required.

Submitter Checklist

See the contributor guide for details on coding conventions, GitHub and prow interactions, and the code review process.
Release Notes