Upgrade k8s-openapi to 0.16 #1008

Conversation
Signed-off-by: clux <[email protected]>
Codecov Report
Additional details and impacted files:

@@            Coverage Diff             @@
##           master    #1008      +/-   ##
==========================================
+ Coverage   71.89%   71.93%   +0.03%
==========================================
  Files          65       65
  Lines        4637     4647      +10
==========================================
+ Hits         3334     3343       +9
- Misses       1303     1304       +1
Integration test failure; looks veeery similar to the original bug. EDIT: it happens exactly on the `foos.delete_collection(&DeleteParams::default(), &Default::default()).await?` line, so we can reproduce Arnav's issue locally on k3d with the right k3d flag.
Ok, I think the only thing we NEED to do here is to force the kind and apiVersion into the output:

apiVersion: meta.k8s.io/v1
kind: DeleteOptions

Never done this before, but how hard can it be.
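For illustration, one way to force those fields in is to patch them into the already-serialized body. This is a hypothetical sketch, not the actual implementation (which presumably works at the serde level); the function name and string-level approach are assumptions.

```rust
// Hypothetical sketch: inject apiVersion/kind into an already-serialized
// DeleteOptions JSON body. String-level handling keeps it dependency-free.
fn with_type_meta(body: &str) -> String {
    let inner = body.trim_start().trim_start_matches('{').trim_start();
    if inner == "}" {
        // Empty object: no existing fields to keep after the type meta.
        r#"{"apiVersion":"meta.k8s.io/v1","kind":"DeleteOptions"}"#.to_string()
    } else {
        format!(
            r#"{{"apiVersion":"meta.k8s.io/v1","kind":"DeleteOptions",{}"#,
            inner
        )
    }
}

fn main() {
    let body = with_type_meta(r#"{"gracePeriodSeconds":0}"#);
    assert!(body.starts_with(r#"{"apiVersion":"meta.k8s.io/v1""#));
    println!("{body}");
}
```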
Well, great. Serializing with kind/api_version fails in Kubernetes v1.20 (our MK8SV). Guess it could be wrapped in an opt-in of some sort.
Well, this is really quite bad. It's not even trivial for users to deal with, because it effectively forces a zero version skew (against our versioning policies). We have two error cases:

- Serializing kind/apiVersion breaks on Kubernetes < 1.25
- Not serializing kind/apiVersion breaks on Kubernetes >= 1.25

So the user has to pick which side breaks. I don't see any good solutions for this at the moment. Comments welcome.
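The dilemma can be modelled as a toy predicate. This is only an illustration of the two failure cases described above; `server_minor` is an assumed stand-in for the apiserver's minor version, and the function name is made up.

```rust
// Toy model of the zero-skew dilemma: whichever serialization default is
// chosen, it breaks on one side of the 1.25 boundary.
fn delete_collection_works(serialize_type_meta: bool, server_minor: u32) -> bool {
    if serialize_type_meta {
        server_minor >= 25 // pre-1.25 apiservers reject the typed body
    } else {
        server_minor < 25 // 1.25 requires kind/apiVersion
    }
}

fn main() {
    // No single default satisfies the whole supported skew window:
    assert!(!delete_collection_works(true, 20));
    assert!(delete_collection_works(false, 24));
    assert!(!delete_collection_works(false, 25));
    println!("ok");
}
```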
Have reported the findings upstream in kubernetes/kubernetes#111985. In the meantime, the options as I see them are:

I don't like 3, because historically it is always very confusing for users to have to pick an older k8s-openapi. So leaning towards 1, with a caveated release, and hope pressure can help upstream.
Ok, have added an unstable evar to track this, because there's no good way to support v1_25 without the user making a choice here. Why? We cannot detect this on the client side, so it has to be user opt-in: either via an unstable evar or via an unstable feature. Planning on removing the hacky evar once this is fixed upstream.
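A minimal sketch of such an evar gate, assuming an illustrative variable name (`KUBE_UNSTABLE_TYPED_DELETE` is not necessarily what the branch uses). The lookup result is passed in so the gate itself stays a pure function:

```rust
use std::env;

// Hypothetical opt-in gate for serializing kind/apiVersion; the env var
// name at the call site is an assumption for illustration only.
fn typed_delete_enabled(lookup: Result<String, env::VarError>) -> bool {
    matches!(lookup.as_deref(), Ok("1"))
}

fn main() {
    // A real call site would pass env::var("KUBE_UNSTABLE_TYPED_DELETE").
    assert!(typed_delete_enabled(Ok("1".to_string())));
    assert!(!typed_delete_enabled(Err(env::VarError::NotPresent)));
    println!("ok");
}
```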
Basically: special-case the empty-params case with an internal `is_default` (cannot derive `Eq`/`PartialEq` because the upstream types do not have it). This does mean `delete_collection` with default delete params keeps working on 1.25 (but then immediately breaks if they change from the default without the evar).
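The hand-rolled default check could look something like this. The field set here is an illustrative subset, not the real `DeleteParams` definition:

```rust
// Hypothetical mirror of the DeleteParams shape; since the upstream types
// lack PartialEq, the "is this still the default?" check is written by hand.
#[derive(Default)]
struct DeleteParams {
    dry_run: bool,
    grace_period_seconds: Option<u32>,
    propagation_policy: Option<String>,
}

impl DeleteParams {
    /// True iff every field still holds its Default value.
    fn is_default(&self) -> bool {
        !self.dry_run
            && self.grace_period_seconds.is_none()
            && self.propagation_policy.is_none()
    }
}

fn main() {
    assert!(DeleteParams::default().is_default());
    let custom = DeleteParams { grace_period_seconds: Some(0), ..Default::default() };
    assert!(!custom.is_default());
    println!("ok");
}
```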
Honestly, I'd just say upstream K8s is broken here and let them fix it (option 1). Especially since it looks like they're already working on a fix, and I'd wager there isn't a single K8s cluster in the world that is only running kube-rs clients.
It's not too different from what we are doing now, tbh. We don't have to announce the evar as a workaround (given it is going away if they are fixing it), if that is better, but we should really have the workaround there for us, to at least keep CI happy.
Does it still fail with f440415? I think the integration tests are all using the defaults.
So don't we need to work around this? But apparently it's already passing the tests.

Ah, sorry, I am wrong.
So we can technically remove all this evar gunk for CI 🤔 Personally, I feel like having a valid workaround in an actual release is useful here. Otherwise users have no way around this if they are testing with v1_25 (save for recreating their cluster with an older version), and we can take out the hack when they fix it upstream without much ceremony. But it is an awkward evar lookup, on the other hand. Will remove the workaround if both of you feel the same.
Anyone up for approving this?
I'm still debating whether this workaround should be included or not. Options:

1 is most convenient for users, but I'd prefer not to include this hack, especially because most users won't be using 1.25. I'm leaning towards not including it (option 3).
I really don't think it is a big deal either way; I just don't want us to say we don't support a fully released Kubernetes version because we don't think it's needed yet. That sends a bad signal. Happy to move the workaround out to a branch, though; that is workable for keen users. And since both of you have reservations about including it, it'll at least be less controversial :-)
Released in 0.75.0. Release notes are currently stashed in the draft release for 0.75.0; will publish that tomorrow (with updated code links) at some point after the docs.rs build completes. Tried to highlight the user actions needed, and made a fourth point for all the improvements.
- `cargo upgrade k8s-openapi --workspace` for 0.16.0
- `v1_20` bumped using `just bump-k8s`
TL;DR: the upstream bug is bad and breaking, and we need to do something to support v1_25, but we cannot detect it, so it needs to be user opt-in. Have left the evar-based fix in a branch. See comments below.