TLSCertificateDelegation does not work with networking.k8s.io/v1 Ingress #3544
In the meantime I found https://projectcontour.io/resources/philosophy/#we-meet-users-where-they-are - it looks like Ingress v1 is not yet supported. I was not aware of that, and neither the Compatibility Matrix nor the Support Policy mentions this. Can someone please clarify which k8s Ingress resources are supported? It would probably also be good to document this (I can help here as soon as it's clear which Ingress versions are supported by Contour).
I’ve got a document in the works detailing Ingress support; right now I'm planning on describing the main features/fields and how they differ between Ingress versions (where there is a difference), and it will include this detail.
To answer your original question @ghouscht, Ingress/v1 adds additional restrictions to the API validation that disallow forward slashes (`/`) in the TLS `secretName` field. In addition to the v1 support doc that @sunjayBhatia is working on, I think we should also document this limitation with Ingress v1 & TLSCertificateDelegation.
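For context, the v1 `secretName` field must now be a valid DNS-1123 subdomain, which a `namespace/name` string is not. A small sketch of that check (the regex approximates Kubernetes' `IsDNS1123Subdomain` validation; the secret names are illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

// Approximation of the RFC 1123 subdomain pattern used by
// Kubernetes apimachinery validation for Secret names.
var dns1123Subdomain = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$`)

func main() {
	for _, name := range []string{
		"wildcard-cert",              // plain secret name: valid
		"kube-contour/wildcard-cert", // namespaced form: rejected by v1 validation
		"kube-contour.wildcard-cert", // dot-delimited form: passes validation
	} {
		fmt.Printf("%-28s valid=%v\n", name, dns1123Subdomain.MatchString(name))
	}
}
```

The slash-delimited form fails because `/` is simply not in the allowed character set, which is why the v1beta1 trick cannot survive the move to v1.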
Thank you for your answers. Are there any plans to support TLSCertificateDelegation with v1 Ingress as well (maybe through an annotation, or something similar)? This is a bit of a ticking time bomb: we don't want to use the custom HTTPProxy resource, mainly for portability reasons, and since we use wildcard certs we rely on the availability of TLSCertificateDelegation.
I'm not sure if we've had asks for this yet, but yeah, I could pretty much only see this being supportable in Ingress v1 via an annotation. We would have to be pretty exact about how we expose it, though: since Ingress TLS config has the "Hosts" field, we would need to replicate that in an annotation, which could get messy.
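Purely as a sketch of the design space (this annotation does not exist in Contour and is entirely hypothetical), a minimal version that ignores the per-host mapping problem might look like:

```yaml
# Hypothetical, not implemented: point all TLS entries on this
# Ingress at Secrets in a delegating namespace.
metadata:
  annotations:
    projectcontour.io/tls-secret-namespace: kube-contour
```

This illustrates the messiness mentioned above: a single annotation cannot express a different namespace per `spec.tls` entry, so a real design would need some encoding of the Hosts-to-namespace mapping.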
Edit: I should really check the PR before commenting. Given the changes to Ingress v1 seem to specifically stop TLSCertificateDelegation from being used, I'm not sure how best to bring it along. I seem to recall some Gateway API discussions about bringing TLS certificates across namespaces, for the wildcard use case, so we may be able to take an idea from there as well. It does seem like an annotation may be the only way to go, but I agree it will need careful design if we do that. In general, we're trying to keep away from annotations on Ingress, but maybe this is an exception.
cc @xaleeks
* Ingress: Add detailed documentation Details which features we support, caveats, gotchas Fixes: #3546 Fixes: #3469 Updates: #3544 Signed-off-by: Sunjay Bhatia <[email protected]>
Any news about this problem? For your information, we are using Contour in vSphere with Tanzu, and we have Contour support in our VMware licence.
The best way to use Contour with TLSCertificateDelegation is to use our HTTPProxy CRD, and I would recommend using that if you need something right away. As a bonus, you'll get a bunch of more advanced functionality too! As you can see above, the design of Ingress v1 specifically excludes the hack we were using to be able to use TLSCertificateDelegation with Ingress v1beta1, so we need to do a design exercise to think about how we could do something to allow it again. This will take a while and be tricky, as we're going against how the API is designed.
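For reference, HTTPProxy accepts the delegated namespace/name form directly in its TLS config. A minimal sketch (hostnames, service, and secret names are illustrative):

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: example-app
spec:
  virtualhost:
    fqdn: app.example.com
    tls:
      # namespace/name form works here, resolved via TLSCertificateDelegation
      secretName: kube-contour/wildcard-cert
  routes:
  - services:
    - name: example-app
      port: 80
```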
Going to echo Nick Young here that this is best experienced on HTTPProxy. We are aware v1beta1 is going away in k8s v1.22. What is the portability reason you mentioned that led you to not use HTTPProxy? @ghouscht @barbaluc

If this is vSphere with Tanzu, I’m guessing you’re deploying Contour on a TanzuKubernetesCluster.

@sunjayBhatia thanks for amending the docs to reflect that it only works with v1beta1 resources at the moment.
@xaleeks you're right, we are deploying Contour on a TanzuKubernetesCluster and we have also opened a support ticket.

We keep our secret (a wildcard corporate certificate) in a different namespace because, as described in this blogpost about the Gateway API (https://kubernetes.io/blog/2021/04/22/evolving-kubernetes-networking-with-the-gateway-api/#what-does-the-gateway-api-look-like), we are the "Cluster operators" team: we provide namespaces to "Application developers" where they can deploy their apps with their Ingress using the wildcard certificate. For security reasons we don't want "Application developers" to be able to read and copy the wildcard certificate, so TLSCertificateDelegation allows us to do this "Administrative Delegation" (https://kubernetes.io/blog/2021/04/22/evolving-kubernetes-networking-with-the-gateway-api/#administrative-delegation) of the wildcard certificate by keeping it in a namespace that only "Cluster operators" can access.

Regarding HTTPProxy, we try to avoid it for two reasons. The first is that we prefer to use "vanilla" Helm charts when installing external tools like Harbor (https://github.com/goharbor/harbor-helm), and they mainly use Ingress. To use them with HTTPProxy we would have to fork the chart, or create an umbrella chart with a dependency on the vanilla chart and add an HTTPProxy object in a template directory. The second is that we have multiple Kubernetes platforms: a new internal one with Tanzu using Contour, and a legacy public one on AWS using Nginx. So we prefer to use the same Ingress object, as our developer teams have to interact with both clusters.

I'm a teammate of @barbaluc :)
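A sketch of the delegation side of this setup (namespace and secret names are illustrative):

```yaml
# Lives in the operators-only namespace holding the wildcard cert.
apiVersion: projectcontour.io/v1
kind: TLSCertificateDelegation
metadata:
  name: wildcard-delegation
  namespace: kube-contour   # only "Cluster operators" can read Secrets here
spec:
  delegations:
  - secretName: wildcard-cert
    targetNamespaces:
    - "*"                   # or an explicit list of application namespaces
```

Application teams then reference `kube-contour/wildcard-cert` without ever having read access to the Secret itself.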
Thanks for this context @pablo-ruth. You're right, the Harbor Helm chart would have difficulty with HTTPProxy right now: it only supports Ingress, and settings are applied across all Ingress objects. I would just add that the Harbor Helm chart is forked quite heavily for customizations like Ingress. Not trying to escape from this problem, just some additional color here. But it's not something we can really solve in Contour: Harbor in this case is treated as the application layer, and it pains me to say that the Harbor Helm chart is too rigid at the moment. Also, thinking about this from Harbor's POV, it's near impossible to support all vendor-specific CRDs. This is why Gateway API is the future.

Happy to learn you're leveraging TLSCertificateDelegation to provide ingress across multiple WCP namespaces. As for this request, it's going to take some thought from @youngnick and team on whether we will actually invest here. I don't see an easy fix to your problem if you have to use the upstream Harbor Helm chart.

I'm also curious: does the NSX Advanced Load Balancer that comes with vSphere with Tanzu not have this capability? It is installed on the Supervisor cluster and is much more paravirtualized in its integration.
Mainly for the reasons @pablo-ruth already mentioned, and also the fact that we already have hundreds of vanilla k8s Ingress objects spread across multiple teams, which makes migrating a bit challenging. I'm eagerly waiting for the new Gateway API, as I think it will solve a lot of these problems, and hopefully the community around ingress controllers will adapt and converge a bit on this new API.
@xaleeks FYI we migrated this week from HAProxy to NSX ALB, but we have a Tanzu Standard license which only gives us the ALB at layer 4, so we cannot do TLS termination with it.

We decided to fork Contour to add an "ugly" hack where we use "." instead of "/" as the delimiter between namespace and secret name, and we hope to have a better solution in the future.

```diff
diff --git a/internal/dag/ingress_processor.go b/internal/dag/ingress_processor.go
index 8823cc81..e79e2c1b 100644
--- a/internal/dag/ingress_processor.go
+++ b/internal/dag/ingress_processor.go
@@ -63,7 +63,7 @@ func (p *IngressProcessor) Run(dag *DAG, source *KubernetesCache) {
 func (p *IngressProcessor) computeSecureVirtualhosts() {
 	for _, ing := range p.source.ingresses {
 		for _, tls := range ing.Spec.TLS {
-			secretName := k8s.NamespacedNameFrom(tls.SecretName, k8s.DefaultNamespace(ing.GetNamespace()))
+			secretName := k8s.NamespacedNameFrom(strings.Replace(tls.SecretName, ".", "/", 1), k8s.DefaultNamespace(ing.GetNamespace()))
 			sec, err := p.source.LookupSecret(secretName, validSecret)
 			if err != nil {
 				p.WithError(err).
```
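With a fork along those lines, a v1 Ingress could reference the delegated secret using a dot as the delimiter, since the dot-delimited form passes the v1 field validation. A sketch (names are illustrative, and this only works against such a patched build):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app
spec:
  tls:
  - hosts:
    - app.example.com
    # Rewritten by the patched Contour to "kube-contour/wildcard-cert"
    secretName: kube-contour.wildcard-cert
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-app
            port:
              number: 80
```

Note the ambiguity this introduces: a plain Secret name that happens to contain a dot would also be rewritten, since only the first "." is replaced.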
@pablo-ruth Thanks for this context. Let's keep this open for additional dialogue. Feel free to email me at [email protected] to discuss in more detail |
It would be helpful if this issue were linked in the docs and/or README, and if it were noted that automatic cert generation with standard Ingress and cert-manager is (actually) not working with the latest k8s spec. Just spent 2 hours on this until I found this issue...
I agree that we should document this limitation, thanks for pointing that out @205g0, and sorry to make this harder for you. I'll add the |
This PR should cover it now: #3469. We can close for now.
What steps did you take and what happened:
Technically this is not a Contour issue, but in practice it breaks TLSCertificateDelegation with the networking.k8s.io/v1 Ingress resource. As of today we can use a networking.k8s.io/v1beta1 Ingress like this:
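A representative example of such a manifest (hostnames, service, and secret names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-app
spec:
  tls:
  - hosts:
    - app.example.com
    # namespace/name form, resolved by Contour via TLSCertificateDelegation
    secretName: kube-contour/wildcard-cert
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-app
          servicePort: 80
```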
Please note the secretName with the namespace kube-contour in the TLS section. Applying this definition does work, but it issues a warning about the upcoming deprecation of networking.k8s.io/v1beta1 Ingress:
When trying to migrate such an Ingress definition to networking.k8s.io/v1 one would rewrite it to:
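The equivalent v1 manifest, keeping the same namespaced secretName (names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app
spec:
  tls:
  - hosts:
    - app.example.com
    # rejected: v1 validation requires a plain DNS-1123 Secret name
    secretName: kube-contour/wildcard-cert
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-app
            port:
              number: 80
```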
Applying this definition results in:
What did you expect to happen:
I think there should be a way for users to use TLSCertificateDelegation with networking.k8s.io/v1 Ingress as well. Maybe there is already a way to solve this, but I was not able to find one. Currently this is not a serious issue, since k8s 1.22 is not yet around the corner, but it will become a serious one for us.
Anything else you would like to add:
Environment:
- Contour version: v1.14.0
- Kubernetes version (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):