Decide where to move this repo to now that PSG has closed down #33
Of course, moving the repo would break people's links, which I want to avoid. This issue is advance notice that this will happen in a few months, at which point this repo will be emptied and replaced with a single README.md pointing to the new location. Is this OK for everyone? If there are any issues with this plan, please let me know.

Comments
Do you have a new location in mind? I guess we should change the default label/annotation prefix. Even if we change the default, we could make it configurable, so people with existing clusters can keep the old one in service.
In preparation for this, I added options to the #30 'class' branch to support overriding the annotation/label prefix and the Certificate namespace. So if we move and update the code to a new domain, people with existing clusters can run a backward-compatible version by including these options. Or alternatively you can include your own custom domain and use the improved class-label approach with options like the ones sketched below.
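(The original example here was lost; what follows is a hypothetical sketch of what such overrides might look like as container args. The flag names are illustrative stand-ins, not the actual flags from the #30 branch.)

```yaml
# Hypothetical sketch only: these flag names are illustrative stand-ins for
# the prefix/namespace/class overrides described above, not the actual flags
# from the #30 branch.
apiVersion: v1
kind: Pod
metadata:
  name: kube-cert-manager
spec:
  containers:
  - name: kube-cert-manager
    image: kube-cert-manager:latest   # placeholder image reference
    args:
    - "-label-prefix=certs.example.com"      # custom annotation/label prefix
    - "-certificate-namespace=kube-system"   # namespace to watch for Certificates
    - "-class=default"                       # only act on resources with this class label
```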
@luna-duclos If the repo is staying on GitHub, rather than replacing this one with a README, the 'Transfer ownership' button could be used to create a redirect. Redirects are friendlier, so if that works, I'd much prefer that.
Would this fit as a Kubernetes Incubator project? This seems like the sort of thing that would fit there, given how closely related to the Ingress APIs it is, and also because over time it would be nice to standardize Certificate TPRs among the other things that need them (e.g. ingress controllers themselves).
That's a good question. I'll go over the spec and, if it seems a good match, poke some people to see if they also think it's a good idea.
I believe you can transfer a repo without breaking folks' links: https://help.github.com/articles/about-repository-transfers/ (GitHub automatically proxies when you rename a repo; I suspect it does when you transfer too). Definitely worth at least suggesting this as an incubator project if it's going to continue to be worked on. Note that there is another project doing ingress certs (https://github.com/jetstack/kube-lego), but that one is tightly coupled to the ingress implementation, unlike this project. Tim Hockin recently sponsored a networking incubator project (https://github.com/kubernetes-incubator/external-dns); he might be a good person to approach for guidance here -- I'll delay @-mentioning him in case you have someone else you'd prefer to reach out to.
I think moving this to the kubernetes incubator makes more sense than me moving it to my personal github namespace. @paultiplady @euank, who would you two suggest as initial points of contact for this?
@thockin -- this project provides a simple way to generate LetsEncrypt certs in a k8s cluster, any guidance on whether it would be suitable for an incubator project? It's complementary to kube-lego as it doesn't assume an Ingress, so it enables TLS-to-the-pod.
Hi Paul!
I have a bunch of questions about this.
1. Is this abandoned or actively developed?
2. Can it be folded into kube-lego, or vice-versa?
3. What would be the criteria for "graduation"?
4. Does it actually benefit from being in Kubernetes orgs, vs being in a dedicated org or somewhere else?
We don't have a lot of bandwidth to manage new projects without active maintainers, and we're sensitive to the incubator becoming a dumping ground for unmanaged things.
Hi folks, we are working on providing strong authentication for service-to-service communication on k8s (https://github.com/istio/auth), and we also plan to cover end-user-to-service authentication in the near future. Having a system to auto-manage certificates is very useful to us. We can share our design doc if you are interested in more details. Happy to chat about how our work can benefit each other and whether your work fits in the istio org: https://github.com/istio Cheers! Wencheng
Hey Tim! I can vouch for this project being active; @luna-duclos has been very responsive with the couple of issues I've raised recently. I think this offers something that would be a useful primitive in the k8s feature-set, and since it handles encryption key material, it's quite sensitive; perhaps there is some benefit to making it somewhat official in that regard. I'll defer to Luna on the rest -- in particular, folding this into another project could be a good outcome (though I quite like that it's a small component that's easy to reason about).
Kube-lego has a lot of users and is well regarded, but only does certs for Ingress (AFAIK). This does certs for pods, which is really attractive. I think this is a place where fewer options with a more robust feature set is the right path. We still need to decide if it makes sense for it to exist in our org(s), but that's secondary to me. On a personal note, this scratches an itch I personally have, so I am interested in it :)
Adding to @thockin -
@thockin: I almost think this project as-is is mature and stable enough to move to the kubernetes namespace.
@wlu2016 thanks for the note! I'll have a look at Istio.
@wlu2016 you might be misunderstanding the scope of this project. Service-to-service communication will likely need an internally trusted and managed CA.
My 2c on @thockin's questions:
1. Actively developed, fairly low velocity.
2. Potentially. From my viewpoint, folding this into kube-lego might make sense, if only because they have more popularity (at least by GitHub stars). I also haven't looked at kube-lego enough to be certain it matches up well. Their codebase looks nice at a glance, and I suppose copying over the TPR and a few other things would effectively fold this in, but it's possible it would be easier said than done, and I have no clue what they'd think about it (cc @simonswine).
3. Probably this being deployed as an add-on alongside ingress by default.
4. That's a tossup in my mind. I don't think the incubator has been going long enough to really be certain of the tradeoffs, and I can't find enough ground to stand on either way to form a strong opinion.
@euank, there are several scenarios where we (Istio) need certificates associated with public domain names: for example, end-user traffic (from browsers and mobile apps), ingress, services behind GCLB, etc.
FWIW, @thockin, I've had this in production for a few clients for 3-4 months now. The DNS verification and TPRs were the essential features for our use cases. @luna-duclos has been very responsive and there's an active community around here.
Hi @thockin, we originally used kube-lego and later switched to kube-cert-manager. Frankly KCM is more capable and a lot easier to deploy and manage than kube-lego (was - it may well have improved since I used it). We use KCM for both DNS challenges (multiple DNS API accounts) and HTTP challenges (for client domains we don't control). I contributed a 'class' label system analogous to the nginx Ingress Controller 'class' annotation. KCM has been working great for us in production - it's a no-drama service.

The DNS challenges allow us to manage certificates for services that have no Ingress. We can issue certs with KCM for cluster-hosted SMTP and LDAP servers, and for cluster-internal Services that have no Ingress.

Since the addition of support for 'class' labels and the ability to issue SAN certificates, we consider KCM feature-complete for us. So if incubating, it would be more about refining the code base and adding tests. I looked at kube-lego recently and the only feature I'd like to port from there that KCM is perhaps missing is using a dynamically updated Ingress to route HTTP challenges (#42), without having to touch the Ingress Controller config. Currently I added a global route for ACME challenge requests to KCM in the Ingress Controllers' configs.

A strength of KCM vs kube-lego is that it narrows API watches by 'class' label as well as by namespace(s). This means each KCM instance watches only changes to the Ingress and TPR Certificate resources that instance manages. On a very large cluster you can use per-project or per-DNS-account KCM instances and not waste time/overhead examining every Ingress/Certificate change. Projects like kube-lego and the Ingress Controllers have a tendency to use annotations, so every instance of the service processes every Ingress change in the entire, potentially 5000-node, cluster.

Regarding automated builds, I set that up for myself using AWS CodeBuild and contributed it (in the codebuild folder). It just needs a trigger added. There's a CloudBuild PR incoming. We could also use Travis if that is preferred.

After hyping the project a bit above, my criticism is that it lacks an automated test suite. If incubated I'd like to see tests added with decent code coverage. A bit of mild refactoring wouldn't hurt either - though the code base is actually very small if you look at it, as the lego library does most of the hard work.

@luna-duclos reviewed my PRs in a timely manner plus insisted on documentation updates first, so I'm happy 👍
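To make the class-label approach above concrete, here is a minimal sketch of a DNS-challenge Certificate TPR for a cluster-internal service. The API group reflects KCM's historic default prefix discussed earlier in this thread; the class-label key and spec fields are illustrative assumptions, not the exact schema:

```yaml
# Sketch of a DNS-challenge Certificate TPR using the class-label approach.
# The API group reflects KCM's historic default prefix; the class-label key
# and spec fields are illustrative assumptions, not the exact schema.
apiVersion: stable.k8s.psg.io/v1
kind: Certificate
metadata:
  name: ldap-internal
  namespace: infra
  labels:
    stable.k8s.psg.io/kcm.class: "internal"  # which KCM instance handles this cert
spec:
  domain: ldap.example.com      # cluster-internal service, no Ingress needed
  provider: route53             # DNS-01 challenge via a DNS API account
  email: [email protected]
```

A KCM instance started with the matching class option would watch only Certificates carrying this label, which is what keeps per-instance watch traffic small on large clusters.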
Does this work for Google Cloud LB, for example?
@thockin are you asking if KCM requires an Ingress Controller like kube-lego does, or can KCM also work with LoadBalancers and no Ingress? No, KCM does not require an Ingress Controller. Yes, KCM will work fine with Google Cloud LBs.

Firstly, you can use DNS challenges, which are the primary method used by KCM and don't require an Ingress or LoadBalancer or any incoming Internet access at all. This is my preferred method.

Second, if you need to use an HTTP challenge, e.g. for a domain name where you don't control the DNS zone, KCM supports LoadBalancers or Ingresses or NodeIPs or HostIPs; the only requirements for HTTP challenges are that:

- The certificate target domain name(s) resolve to a public IP (KCM doesn't need to know what that IP is), and
- HTTP requests for the '.well-known/acme-challenge' path route to the KCM Service or Pod.

(To be fair, even though kube-lego only officially works with Ingresses, I think you could use a dummy Ingress and route the requests directly to the kube-lego Service or Pod and that would still work.)
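The second requirement can be met with a single catch-all Ingress rule per hostname. A hedged sketch (host, service name, and port are illustrative; the only real requirement is that these requests reach KCM):

```yaml
# Hedged sketch of the routing requirement: a rule that sends ACME HTTP
# challenge requests for a host to the KCM Service. Host, service name, and
# port are illustrative placeholders.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: acme-challenge-routing
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /.well-known/acme-challenge
        backend:
          serviceName: kube-cert-manager   # KCM's Service
          servicePort: 8080                # illustrative port
```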
OK, well, I think it's a cool and useful project. I could endorse it. I'd still like to see a conversation with kube-lego about whether we could come together into one community-backed effort. If someone wants to write up an incubator proposal, I can sponsor.
Thanks @thockin, I would welcome an eventual merged or successor project. My impression is that, on a maturity basis, you would think KCM's features should be ported to kube-lego. In my view a successor project for both may be better, to reformulate the feature set. I feel what is needed is to look at what both projects have achieved as basically mature, and then move on from there.
@whereisaaron, so after deploying KCM, we immediately ran into the issue of how to update DNS records. At first I thought the logical next step was to file an issue in xenolf/lego, since it only manages TXT records. But after looking around a bit more, the Kubernetes incubator already has a project to tackle the DNS issue (https://github.com/kubernetes-incubator/external-dns). external-dns is trying to replace several existing projects (the kops DNS controller/Mate/route53-kubernetes). I would hate to have multiple projects reinvent the wheel over the same issue, each supporting a different subset of DNS providers.
@luna-duclos sorry to hear that - I'll see if there is a template for the proposal and see if I can get something started.
Hey all, I'm from Jetstack and am interested in discussing a potential move into the Kubernetes incubator. So far, on our end, we've avoided adding DNS challenge support as we're hoping to see some work around generic DNS management in Kubernetes (to save each project having to implement its own set of DNS providers). It seems there's some initial work on that with the external-dns project. If anyone wants to set up a call to discuss their thoughts too and work out any potential collaboration, let me know! I'll put together a proposal this week so we have something to comment on. Sorry for the radio silence on our end around this - it's been a very busy few weeks!
@munnerz I would like to add support to KCM for creating the A/CNAME/ALIAS records to go with a certificate (previously issued with a DNS challenge). But rather than build that into KCM, I too would rather have KCM create a k8s resource or label that e.g. external-dns could act on. Happy to take part in any call.
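A hedged sketch of that division of labour: something (KCM, or the application's own manifests) marks the desired DNS name on a Service, and external-dns creates the A/CNAME record. The annotation key below is the one external-dns watches; the Service itself is an illustrative placeholder:

```yaml
# Sketch only: external-dns watches Services for this hostname annotation
# and creates the corresponding DNS record. The Service is illustrative.
apiVersion: v1
kind: Service
metadata:
  name: smtp
  annotations:
    external-dns.alpha.kubernetes.io/hostname: smtp.example.com
spec:
  type: LoadBalancer
  selector:
    app: smtp
  ports:
  - port: 25
```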
Hi @munnerz and @whereisaaron - I'd also like to jump onto a call. I've got a few systems out using KCM, but I would love to see the kube community standardize on a single system that supports both modes (Ingress & TPR).
Have we had a conversation with kube-lego yet? I'm in favor of moving this, or a derivative of it, to the incubator, but I want to only have one such project, and not alienate kube-lego users (of which there are a lot) if we can avoid it.
@thockin not yet - when would be best to discuss? We're all GMT here, but I can poke people and get them on a call too. This week I am mostly free, except for Thursday. Can we tentatively say 3.30pm UTC on Wednesday (24th)? I'm happy to rearrange if it is more convenient for others.
I have no pref, and I don't need to be there. I just want communication :)
If people could make their email addresses known, either by posting here or dropping me a message at james [at] jetstack.io, I'll add you to the calendar invite!
ross [at] heptio.com - thanks @munnerz
luna.duclos at gmail
cc @linki re. external-dns.
I've merged all pending PRs that had already been reviewed, in preparation for this.
Is there a summary of the meeting? I unfortunately missed it and would love to find out what came out of the call.
We've created a merged repo at https://github.com/munnerz/cert-manager. It's a copy of kube-lego to start with, and the KCM codebase will be integrated into it; from there we can start cooperative work by submitting PRs. Once this is a bit more stable, we want to run it through the Kubernetes incubator.
Awesome! Is that to be a final home, or is the intent to apply for an incubator repo?
The intent is to apply for an incubator repo.
Here's a link to the cert-manager proposal: |
I am trying to find out if there has been any development progress in converging kube-lego and KCM lately. I don't see any changes over at https://github.com/munnerz/cert-manager. Is there an execution timeline? @munnerz @luna-duclos
FYI, there is still https://github.com/kelseyhightower/kube-cert-manager, which this code is loosely built upon, and a relatively new one, https://github.com/tazjin/kubernetes-letsencrypt. The latter looks simple to use: all you need is to annotate services and run it. I wonder if it is worth pinging the main devs of those projects to see if they'd like to join the incubator effort.
@ahmetb I've pushed up some of my initial work on adding a Certificate resource type into the project. We'll be ramping up development on this over the next few weeks, but in all honesty I've just been very busy with client work! I'll be dedicating some of my evenings/weekends to this over the coming weeks, so you should see more work going on. For reference, I've put in a WIP PR (with a few open questions) here: munnerz/cert-manager#5 - if you have any insight on those questions, that'd be fantastic! @ensonic thanks for bringing that second one to my attention! As far as I'm aware, the general consensus on our end is to avoid using annotations on services and instead build a Certificate resource as part of a new API group. @tazjin if you're interested in getting involved with our efforts here, please do!
@ensonic @munnerz My If there is interest in basing these projects on a new
I've been working on cert-manager quite a lot over the last few days, although not in a public repo right now (I'll get it opened up on GitHub in the next few days). I've decided to start again, and have based it heavily on custom resource definitions. I've got some example manifests together that I'm keen for people to look over, and already have some of the required functionality. I've focused heavily on flexibility, and am keen not to lock us into just ACME as a certificate backend. You can look at the manifests here: https://gist.github.com/munnerz/258d3bff69242e86a13a6ea307bbc418 Can I propose we use #kube-lego on the Kubernetes Slack team for development chat? It'd be good to have a less formal communication channel.
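To illustrate the decoupled design described above: the issuing backend (ACME here, but pluggable) lives in an Issuer resource, and a Certificate references it. This is a rough sketch in the spirit of the linked gist; the group and field names are assumptions, not its exact contents:

```yaml
# Rough sketch of the Issuer/Certificate split: the ACME backend config is
# separated from the certificate request. Group/field names are assumptions.
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-staging-key   # Secret holding the ACME account key
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: example-com
spec:
  secretName: example-com-tls   # where the issued cert/key pair is stored
  issuerRef:
    name: letsencrypt-staging
  dnsNames:
  - example.com
  - www.example.com
```

The appeal of this split is that swapping the backend (ACME, an internal CA, etc.) only means changing the Issuer, not every Certificate.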
I've updated my own repo with the new code I've been working on - I managed to issue my first certificate with it yesterday! It's not ready yet, but I'm starting to now open some more specific feature-request issues, and would welcome everyone else to do the same! https://github.com/munnerz/cert-manager Any feedback on structure etc. would be greatly appreciated!
I've been working a fair bit more on this over the last couple of weeks, and have opened numerous issues against the repo in order to track progress and ask for feedback from the community. Right now, cert-manager supports HTTP01 challenges for all ingress types (we are no longer tying ourselves to particular ingress controller implementations), as well as the Google CloudDNS and Cloudflare DNS providers. The DNS provider support comes from 'borrowing' each provider from xenolf/lego, with a couple of small changes to each. I have stuck with the golang acme package to implement the ACME support because, although the xenolf/lego implementation works very well, it doesn't allow the flexibility that cert-manager requires (e.g. it does not expose a mechanism to get an authorization URI so it can be reused). I'm excited about the potential of cert-manager in the future, as it's no longer tied to ACME at all, and we should hopefully be able to implement additional Issuer types. It's worth noting that cert-manager is still in a very early state, and is not ready yet! It needs extensive testing, a test suite building up, and feedback from early users. Finally, the repo has moved to be under our jetstack org.
Hello again! I've been making a lot of progress with cert-manager and have now built out a small e2e test suite, much better logging support (using k8s events), initial support for a plain 'ca'-based issuer, and ACME HTTP01 and DNS01 challenge support, as well as quite a few other features! I've got a number of quite core decisions to make about the spec of the Certificate resource (i.e. cert-manager/cert-manager#86, cert-manager/cert-manager#85) that I'd like to get a bit more consensus on. If anyone has any views to add, be it just a +1 or a -1, I'd really appreciate your time! I've created an issue over in the cert-manager repository for the meeting. Provisionally I've said Mon 18th September @ 2pm UTC; please comment on the issue if this time doesn't work for you and you'd like to attend! cert-manager/cert-manager#89