
Decide where to move this repo to now that PSG has closed down #33

Open
luna-duclos opened this issue Jan 18, 2017 · 55 comments

@luna-duclos

Of course, moving the repo would break people's links, which I want to avoid. This issue is advance notice that this will happen in a few months, at which point this repo will be emptied and replaced with a single README.md pointing to the new location. Is this OK for everyone? If there are any issues with this plan, please let me know.

@whereisaaron
Contributor

Do you have a new location in mind? I guess we should change the default label/annotation prefix. Even if we change the default, we could make it configurable, so people with existing clusters can keep the old one in service.

@whereisaaron
Contributor

In preparation for this I added options to support overriding the annotation/label prefix and the Certificate namespace to the #30 'class' branch. So if we move and update the code to a new domain, people with existing clusters can run a backward compatible version by including these options.

-cert-namespace="stable.k8s.psg.io" -tag-prefix="stable.k8s.psg.io/kcm."

Or alternatively you can include your own custom domain and use the improved class label approach with options like:

-cert-namespace="k8s.example.com" -tag-prefix="kcm.k8s.example.com" -class="default"

@euank
Contributor

euank commented Mar 24, 2017

@luna-duclos If the repo is remaining on github, rather than replacing this one with a README, the 'Transfer ownership' button could be used to create a redirect.

Redirects are more friendly, so if that works, I'd much prefer that.

@euank
Contributor

euank commented Mar 27, 2017

Would this fit as a Kubernetes Incubator project?

This seems like the sort of thing that would fit there, due to how closely related to the Ingress APIs it is, and also because over time it would be nice to standardize Certificate TPRs among any other things that need them (e.g. ingress controllers themselves, actually).

@luna-duclos
Author

That's a good question, I'll go over the spec and if it seems a good match, poke some people to see if they also think it's a good idea.

@paultiplady

I believe you can transfer a repo without breaking folks' links: https://help.github.com/articles/about-repository-transfers/

(GitHub automatically redirects when you rename a repo; I suspect it does when you transfer too.)

Definitely worth at least suggesting this as an incubator project if it's going to continue to be worked on. Note that there is another project doing ingress certs (https://github.com/jetstack/kube-lego), but that one is tightly coupled to the ingress implementation, unlike this project.

Tim Hockin recently sponsored a networking incubator project (https://github.com/kubernetes-incubator/external-dns), he might be a good person to approach for guidance here -- I'll delay @-mentioning him in case you have someone else you'd prefer to reach out to.

@luna-duclos
Author

I think moving this to the kubernetes incubator makes more sense than me moving it to my personal GitHub namespace. @paultiplady @euank, who would you two suggest as initial points of contact for this?

@luna-duclos changed the title from "PSG is no more, we should probably move this repo at some point" to "Decide where to move this repo to now that PSG has closed down" on Apr 16, 2017
@paultiplady

@thockin -- this project provides a simple way to generate LetsEncrypt certs in a k8s cluster, any guidance on whether it would be suitable for an incubator project?

It's complementary to kube-lego as it doesn't assume an Ingress, so it enables TLS-to-the-pod.

@thockin

thockin commented Apr 19, 2017 via email

@wenchenglu

Hi, Folks,

We are working on providing strong authentication for service-to-service communication on k8s (https://github.com/istio/auth). We also plan to cover end-user-to-service authentication in the near future. Having a system to auto-manage certificates is very useful to us. We can share our design doc if you are interested in more details. Happy to chat about how our work can benefit each other and whether your work fits in the istio org: https://github.com/istio

Cheers! Wencheng

@paultiplady

Hey Tim!

I can vouch for this project being active, @luna-duclos has been very responsive with the couple of issues I've raised recently.

I think this does offer something that would be a useful primitive in the k8s feature-set, and, being concerned with encryption key material, it is quite sensitive; perhaps there is some benefit to making it somewhat official in that regard.

I'll defer to Luna on the rest -- in particular folding this into another project could be a good outcome (though I quite like that it's a small component that's easy to reason about).

@thockin

thockin commented Apr 20, 2017 via email

@mlaccetti

Adding to @thockin - kube-cert-manager also does DNS-based checks, which kube-lego doesn't (or didn't, when I last looked), which is important for certain internal-facing systems. We still want TLS, and we don't want to have to punch a hole to the world just to do so!

@luna-duclos
Author

@thockin: I almost think this project as-is is mature and stable enough to move to the kubernetes namespace.
As for criteria for graduation, a few things are currently lacking: automated builds, CI, and unit tests. With those done and the currently open PRs merged, I believe this project to be quite mature, stable, and useful.
kube-lego doesn't quite fit the scope of this project, as this is, as you've noticed, aimed at TLS for pods as well as Ingresses, rather than just Ingress.
I don't currently have an org for KCM, hence why I think the k8s org might be a good fit.

@luna-duclos
Author

@wlu2016 thanks for the note! I'll have a look at istio.

@euank
Contributor

euank commented Apr 20, 2017

@wlu2016 you might be misunderstanding the scope of this project. Service-to-service communication will likely need an internally trusted and managed CA.
This project is for interacting with Let's Encrypt and is currently only useful for creating certificates associated with public domain names.

@euank
Contributor

euank commented Apr 20, 2017

My 2c for @thockin's questions:

Is this abandoned or actively developed?

Actively developed, fairly low velocity.

Can it be folded into kube-lego or vice-versa?

Potentially. From my viewpoint, folding this into kube-lego might make sense, if only because they have more popularity (at least by GitHub stars).
It's not my call to make though.

I also haven't looked at kube-lego closely enough to be certain it matches up well. Their codebase looks nice at a glance, and I suppose copying over the TPR and a few other things would effectively fold this in, but it may be easier said than done, and I have no clue what they'd think about it (cc @simonswine).

What would be the criteria for "graduation" ?

This being deployed as an addon alongside ingress by default probably.

Does it actually benefit from being in Kubernetes orgs, vs being in a dedicated org or somewhere else?

That's a tossup in my mind.
Convergence / discoverability is nice, especially when there's TPRs involved.
The TPRs here are sugar over secrets so it's a little less important, but still worth noting.

I don't think the incubator program has been going long enough to really be certain of the tradeoffs, and I can't find enough ground to stand on either way to form a strong opinion.

@wenchenglu

@euank, there are several scenarios where we (Istio) need certificates associated with public domain names. For example, end-user traffic (from browsers and mobile apps), ingress, services behind GCLB, etc.

@rosskukulinski

FWIW, @thockin, I've had this in production for a few clients for 3-4 months now. The DNS verification and TPRs were the essential features for our use cases.

@luna-duclos has been very responsive and there's an active community around here.

@whereisaaron
Contributor

whereisaaron commented Apr 21, 2017

Hi @thockin, we originally used kube-lego and later switched to kube-cert-manager. Frankly KCM is more capable and a lot easier to deploy and manage than kube-lego (was; it may well have improved since I used it). We use KCM for both DNS challenges (multiple DNS API accounts) and HTTP challenges (for client domains we don't control). I contributed a 'class' label system analogous to the nginx Ingress Controller 'class' annotation. KCM has been working great for us in production; it's a no-drama service.

The DNS challenges allow us to manage certificates for services that have no Ingress. We can issue certs with kcm for cluster-hosted SMTP and LDAP servers, and for cluster-internal Services that have no Ingress.
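For anyone unfamiliar with how those DNS challenges work mechanically: the ACME DNS-01 flow boils down to publishing a single TXT record derived from the account key, which is why no inbound Internet access is needed. A minimal sketch of that derivation (per RFC 8555; the names and inputs here are illustrative, not KCM's actual API):

```python
import base64
import hashlib

def b64url(data: bytes) -> str:
    # base64url without padding, as ACME requires.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def dns01_txt_record(domain: str, key_authorization: str):
    """Return the (name, value) pair to publish for a DNS-01 challenge.

    The value is the base64url-encoded SHA-256 digest of the key
    authorization string (token + "." + account-key thumbprint).
    """
    name = f"_acme-challenge.{domain}"
    value = b64url(hashlib.sha256(key_authorization.encode()).digest())
    return name, value

name, value = dns01_txt_record("smtp.example.com", "tok123.exampleThumbprint")
print(name)  # _acme-challenge.smtp.example.com
```

The CA then queries that TXT record over public DNS, so the certificate's target service never has to accept a connection from the outside world.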

Since the addition of support for 'class' labels and the ability to issue SAN certificates, we consider KCM feature complete for us. So if incubating, it would be more a matter of refining the code base and adding tests. I looked at kube-lego recently and the only feature I'd like to port from there that KCM is perhaps missing is using a dynamically updated Ingress to route HTTP challenges (#42), without having to touch the Ingress Controller config. Currently I added a global route for ACME challenge requests to KCM in the Ingress Controller configs.

A strength of KCM vs kube-lego is that it narrows API watches by 'class' label as well as by namespace(s). This means each KCM instance watches only changes to the Ingress and TPR Certificate resources that instance manages. On a very large cluster you can use per-project or per-DNS-account KCM instances and not waste time/overhead examining every Ingress/Certificate change. Projects like kube-lego and the Ingress Controllers tend to use annotations, so every instance of the service processes every Ingress change in the entire, potentially 5000-node, cluster.
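To illustrate the scaling point in a toy sketch (the label key mirrors the flag examples earlier in the thread, and the data shapes are made up, not KCM's internals): a label selector lets the API server hand each controller instance only the objects in its own class, whereas annotation-based matching forces every instance to inspect every object.

```python
# Toy model of class-based watch narrowing. In a real controller the
# selector is passed to the API server, which filters server-side;
# annotations offer no such filtering, so every object reaches every instance.

CLASS_LABEL = "stable.k8s.psg.io/kcm.class"  # illustrative label key

def select_by_class(objects, wanted="default", label_key=CLASS_LABEL):
    """Return only the objects whose class label matches `wanted`."""
    return [o for o in objects if o.get("labels", {}).get(label_key) == wanted]

ingresses = [
    {"name": "shop", "labels": {CLASS_LABEL: "default"}},
    {"name": "intranet", "labels": {CLASS_LABEL: "internal"}},
    {"name": "legacy", "labels": {}},
]
print([o["name"] for o in select_by_class(ingresses)])  # ['shop']
```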

Regarding automated builds, I set that up for myself using AWS CodeBuild and contributed it (in the codebuild folder). It just needs a trigger added. There's a CloudBuild PR incoming. We could also use Travis if that is preferred.

After hyping the project a bit above, my criticism of this project is that it lacks an automated test suite. If incubated, I'd like to see tests added with decent code coverage. A bit of mild refactoring wouldn't hurt either, though the code base is actually very small if you look at it, as the lego library does most of the hard work.

@luna-duclos reviewed my PRs in a timely manner and insisted on documentation updates first, so I'm happy 👍

@thockin

thockin commented Apr 28, 2017 via email

@whereisaaron
Contributor

whereisaaron commented May 2, 2017

Does this work for Google Cloud LB, for example?

@thockin are you asking if KCM requires an Ingress Controller like kube-lego does, or can KCM also work with LoadBalancers and no Ingress?

No, KCM does not require an Ingress Controller. Yes, KCM will work fine with Google Cloud LBs.

Firstly, you can use DNS challenges, which are the primary method used by KCM and don't require an Ingress, a LoadBalancer, or any incoming Internet access at all. This is my preferred method.

Second, if you need to use an HTTP challenge, e.g. for a domain name where you don't control the DNS zone, KCM supports LoadBalancers or Ingresses or NodeIPs or HostIPs; the only requirements for HTTP challenges are that:

  • The certificate target domain name(s) resolve to a public IP (KCM doesn't need to know what that IP is), and
  • HTTP requests for the '.well-known/acme-challenge' path route to the KCM Service or Pod

(To be fair, even though kube-lego only officially works with Ingresses, I think you could use a dummy Ingress and route the requests directly to the kube-lego Service or Pod and that would still work.)
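For reference, the '.well-known/acme-challenge' routing requirement exists because the CA fetches a well-known path and expects a key authorization derived from the account key. A minimal sketch of what the responder must serve (per RFC 8555 and RFC 7638; the token and JWK here are illustrative placeholders):

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    # base64url without padding, as ACME requires.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def jwk_thumbprint(jwk: dict) -> str:
    # RFC 7638: SHA-256 over the JWK's required members with keys sorted
    # lexicographically (assumes `jwk` contains only the required members).
    canonical = json.dumps(jwk, separators=(",", ":"), sort_keys=True)
    return b64url(hashlib.sha256(canonical.encode()).digest())

def http01_response(token: str, account_jwk: dict):
    """Return (path, body): the CA fetches the path and expects the body."""
    path = f"/.well-known/acme-challenge/{token}"
    return path, f"{token}.{jwk_thumbprint(account_jwk)}"

path, body = http01_response("tok123", {"kty": "EC", "crv": "P-256", "x": "...", "y": "..."})
print(path)  # /.well-known/acme-challenge/tok123
```

So as long as requests for that path reach the challenge responder, it doesn't matter whether they arrive via a LoadBalancer, an Ingress, or a bare NodeIP.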

@thockin

thockin commented May 2, 2017 via email

@whereisaaron
Contributor

Thanks @thockin, I would welcome an eventual merged or successor project. My impression is the kube-lego people only concern themselves with tight integration with a single Ingress Controller. So a unified effort depends on their community's willingness to make Ingress support just one feature of a certificate service.

On a maturity basis, you would think that KCM's features should be ported to kube-lego, as they have had more people and time to make a mature product and have much better test coverage. However, on a feature basis, there is only the one Ingress-handling feature to port from kube-lego for KCM to entirely cover the kube-lego feature set; beyond that, KCM provides a laundry list of other important (IMHO) features and performance benefits on top.

In my view a successor project to both may be better placed to reformulate the feature set. I feel what is needed is to treat what both projects have achieved as basically mature, and then move on to:

  1. A successor or extension to the lego library (a key part of both current projects) to support creating/deleting A/CNAME records in all the supported DNS providers, rather than just the current support for challenge TXT records. (This would enable k8s deployments to entirely automate DNS+Certificate+Ingress creation from k8s-managed resources.)
  2. A clean way to store multiple DNS/cloud provider account credentials in k8s Secrets that can be discovered and used by the certificate service, but also by other k8s services (this could be an independent single-task service or just a TPR or Secret specification).
  3. A heavily refactored or re-assembled kube-lego+KCM project that supports all ACME challenge types equally well (definitely DNS and HTTPS+SNI, and maybe HTTP, though that could be deprecated at this stage).

@nanliu
Contributor

nanliu commented May 2, 2017

@whereisaaron, after deploying KCM we immediately ran into the issue of how to update DNS records. At first I thought the logical next step was to file an issue against xenolf/lego, since it only manages TXT records. But after looking around a bit more, the Kubernetes incubator already has a project tackling the DNS issue (https://github.com/kubernetes-incubator/external-dns). external-dns is trying to replace several existing projects (the kops dns-controller, Mate, route53-kubernetes). I would hate to have multiple projects reinvent the wheel over the same issue, each supporting a different subset of DNS providers.

@mlaccetti

@luna-duclos sorry to hear that - I'll see if there is a template for the proposal and see if I can get something started.

@munnerz

munnerz commented May 18, 2017

Hey all, I'm from Jetstack and am interested in discussing a potential move into kubernetes-incubator. From our end, we're happy to see it happen if it's possible in order to drive engagement and hopefully push the project forward to stabilise it.

So far, on our end, we've avoided adding DNS challenge support as we're hoping to see some work around generic DNS management in Kubernetes (to save each project having to implement its own set of DNS providers). It seems there's some initial work in the external-dns project, although I'm unsure whether it can currently be used to create arbitrary records as the ACME protocol requires.

If anyone wants to set up a call to discuss their thoughts too and work out any potential collaboration, let me know! I'll put together a proposal this week so we have something to comment on.

Sorry for the radio silence on our end around this - it's been a very busy few weeks!

@whereisaaron
Contributor

@munnerz I would like to add support to KCM for creating the A/CNAME/ALIAS records to go with a certificate (previously issued with a DNS challenge). Rather than build it into KCM, though, I would rather have KCM create a k8s resource or label that, e.g., external-dns watches for and creates the A/CNAME/ALIAS record from. My bias is that I think it is a mistake, and a barrier to scaling, that external-dns uses annotations, which aren't indexed and can't be filtered or watched for, rather than labels.

Happy to take part in any call.

@rosskukulinski

Hi @munnerz and @whereisaaron - I'd also like to jump on a call. I've got a few systems out using KCM, but I would love to see the kube community standardize on a single system that supports both modes (Ingress & TPR).

@thockin

thockin commented May 22, 2017 via email

@munnerz

munnerz commented May 22, 2017

@thockin not yet - when would be best to discuss? We're all GMT here, but I can poke people and get them on a call too.

This week I am mostly free, except for Thursday. Can we tentatively say 3.30pm UTC on Wednesday (24th)? I'm happy to rearrange if it is more convenient for others.

@thockin

thockin commented May 22, 2017 via email

@munnerz

munnerz commented May 22, 2017

If people could make their email addresses known, either by posting here or by dropping me a message at james [at] jetstack.io, I'll add you to the calendar invite!

@rosskukulinski

ross [at] heptio.com - thanks @munnerz

@luna-duclos
Author

luna.duclos at gmail

@unguiculus

cc @linki re. external-dns

@luna-duclos
Author

I've merged all pending PRs that had already been reviewed, in preparation for this.

@ahmetb

ahmetb commented May 30, 2017

Is there a summary of the meeting? I unfortunately missed it and would love to find out what came out of the call.

@luna-duclos
Author

luna-duclos commented May 31, 2017

We've created a merged repo at https://github.com/munnerz/cert-manager. It's a copy of kube-lego to start with; the KCM codebase will be integrated into it, and from there we can start cooperative work by submitting PRs.

Once this is a bit stable, we want to run it through the kubernetes incubator

@thockin

thockin commented May 31, 2017 via email

@luna-duclos
Author

The intent is to apply for an incubator repo

@nanliu
Contributor

nanliu commented May 31, 2017

@ahmetb

ahmetb commented Jun 21, 2017

I am trying to find out if there has been any development progress in converging kube-lego and KCM lately. I don't see any changes over at https://github.com/munnerz/cert-manager. Is there an execution timeline? @munnerz @luna-duclos

@ensonic

ensonic commented Jun 29, 2017

FYI, there is still https://github.com/kelseyhightower/kube-cert-manager, which this code is loosely built upon, and a relatively new one, https://github.com/tazjin/kubernetes-letsencrypt. The latter looks simple to use: all you need to do is annotate Services and run it. I wonder if it is worth pinging the main devs of those projects to see if they would like to join the incubator.

@munnerz

munnerz commented Jun 30, 2017

@ahmetb I've pushed up some of my initial work on adding a Certificate resource type into the project. We'll be ramping up development on this over the next few weeks, but in all honesty I've just been very busy with client work!

I'll be dedicating some of my evenings/weekends to this over the coming weeks however, so you should see some more work going on!

For ref., I've put in a WIP PR (with a few open questions) here: munnerz/cert-manager#5 - if you've any insight on these Q's that'd be fantastic!

@ensonic thanks for bringing that second one to my attention! As far as I'm aware, the general consensus on our end is to avoid using annotations on services and instead build a Certificate resource as part of a new API group. @tazjin if you're interested in getting involved with our efforts here, please do!!

@tazjin

tazjin commented Jun 30, 2017

@ensonic @munnerz My kubernetes-letsencrypt is slightly different in that it relies only on DNS challenges and not HTTP challenges.

If there is interest in basing these projects on a new TPR/CRD, it'd be cool if we could share that resource definition across multiple projects. I'm planning support for that in my project.

@munnerz

munnerz commented Jul 21, 2017

I've been working on cert-manager quite a lot over the last few days, although not in a public repo right now (I'll get it opened up on GitHub in the next few days). I've decided to start again, and have based it heavily on custom resource definitions.

I've got some example manifests together that I'm keen for people to look over, and some of the required functionality is already in place. I've focused heavily on flexibility, and am keen not to lock us into just ACME as a certificate backend. You can look at the manifests here: https://gist.github.com/munnerz/258d3bff69242e86a13a6ea307bbc418
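For those who haven't clicked through to the gist, the general shape being discussed is a namespaced Certificate resource that references an issuer and a target Secret, so the certificate spec is decoupled from any particular backend. A rough sketch of that shape, written as a Python dict for readability (the group/version and field names are my assumptions for illustration; the gist is authoritative):

```python
# Illustrative shape of a Certificate custom resource. The API
# group/version and field names are assumptions, not the gist's exact spec.
certificate = {
    "apiVersion": "certmanager.k8s.io/v1alpha1",  # assumed group/version
    "kind": "Certificate",
    "metadata": {"name": "example-com", "namespace": "default"},
    "spec": {
        "secretName": "example-com-tls",               # where the signed cert lands
        "issuerRef": {"name": "letsencrypt-staging"},  # pluggable backend reference
        "dnsNames": ["example.com", "www.example.com"],
    },
}
print(certificate["kind"])  # Certificate
```

The issuerRef indirection is what makes the "not just ACME" goal possible: a CA-based or other issuer can satisfy the same Certificate spec.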

Can I propose we use #kube-lego on the kubernetes slack team for development chat? It'd be good to have a less formal communication channel.

@munnerz

munnerz commented Jul 22, 2017

I've updated my own repo with the new code I've been working on: https://github.com/munnerz/cert-manager

I managed to issue my first certificate with it yesterday! It's not ready yet, but I'm now starting to open some more specific feature-request issues, and would welcome everyone else to do the same!

Any feedback on structure etc. would be greatly appreciated!

@munnerz

munnerz commented Aug 8, 2017

I've been working a fair bit more on this over the last couple of weeks, and have opened numerous issues against the repo in order to track progress/ask for feedback from the community.

Right now, cert-manager supports HTTP01 challenges for all Ingress types (we are no longer tying ourselves to particular ingress controller implementations), as well as the Google CloudDNS and Cloudflare DNS providers. The DNS provider support comes from 'borrowing' each provider from xenolf/lego, with a couple of small changes to each. I have stuck with the golang acme package to implement the lego provider because, although the xenolf/lego implementation works very well, it doesn't allow the flexibility that cert-manager requires (e.g. it does not expose a mechanism to get an authorization URI so it can be reused).

I'm excited about the potential of cert-manager in the future, as it's no longer tied to ACME at all, and we should hopefully be able to implement additional Issuer types.

It's worth noting that cert-manager is still in a very early state and is not ready yet! It needs extensive testing, a test suite to be built up, and feedback from early users.

Finally, the repo has moved under our jetstack-experimental organisation, although that should be evident as you'll be redirected when visiting mine 😄 https://github.com/jetstack-experimental/cert-manager

@munnerz

munnerz commented Sep 11, 2017

Hello again!

I've been making a lot of progress with cert-manager and have now built out a small e2e test suite, much better logging (using k8s Events), initial support for a plain 'ca'-based issuer, and ACME HTTP01 and DNS01 challenge support, as well as quite a few other features!

I've got a number of quite core decisions to make over the spec of the Certificate resource (ie. cert-manager/cert-manager#86, cert-manager/cert-manager#85) that I'd like to get a bit more consensus on.

If anyone has any views to add to this, be it just a +1 or a -1, I'd really appreciate your time!

I've created an issue over in the cert-manager repository for the meeting. Provisionally I've said Mon 18th September @ 2pm UTC, however please comment on the issue if this time doesn't work for you and you'd like to attend! cert-manager/cert-manager#89
