Define an interface for meta-pipeline client #61
We have this already a little bit in the existing code. A further step would be starting to move this client/interface out of Lighthouse. That said, what about avoiding a hard dependency to `jx`? I guess there are two options to avoid a hard dependency to `jx`: one is to run the meta-pipeline behind a separate microservice with a REST/gRPC API; the second option is to keep the in-process Go client but define the interface on the Lighthouse side, so only the implementation lives in `jx`.
Given that, thoughts?
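For illustration, a minimal sketch of what the interface-on-the-Lighthouse-side variant could look like; every identifier here is hypothetical, not actual Lighthouse API:

```go
// Sketch: Lighthouse owns the client interface; jx (or any other engine)
// supplies an implementation wired in at startup. Hypothetical names only.
package plumber

// PipelineRequest carries the webhook data needed to start a meta-pipeline.
type PipelineRequest struct {
	RepoOwner string
	RepoName  string
	Branch    string
	SHA       string
}

// MetaPipelineClient is what Lighthouse would code against; the jx
// implementation would live in a separate module.
type MetaPipelineClient interface {
	Create(req *PipelineRequest) error
}
```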
I agree that running a separate microservice and using a REST/gRPC interface is the best option to decouple from `jx`; in my opinion the current `pipelinerunner` already works this way. @jstrachan is advocating to start the meta-pipeline directly from hook/trigger in order to have fewer moving parts.
I'm hoping with some refactoring we can keep the codebase of creating the meta pipeline CRDs pretty small with minimal dependencies, so it won't be too much of an issue to invoke that code directly from a single auto-scaled pod to keep the footprint of Jenkins X nice and small.
Agreed. Also my preferred choice.
+1
Agreed. It is not my preferred solution either; I mainly mentioned it for completeness. Everything is a trade-off though: if we'd consider no module dependency to `jx` a hard requirement, the separate service would be the only option.
Absolutely. Even Lighthouse aside, I would have suggested a change in the interface of the pipelinerunner. So I fully expect to change/adjust the interface.
Sounds reasonable
I am not familiar with gRPC, so don't really know atm.
Do you refer to the gRPC option?
I don't think a gRPC/REST service would be hard to implement.
@hferentschik it's not about whether it's easy to do - it's whether we should do it or not. We're talking about a simple Go function call to create the meta pipeline CRDs here. It's better footprint-wise (1 container and 1 HTTP handler rather than 2 containers and 2 HTTP handlers), we avoid the error handling/retry of a 2nd HTTP call, we've only got 1 liveness/readiness check to do, and we don't have to deal with error handling if the 2nd container is down - plus the extra hop adds a bit of network/CPU overhead marshalling the data.

But for me the biggest reason to make it 1 simple container rather than 2 is ease of management, observability + diagnostics for end users. With the 1-container lighthouse option, with the meta pipeline code statically linked inside lighthouse, we go from webhook handler -> meta pipeline CRDs in 1 container with a smaller footprint of things that can go wrong, and we can easily associate all the logrus logging context through all parts of the lighthouse/meta pipeline code. Then if a webhook doesn't result in a meta pipeline being created we have exactly 1 pod to look in for the logs, not 2 to try to figure out what went wrong. It's already a huge challenge for end users trying to figure out why pipelines don't work with Prow, with hook, pipeline, pipelinerunner and tektoncontroller pods, and folks randomly restarting pods to try to get things working. The simplest possible thing to manage is 1 pod with 1 log so you can see if it's working, why it isn't, and restart it if needed - so there's only 1 liveness check to worry about etc.

Our #1 aim is the simplest possible software that's rock solid and as simple to manage/maintain as possible with the lowest footprint/cost. If we had tons of different PlumberClient implementations to worry about, so we had lots of implementations to switch out, then sure, splitting this 1 fairly simple webhook handler into 2 microservices makes sense - but given that we will only ever have 1 implementation of the PlumberClient any time soon (the meta factory function call), I prefer avoiding increasing the complexity + footprint at this time. If we need to, it will be trivial to re-evaluate, or even add 2 different charts/options which use embedded vs remote plumber implementations. (If tekton folks really wanna remove the jx dependency we could upstream that to tekton at some later point and have a jx binary distro of lighthouse with jx inlined.)

If the motivation for the decoupling is worrying about go module dependency size, let's tackle that head on and make the meta factory function call have the minimal possible dependency tree by refactoring the implementation, rather than refactoring lighthouse into 2 containers/deployments.
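To make the single-container argument concrete, here is a rough sketch of the flow being described, with the meta-pipeline creation invoked as a plain in-process Go call; all function names here are illustrative, not actual Lighthouse code:

```go
// Sketch of the one-container flow: webhook handler -> meta pipeline CRDs
// without a second HTTP hop. Everything below is hypothetical.
package main

import (
	"log"
	"net/http"
)

// createMetaPipeline stands in for the real meta-pipeline factory function
// that would create the CRDs directly via the jx client (signature assumed).
func createMetaPipeline(owner, repo, sha string) error {
	// ... create the meta-pipeline CRDs here ...
	return nil
}

func hookHandler(w http.ResponseWriter, r *http.Request) {
	// Parse the webhook (elided) and invoke the meta-pipeline code in-process:
	// one container, one handler, one log to inspect when things go wrong.
	if err := createMetaPipeline("acme", "demo", "abc123"); err != nil {
		log.Printf("failed to create meta pipeline: %v", err)
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/hook", hookHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```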
What does a smaller footprint mean? IMO the footprint is defined in terms of CPU and memory usage. Considering that the current jx binary is 220MB and gets statically compiled into Lighthouse, that's a lot of memory introduced by many things which are not required by Lighthouse. So coming back to the footprint question: is it better to have one fat pod which consumes a lot of memory, or two lighter pods?
In a real use-case scenario, a minimum of two hook pods is required for high availability; otherwise the user will experience downtime whenever the hook container goes down.
We want to avoid making this mistake again by properly designing things instead of quick hacks. Unfortunately, the current code base doesn't make that easy.
I am totally for this, but it introduces a hard dependency to the whole `jx` code base. I would like us to avoid this in Lighthouse if possible.
In a clean implementation, the client needs to be defined by the metapipeline. Lighthouse is only a consumer of this client.
Are we up for extracting the pipeline and the metapipeline bits into a separate repo outside of `jx`?
@ccojocar I'm not talking about introducing a hard dependency on the whole `jx` repo. I hope we can refactor the jx repo so that the meta pipeline code can be easily consumed with modest dependencies, without 200Mb of code (e.g. avoiding the option stuff which brings in pretty much all the jx dependencies) and without bloating the lighthouse binary. If that doesn't work then let's try to move the CRDs + associated meta pipeline code into a separate repo? The result should be a tiny single container for lighthouse that we can auto-scale from zero -> N if cost is an issue, or have, say, a minimum of 3 containers running all the time for HA.
All of this sounds good to me! I would like us to make this a first priority; this is why I opened this issue in the first place. I would like us to define a clean interface even if it is built-in and defined in the jx repository. I hope that we all agree that the current interface is not clean.
Totally agreed. We really need to refactor `jx`; we should be able to make the meta pipeline code consumable with minimal dependencies.
BTW a quick win could be to split up the option/factory stuff (which currently depends on knative build + all kinds of stuff) so folks can just use the jx client / kube client and no other dependencies |
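As a sketch of what dropping the option/factory machinery could look like, this builds a Kubernetes client straight from the in-cluster config using only client-go; the jx clientset could be constructed the same way from the same rest.Config:

```go
// Sketch: create clients with no cobra/CommonOptions machinery,
// using only client-go.
package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config is enough for a pod like Lighthouse; no factory needed.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("failed to load in-cluster config: %v", err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("failed to create kube client: %v", err)
	}
	_ = client // hand this (and a jx clientset) to the meta pipeline code
}
```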
even today before we refactor jx, the lighthouse binary is 101Mb - but it should shrink massively if we can move away from the options code & just reuse the real dependencies we need for meta pipeline |
I'd argue that in its current form, two pods/services (pipelinerunner + lighthouse) is the more lightweight option. Provided we can refactor the jx code so the meta pipeline is easier to consume, this might change; however, that refactoring probably takes more time than the pipelinerunner + lighthouse option. Another reason for two services is the ability to deploy new versions of the meta pipeline without impacting the Lighthouse deployment; if these two things are tightly coupled, we need to re-build and re-deploy everything on each release. Last but not least, I seem to recall discussions that Lighthouse should be a "generic" webhook handler which potentially also works outside of Jenkins X. That would definitely increase the likelihood of building a community around it.
The controller is not really a lot of code; whatever design concerns we have, I think we can easily address them. Regarding stability, the main issue was the failing git clone commands and our intentional destruction of the pod in these cases. Judging by the DataDog logs, this should be addressed by issue #4862 and the corresponding pull request.
This is one of my main concerns as well. Even if we extract the meta pipeline code into another package which is easier to consume, we still have to deal with all the overrides. If we are talking about tackling things head on, sorting out the `jx` module dependencies is the thing to tackle. I am open to merging these two services further down the road in case we see benefits in terms of manageability, observability or resource consumption. However, I think the safer, cleaner and in fact quicker road to integrate the meta pipeline into Lighthouse is via the pipelinerunner.
I think that's something I'd like to avoid for now. That feels like a lot of work. There are several things which would need extracting.
One way or another we should define a clean interface.
+1
Here's a PR for removing all the cobra / CommonOptions / Options code from lighthouse - it also shrinks the binary from 101Mb to 68Mb: jenkins-x/jx@4843b3a
@hferentschik here's a PR that simplifies the interface further. FWIW I'm not sure there are many ways to make this 1 function, 1 argument interface much cleaner: lighthouse/pkg/plumber/interface.go, lines 4 to 5 in 835c1c1
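The embedded snippet didn't survive here, but based on the "1 function, 1 argument" description the interface is roughly the following (names reconstructed, so the actual file may differ slightly):

```go
// Reconstructed shape of the referenced two-line interface in pkg/plumber;
// the real field set of PipelineOptions is elided.
package plumber

// PipelineOptions is the single argument (fields omitted in this sketch).
type PipelineOptions struct{}

// Plumber creates the pipeline resources for a webhook event.
type Plumber interface {
	Create(*PipelineOptions) (*PipelineOptions, error)
}
```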
|
It would be nice to have a client with a clean interface to start/stop the meta-pipeline. This can be defined for now in `jx`. The interface will define the required data struct and the actions which can be performed.
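One possible shape for such a client, purely as a sketch; none of these identifiers exist yet and the actual design would live in `jx`:

```go
// Hypothetical sketch of the client described above: a data struct plus
// the start/stop actions.
package metapipeline

// PipelineParams is the data struct the interface would define: everything
// needed to identify and start a meta-pipeline.
type PipelineParams struct {
	GitURL string
	Branch string
	SHA    string
}

// Client defines the actions which can be performed on the meta-pipeline.
type Client interface {
	Start(params PipelineParams) (string, error) // returns a pipeline ID
	Stop(pipelineID string) error
}
```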
@hferentschik you can assign this to yourself if you want to work on it.