ADR14: ArgoCD as Candidate for a Continuous Deployment System #1889
Conversation
Thanks for the writeup, I generally agree with the text and will add some remarks here.
Another thought: if the team is fine with skipping the attempt to keep the configuration backwards compatible (which I get the sense of, given how we added redis-2), I'd also suggest running all Argo-controlled deployments like this and getting rid of the values script workaround.
Yeah, I agree. I hope it was clear that we pause further migrations until we make this smoother with the current services. If that's not clear from the text I wrote, could you suggest an alteration to make it clearer?
I'm unkeen to fragment the deployment system further; that's why I didn't suggest rolling this back locally. I think we could document the ways to test stuff locally much more clearly, though, and ideally find some solution for testing that feels a lot more GitOps-native. I suspect this means making the local clone of the git repo visible in the cluster to Argo.
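To sketch what that could look like (a hypothetical setup, not our actual config: the local-git Service, repo path, and namespaces below are all placeholder names): serve the engineer's local clone from a small git server inside the cluster and point a throwaway ArgoCD Application at it, so the test flow stays sync-from-git just like production.

```yaml
# Hypothetical sketch: an ArgoCD Application that syncs from a git server
# running inside the cluster, which serves an engineer's local clone.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service-local-test   # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    # In-cluster git daemon exposed as a Service (placeholder URL)
    repoURL: http://local-git.dev-tools.svc.cluster.local/deploy.git
    targetRevision: HEAD
    path: k8s/my-service        # placeholder manifest/chart path
  destination:
    server: https://kubernetes.default.svc
    namespace: my-service-test  # throwaway test namespace
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

The appeal is that nothing about the deployment path changes between test and production; only the repoURL differs.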
Yeah, for me it's pretty perfect for the UI and API now. I like it a lot. a) yeah, that could be nice. We can also consider having Prometheus metrics exported and adding alerts on them.
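On the metrics idea: ArgoCD's application controller already exposes an `argocd_app_info` gauge with `sync_status` and `health_status` labels, so alerts on stuck or unhealthy apps are straightforward. A sketch, assuming the prometheus-operator's PrometheusRule CRD is available (names, thresholds, and severities here are illustrative only):

```yaml
# Illustrative alerting rules on ArgoCD's built-in metrics.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: argocd-app-alerts   # placeholder name
  namespace: monitoring     # placeholder namespace
spec:
  groups:
    - name: argocd
      rules:
        - alert: ArgoAppOutOfSync
          # App has reported OutOfSync continuously for 15 minutes
          expr: argocd_app_info{sync_status="OutOfSync"} == 1
          for: 15m
          labels:
            severity: warning
          annotations:
            summary: "ArgoCD app {{ $labels.name }} has been OutOfSync for 15m"
        - alert: ArgoAppUnhealthy
          # App health is anything other than Healthy (Degraded, Missing, ...)
          expr: argocd_app_info{health_status!="Healthy"} == 1
          for: 15m
          labels:
            severity: warning
          annotations:
            summary: "ArgoCD app {{ $labels.name }} is not Healthy"
```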
I'd really like to try and have this closed and merged in a somewhat timely manner; ideally we could collect all feedback and make any alterations to the ADR text to merge by Jan 10th. I was particularly wondering if @deer-wmde and @AndrewKostka wanted to add/adjust anything, as people who'd already expressed opinions on or worked on the topic.
- Local development and testing of ArgoCD-deployed services is less trivial than with helmfile
- Releasing and using new helm charts for services we manage has caused engineers more friction than helmfile did
- Deploying new services required some onboarding even for senior engineers
- Generation of values files for the migrated redis service was confusing to engineers who didn't build this system
These should be included in the negative consequences section.
We have experienced:
- Deploying new images of existing services has worked smoothly and been relatively transparent to engineers
- A sustained low cycle time and an increased number of deployments for the migrated services
I believe this was more correlation than causation.
We had an increase in UI deployments because we had a bunch of smaller UI tasks.
Bug: T377082