feat(cli): reduce redundancy on context to cluster flags in command deployment create
#1156
base: main
Conversation
…data from flag.deploymentClusters and parsed data inside local config Signed-off-by: instamenta <[email protected]>
Codecov Report. Attention: Patch coverage is
Additional details and impacted files:
@@ Coverage Diff @@
## main #1156 +/- ##
==========================================
+ Coverage 82.66% 82.72% +0.05%
==========================================
Files 77 77
Lines 21440 21398 -42
Branches 1914 1402 -512
==========================================
- Hits 17724 17701 -23
- Misses 3593 3621 +28
+ Partials 123 76 -47
Some changes:
- check and make sure that `K8s.testClusterConnection` validates that the context exists in the kube config prior to setting it
- put the `K8s.testClusterConnection` logic in a try/catch block, and if there is a failure, revert the context back to its original value
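The two requested changes could look roughly like this sketch. The `KubeLike` interface and its method names are assumptions for illustration, not Solo's actual `K8` API:

```typescript
// Hypothetical sketch of the requested behavior: validate the context first,
// then revert to the original context if the connection test throws.
// The KubeLike interface is an assumption, not Solo's actual K8 class.
interface KubeLike {
  getCurrentContext(): string;
  setCurrentContext(ctx: string): void;
  contextExists(ctx: string): boolean;
  testClusterConnection(): Promise<void>;
}

async function safeTestClusterConnection(k8: KubeLike, context: string): Promise<boolean> {
  // 1) validate that the context exists in the kube config prior to setting it
  if (!k8.contextExists(context)) return false;

  const original = k8.getCurrentContext();
  try {
    // 2) wrap the connection test in try/catch ...
    k8.setCurrentContext(context);
    await k8.testClusterConnection();
    return true;
  } catch {
    // ... and revert the context back to its original value on failure
    k8.setCurrentContext(original);
    return false;
  }
}
```

Reverting on failure keeps the singleton `k8` pointing at a usable context, so cleanup steps such as releasing the lease can still reach the cluster.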
In this case:

npm run solo -- deployment create -n jeromy --email [email protected] --deployment-clusters solo-e2e

the cluster `solo-e2e` is what solo will use as an alias to map to the context `kind-solo-e2e`, which I supplied when it prompted me.
The local-config.yaml looks good:
❯ cat local-config.yaml
userEmailAddress: [email protected]
deployments:
jeromy:
clusters:
- solo-e2e
currentDeploymentName: jeromy
clusterContextMapping:
solo-e2e: kind-solo-e2e
Currently, `testClusterConnection` fails because `solo-e2e` isn't a valid context, but it has already updated the kube current context. So, when the program attempts to exit, the lease release fails because `k8` is a singleton and is still pointing to context `solo-e2e`.
✔ Initialize
✔ Acquire lease - lease acquired successfully, attempt: 1/10
↓ Prompt local configuration
❯ Validate cluster connections
✖ No active cluster!
◼ Create remote config
*********************************** ERROR *****************************************
failed to read existing leases, unexpected server response of '500' received
***********************************************************************************
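The alias-to-context lookup described above could be sketched as follows, assuming the `clusterContextMapping` shape shown in local-config.yaml (the function name is hypothetical):

```typescript
// Hypothetical sketch: resolve a cluster alias (e.g. "solo-e2e") to the real
// kube context (e.g. "kind-solo-e2e") before switching the current context,
// so the connection test never receives a bare alias.
type LocalConfig = {
  clusterContextMapping: Record<string, string>;
};

function resolveContext(config: LocalConfig, cluster: string): string | undefined {
  return config.clusterContextMapping[cluster];
}
```

With the mapping above, resolving `"solo-e2e"` yields `"kind-solo-e2e"`; an unmapped alias yields `undefined`, which the caller can treat as "no valid context" instead of setting a bad current context.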
…cy-on-context-to-cluster-flags-in-solo-deployment-create # Conflicts: # src/commands/deployment.ts
Signed-off-by: instamenta <[email protected]>
…cy-on-context-to-cluster-flags-in-solo-deployment-create # Conflicts: # src/commands/deployment.ts # src/core/k8.ts
I think maybe something got broken during the merge conflict resolution, or I missed something, or I started doing something wrong. I'm now getting an error when I try to do this:
❯ kind delete cluster -n solo
Deleting cluster "solo" ...
Deleted nodes: ["solo-control-plane"]
❯ kind create cluster -n solo
Creating cluster "solo" ...
✓ Ensuring node image (kindest/node:v1.27.3) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-solo"
You can now use your cluster with:
kubectl cluster-info --context kind-solo
Have a nice day! 👋
❯ rm -Rf ~/.solo
❯ npm run solo -- init
> @hashgraph/[email protected] solo
> node --no-deprecation --no-warnings dist/solo.js init
******************************* Solo *********************************************
Version : 0.33.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
**********************************************************************************
✔ Setup home directory and cache
✔ Check dependencies [3s]
✔ Check dependency: helm [OS: darwin, Release: 23.6.0, Arch: arm64] [3s]
✔ Setup chart manager [1s]
✔ Copy templates in '/Users/user/.solo/cache'
***************************************************************************************
Note: solo stores various artifacts (config, logs, keys etc.) in its home directory: /Users/user/.solo
If a full reset is needed, delete the directory or relevant sub-directories before running 'solo init'.
***************************************************************************************
❯ npm run solo -- deployment create -n jeromy-solo --deployment-clusters kind-solo
> @hashgraph/[email protected] solo
> node --no-deprecation --no-warnings dist/solo.js deployment create -n jeromy-solo --deployment-clusters kind-solo
******************************* Solo *********************************************
Version : 0.33.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Kubernetes Namespace : jeromy-solo
**********************************************************************************
✔ Initialize
✔ Setup home directory
✔ Prompt local configuration [6s]
✖ Context kind-solo is not valid for cluster
◼ Validate context
◼ Update local configuration
◼ Validate cluster connections
◼ Create remoteConfig in clusters
*********************************** ERROR *****************************************
Error installing chart solo-deployment
***********************************************************************************
❯ cat ~/.solo/cache/local-config.yaml
userEmailAddress: [email protected]
deployments:
jeromy-solo:
clusters:
- kind-solo
currentDeploymentName: jeromy-solo
clusterContextMapping:
kind-solo: kind-solo
…cy-on-context-to-cluster-flags-in-solo-deployment-create # Conflicts: # src/commands/deployment.ts
Signed-off-by: instamenta <[email protected]>
…cluster select Signed-off-by: instamenta <[email protected]>
Description

Removes the `--context-cluster` flag and its usages, replacing them with data provided from the local config's `deployments.clusters`.

Related Issues
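A sketch of where the cluster list now comes from, assuming the local-config.yaml shape shown earlier in this thread (the interface and function names are hypothetical, not Solo's actual code):

```typescript
// Hypothetical sketch: with --context-cluster removed, the clusters for the
// current deployment are read from local config's deployments.clusters.
interface LocalConfig {
  currentDeploymentName: string;
  deployments: Record<string, { clusters: string[] }>;
}

function getDeploymentClusters(config: LocalConfig): string[] {
  const deployment = config.deployments[config.currentDeploymentName];
  return deployment ? deployment.clusters : [];
}
```

For the sample config in this thread (`currentDeploymentName: jeromy`, clusters `[solo-e2e]`), this returns `["solo-e2e"]` without any extra flag on the command line.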