vl3-ipam service without CLUSTER-IP in the interdomain usecases floating_vl3-basic #12589

Open
anselmobattisti opened this issue Dec 4, 2024 · 0 comments


Expected Behavior

After following the use case

https://github.com/networkservicemesh/deployments-k8s/tree/main/examples/interdomain/usecases/floating_vl3-basic

and installing the registry and the vl3-ipam components in cluster3, followed by the components in cluster1, the NSC1 (alpine pod) is expected to initialize. Instead, it never finishes initializing.
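The stuck state can be seen with a plain pod listing; the namespace and context below are the ones used by the example (this check is just for reference, it is not part of the example scripts):

kubectl get pods -n ns-floating-vl3-basic --context=kind-cluster1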

Failure Information (for bugs)

The logs of the cmd-nsc-init container show:

kubectl logs alpine -n ns-floating-vl3-basic -c cmd-nsc-init --context=kind-cluster1
Dec  4 19:17:05.724 [ERRO] [id:alpine-952967c2-cd5b-46ef-ae72-b6002fc47089-0] [type:networkService] (4.4)      Error returned from api/pkg/api/networkservice/networkServiceClient.Request: rpc error: code = Unknown desc = 0. An error during select forwawrder forwarder-vpp-hfpxl --> Error returned from sdk/pkg/networkservice/common/connect/connectClient.Request: rpc error: code = Unknown desc = 0. An error during select endpoint [email protected] --> Error returned from sdk/pkg/networkservice/common/connect/connectClient.Request: rpc error: code = Unknown desc = DNS address is initializing: cannot support any of the requested mechanism: all candidates have failed: all forwarders have failed: cannot support any of the requested mechanism
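The error shows the request failing along the NSC → forwarder → endpoint path, with the endpoint side still reporting "DNS address is initializing" and all forwarders failing. To narrow this down it helps to look at the logs of the components on that path; the forwarder pod name below is taken from the error message above, and the NSE label selector and namespaces are assumptions based on the standard layout of this example:

kubectl logs forwarder-vpp-hfpxl -n nsm-system --context=kind-cluster1
kubectl logs -l app=nse-vl3-vpp -n ns-floating-vl3-basic --context=kind-cluster1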

Solving the Problem

After debugging the environment, I identified that in cluster3 the vl3-ipam service does not have an EXTERNAL-IP associated:

kubectl get Service -n ns-floating-vl3-basic --context=kind-cluster3
NAME       TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
vl3-ipam   LoadBalancer   10.96.13.218                 5006:30397/TCP   14m
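In a kind cluster, a LoadBalancer Service only receives an EXTERNAL-IP if a load-balancer implementation (for example MetalLB, which the kind-based interdomain setup typically relies on) is installed and healthy. Checking the Service events and the load-balancer pods is a reasonable next step; the metallb-system namespace below is the default MetalLB namespace and is assumed here:

kubectl describe service vl3-ipam -n ns-floating-vl3-basic --context=kind-cluster3
kubectl get pods -n metallb-system --context=kind-cluster3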

Setting the Service type to LoadBalancer is supposed to be handled by this patch file:

https://github.com/networkservicemesh/deployments-k8s/blob/main/examples/interdomain/usecases/floating_vl3-basic/cluster3/patch-ipam-service.yaml

[screenshot of patch-ipam-service.yaml]
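The exact contents of that patch are in the linked file (shown in the screenshot above). As a rough illustration only, a strategic-merge patch of this kind usually just overrides the Service type, along the lines of:

apiVersion: v1
kind: Service
metadata:
  name: vl3-ipam
spec:
  type: LoadBalancer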

However, for some reason, even with this patch applied, the load balancer was not invoked and no EXTERNAL-IP was assigned.

To solve the problem, I needed to create the vl3-ipam Service already configured with type LoadBalancer:

---
apiVersion: v1
kind: Service
metadata:
  name: vl3-ipam
spec:
  selector:
    app: vl3-ipam
  ports:
    - name: vl3-ipam
      protocol: TCP
      port: 5006
      targetPort: 5006
  type: LoadBalancer
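For completeness, the manifest above can be applied to cluster3 with something like the following (the file name here is arbitrary):

kubectl apply -f vl3-ipam-service.yaml -n ns-floating-vl3-basic --context=kind-cluster3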

With that, the cmd-nsc-init container in NSC1 was able to connect to the vl3-ipam service running in cluster3 and obtain an IP, thus creating the new interface (NIC nsm1) in the alpine pod.
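The fix can be verified by checking that the Service now has an EXTERNAL-IP and that the nsm interface shows up inside the alpine pod:

kubectl get service vl3-ipam -n ns-floating-vl3-basic --context=kind-cluster3
kubectl exec alpine -n ns-floating-vl3-basic --context=kind-cluster1 -- ip addr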

Info About the Environment

=================
CREATING CLUSTERS
=================
Creating cluster "cluster1" ...
 ✓ Ensuring node image (kindest/node:v1.31.2) 🖼
 ✓ Preparing nodes 📦 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
 ✓ Joining worker nodes 🚜 
Set kubectl context to "kind-cluster1"
You can now use your cluster with:

kubectl cluster-info --context kind-cluster1

Have a nice day! 👋
+==============================+
| Cluster cluster1 is ready :) |
+==============================+
Creating cluster "cluster2" ...
 ✓ Ensuring node image (kindest/node:v1.31.2) 🖼
 ✓ Preparing nodes 📦 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
 ✓ Joining worker nodes 🚜 
Set kubectl context to "kind-cluster2"
You can now use your cluster with:

kubectl cluster-info --context kind-cluster2

Thanks for using kind! 😊
+==============================+
| Cluster cluster2 is ready :) |
+==============================+
Creating cluster "cluster3" ...
 ✓ Ensuring node image (kindest/node:v1.31.2) 🖼
 ✓ Preparing nodes 📦 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
 ✓ Joining worker nodes 🚜 
Set kubectl context to "kind-cluster3"
You can now use your cluster with:

kubectl cluster-info --context kind-cluster3

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
+==============================+
| Cluster cluster3 is ready :) |
+==============================+