Dec 4 19:17:05.724 [ERRO] [id:alpine-952967c2-cd5b-46ef-ae72-b6002fc47089-0] [type:networkService] (4.4) Error returned from api/pkg/api/networkservice/networkServiceClient.Request: rpc error: code = Unknown desc = 0. An error during select forwawrder forwarder-vpp-hfpxl --> Error returned from sdk/pkg/networkservice/common/connect/connectClient.Request: rpc error: code = Unknown desc = 0. An error during select endpoint [email protected] --> Error returned from sdk/pkg/networkservice/common/connect/connectClient.Request: rpc error: code = Unknown desc = DNS address is initializing: cannot support any of the requested mechanism: all candidates have failed: all forwarders have failed: cannot support any of the requested mechanism
Solving the Problem
After debugging the environment, I identified that in cluster3 the vl3-ipam service does not have an EXTERNAL-IP associated:
kubectl get service -n ns-floating-vl3-basic --context=kind-cluster3
NAME       TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
vl3-ipam   LoadBalancer   10.96.13.218   <pending>     5006:30397/TCP   14m
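For comparison, a vl3-ipam Service explicitly preconfigured as a LoadBalancer (the workaround described later in this report) might look roughly like the sketch below. Only the name, namespace, and port 5006 come from the kubectl output above; the selector label and targetPort are assumptions and must match the actual vl3-ipam deployment.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vl3-ipam
  namespace: ns-floating-vl3-basic
spec:
  type: LoadBalancer          # request an external IP up front
  selector:
    app: vl3-ipam             # assumed label; must match the vl3-ipam pod
  ports:
    - port: 5006              # service port taken from the kubectl output above
      targetPort: 5006        # assumed to equal the service port
      protocol: TCP
```

Once a load-balancer implementation assigns an address, the EXTERNAL-IP column shows a real IP instead of &lt;pending&gt;.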
Info About the Environment
=================
CREATING CLUSTERS
=================
Creating cluster "cluster1" ...
✓ Ensuring node image (kindest/node:v1.31.2) 🖼
✓ Preparing nodes 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-cluster1"
You can now use your cluster with:
kubectl cluster-info --context kind-cluster1
Have a nice day! 👋
+==============================+
| Cluster cluster1 is ready :) |
+==============================+
Creating cluster "cluster2" ...
✓ Ensuring node image (kindest/node:v1.31.2) 🖼
✓ Preparing nodes 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-cluster2"
You can now use your cluster with:
kubectl cluster-info --context kind-cluster2
Thanks for using kind! 😊
+==============================+
| Cluster cluster2 is ready :) |
+==============================+
Creating cluster "cluster3" ...
✓ Ensuring node image (kindest/node:v1.31.2) 🖼
✓ Preparing nodes 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-cluster3"
You can now use your cluster with:
kubectl cluster-info --context kind-cluster3
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
+==============================+
| Cluster cluster3 is ready :) |
+==============================+
Expected Behavior
After following the use case
https://github.com/networkservicemesh/deployments-k8s/tree/main/examples/interdomain/usecases/floating_vl3-basic
(installing the registry and the vl3-ipam components in cluster3, and then installing the components in cluster1), the NSC1 (alpine pod) is expected to initialize. In this environment it did not.
Failure Information (for bugs)
The logs of the cmd-nsc-init container show the rpc error quoted at the top of this report: "all candidates have failed: all forwarders have failed: cannot support any of the requested mechanism".
Solving the Problem
After debugging the environment, I identified that in cluster3 the vl3-ipam service does not have an EXTERNAL-IP associated (see the kubectl output above). The EXTERNAL-IP assignment should be handled by the file
https://github.com/networkservicemesh/deployments-k8s/blob/main/examples/interdomain/usecases/floating_vl3-basic/cluster3/patch-ipam-service.yaml
however, for some reason, even after applying this file the LoadBalancer was never provisioned. To solve the problem, I created the vl3-ipam service already configured to use type LoadBalancer. With that, the cmd-nsc-init pod in NSC1 was able to connect to the vl3-ipam service running in cluster3, obtain an IP, and create the new interface (NIC nsm1) in the alpine pod.
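I have not verified the exact contents of patch-ipam-service.yaml, but conceptually a patch that converts the existing Service to a LoadBalancer only needs to set spec.type. A minimal strategic-merge sketch (the Service name is taken from the kubectl output above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vl3-ipam
spec:
  type: LoadBalancer
```

A fragment like this could be applied with, for example, `kubectl patch service vl3-ipam -n ns-floating-vl3-basic --patch-file <file>`; note that even with the type set, the EXTERNAL-IP stays pending unless the cluster has a load-balancer implementation able to assign one.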