
gke pods not able to ping the gce's when using calico #2813

Closed · ravitejb opened this issue Aug 22, 2019 · 4 comments
GKE pods are not able to ping or connect to GCE instances inside the same VPC network when the cluster is created with network policy enabled.

Expected Behavior

After allowing all ingress and egress traffic, pods should be able to connect to or ping the GCE instances or IPs.

Current Behavior

Pods are unable to connect to GCE instances in the same VPC network, but can connect to any GCE instances or public IPs outside that VPC network.

Possible Solution

Steps to Reproduce (for bugs)

  1. Create a GKE cluster with network policy enabled.
  2. Deploy an nginx pod and try connecting from it to any IP in the same VPC network; it fails. It does, however, successfully connect to IPs outside the VPC network (a command sketch follows this list).
  3. Created another cluster with network policy disabled in the same VPC network; from that cluster, connections to GCE instances or IPs in the same VPC network succeed (so the issue is not related to firewall rules).
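
A minimal reproduction sketch, assuming gcloud and kubectl access; the cluster name, zone, pod name, and target IP below are placeholders chosen for illustration, not values from this report:

  # Create a GKE cluster with network policy (Calico) enabled.
  gcloud container clusters create test-cluster --zone us-central1-a --enable-network-policy

  # Run a pod with ping available (alpine ships busybox ping).
  kubectl run pingtest --image=alpine --restart=Never -- sleep 3600

  # Replace 10.128.0.5 with the internal IP of a GCE instance in the same VPC.
  kubectl exec pingtest -- ping -c 3 10.128.0.5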

Your Environment

  • Calico version - gcr.io/projectcalico-org/node:v3.2.7
  • Orchestrator version - kubernetes
  • Operating System and version:
lmm (Contributor) commented Aug 26, 2019:

@ravitejb did you apply a network policy to the GKE cluster that had network policy enabled?

On a GKE cluster, out of the box, pods should be able to reach other pods without issue.
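
For reference, a quick way to check whether any Kubernetes network policies are currently applied, assuming kubectl access to the cluster:

  # Lists NetworkPolicy objects in every namespace; an empty result means
  # no Kubernetes network policies are in effect.
  kubectl get networkpolicy --all-namespaces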

ravitejb (Author) commented:

@lmm I've tried the three scenarios below (see the policy sketch after this list for Case 2).
Case 1: Just enabled network policy and tried pinging the GCE instances inside the VPC - failed.
Case 2: Enabled network policy, applied allow-all ingress and egress, and tried pinging the GCE instances inside the VPC - failed.
Case 3: Disabled network policy and tried pinging the GCE instances inside the VPC - succeeded.
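
For Case 2, an allow-all policy would look something like this sketch (the exact manifest applied is not shown in the thread). Note that a NetworkPolicy is namespaced, so it only covers pods in the namespace it is applied to:

  # Allow-all sketch: selects every pod in the namespace and permits
  # all ingress and egress traffic.
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-all
  spec:
    podSelector: {}
    policyTypes:
    - Ingress
    - Egress
    ingress:
    - {}
    egress:
    - {}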

lmm (Contributor) commented Oct 9, 2019:

Hey @ravitejb, sorry for the delay. Did you get this figured out? I just tried your first scenario and I am able to ping a node from within a pod:

laurence@osxt ~
❯ kubectl get po -owide
NAME                     READY   STATUS    RESTARTS   AGE     IP          NODE                                       NOMINATED NODE   READINESS GATES
alpine-cc867f6f8-mj9xh   1/1     Running   0          7m59s   10.28.1.6   gke-laurence1-default-pool-7aea066a-k2m5   <none>           <none>

laurence@osxt ~
❯ kubectl get nodes -owide
NAME                                       STATUS   ROLES    AGE   VERSION          INTERNAL-IP   EXTERNAL-IP     OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
gke-laurence1-default-pool-7aea066a-k2m5   Ready    <none>   24m   v1.13.7-gke.24   10.128.0.92   34.70.213.89    Ubuntu 18.04.2 LTS   4.15.0-1034-gke   docker://18.9.3
gke-laurence1-default-pool-7aea066a-mr75   Ready    <none>   24m   v1.13.7-gke.24   10.128.0.91   35.192.128.20   Ubuntu 18.04.2 LTS   4.15.0-1034-gke   docker://18.9.3
gke-laurence1-default-pool-7aea066a-wsqw   Ready    <none>   24m   v1.13.7-gke.24   10.128.0.72   35.184.29.14    Ubuntu 18.04.2 LTS   4.15.0-1034-gke   docker://18.9.3

laurence@osxt ~
❯ kubectl exec alpine-cc867f6f8-mj9xh ping 10.128.0.92
PING 10.128.0.92 (10.128.0.92): 56 data bytes
64 bytes from 10.128.0.92: seq=0 ttl=64 time=0.111 ms
64 bytes from 10.128.0.92: seq=1 ttl=64 time=0.115 ms

rafaelvanoni (Contributor) commented:
Closing due to inactivity. If this is still an issue, please add a comment and we'll re-open it.
