Calico allocate ip 10.254.0.0 to a pod #3825

Closed · pfwang80s opened this issue Jul 23, 2020 · 7 comments

@pfwang80s

10.254.0.0 is an invalid IP address.

Expected Behavior

A valid IP address is allocated to each pod.

Current Behavior

The invalid IP 10.254.0.0 is allocated to a pod.


Steps to Reproduce (for bugs)

  1. Install Kubernetes with kubeadm:

kubeadm init --pod-network-cidr=10.128.0.0/9 --service-cidr=10.96.0.0/24 --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version=v1.18.3

  2. Install Calico: kubectl create -f https://docs.projectcalico.org/manifests/calico.yaml
  3. Join worker nodes with kubeadm join, then label the nodes rack=0/1/2/3 respectively.
  4. Delete the default IPv4 pool (example calicoctl commands follow the pool manifests below).
  5. Create new pools:
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: core-ippool
spec:
  blockSize: 26
  cidr: 10.128.1.0/24
  ipipMode: Always
  natOutgoing: true
  vxlanMode: Never
---
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: test-ippool
spec:
  blockSize: 24
  cidr: 10.254.0.0/24
  ipipMode: Never
  natOutgoing: false
  nodeSelector: rack == '0'
---
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: test-ippool2
spec:
  blockSize: 24
  cidr: 10.254.1.0/24
  ipipMode: Never
  natOutgoing: false
  nodeSelector: rack == '1'
---
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: test-ippool3
spec:
  blockSize: 24
  cidr: 10.254.2.0/24
  ipipMode: Never
  natOutgoing: false
  nodeSelector: rack == '2'
---
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: test-ippool1
spec:
  blockSize: 24
  cidr: 10.254.3.0/24
  ipipMode: Never
  natOutgoing: false
  nodeSelector: rack == '3'
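A minimal sketch of steps 4 and 5 with calicoctl, assuming the manifests above are saved as pools.yaml and the default pool has the usual name default-ipv4-ippool created by calico.yaml:

# remove the default pool so new allocations stop drawing from it
calicoctl delete ippool default-ipv4-ippool

# create the four rack-scoped pools
calicoctl apply -f pools.yaml

# verify the pools and their node selectors
calicoctl get ippool -o wide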
  6. Create a deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-ext
spec:
  replicas: 1
  selector:
     matchLabels:
       app: busybox-ext
  template:
    metadata:
      labels:
        app: busybox-ext
      annotations:
        "cni.projectcalico.org/ipv4pools": "[\"test-ippool\"]"
    spec:
      containers:
        - name: test-container
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/busybox
          command: [ "sh", "-c"]
          args:
          - while true; do
              echo -en '\n';
              printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE;
              printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT;
              sleep 100;
            done;
          env:
            - name: MY_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: MY_POD_SERVICE_ACCOUNT
              valueFrom:
                fieldRef:
                  fieldPath: spec.serviceAccountName
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app: busybox-ext
  7. Check the IP address of the pod:
 [root@k8s-master ~]# kubectl get pods -o wide
 NAME                           READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
 busybox-ext-6854978799-tjqwn   1/1     Running   0          6s    10.254.0.0   k8s-work1   <none>           <none>
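To confirm how Calico accounted for the address, calicoctl can query IPAM directly; a sketch, assuming the pod IP shown above:

# report whether 10.254.0.0 is in use and which pool/block it came from
calicoctl ipam show --ip=10.254.0.0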

Your Environment

[root@k8s-master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:47:41Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:43:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

[root@k8s-master ~]# cat /etc/centos-release
CentOS Linux release 7.8.2003 (Core)

[root@k8s-master ~]# calicoctl version
Client Version:    v3.15.0
Git commit:        7987fc57
Cluster Version:   v3.15.1
Cluster Type:      k8s,bgp,kubeadm,kdd

@fasaxc (Member) commented Aug 3, 2020

> 10.254.0.0 is an invalid IP address

No it's not; why do you say that?

@pfwang80s (Author) commented

The network address and the broadcast address can't be used as host addresses.

The network address is the first address in the network and identifies the network segment. All IP addresses that share the same network-address part are in the same segment. Because it is the first address in the network, the network address cannot be an arbitrary IP address: it must match the network mask in binary, so the trailing bits of the network address must be zero wherever the mask has zeros.

Our CIDR is set to 10.254.0.0/24, so 10.254.0.0 and 10.254.0.255 are invalid.
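For reference, the Red Hat ipcalc shipped with CentOS 7 (the OS used here) computes the two reserved addresses of that subnet; a minimal sketch:

[root@k8s-master ~]# ipcalc --network --broadcast 10.254.0.0/24
NETWORK=10.254.0.0
BROADCAST=10.254.0.255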

@fasaxc (Member) commented Aug 4, 2020

Calico does /32 routing, so when you see a /24 or /26 route from Calico, it's actually an aggregate route for a group of /32s. /32s don't have broadcast or network addresses, and we haven't come across any modern routers that can't handle that approach. Does it actually not work with your router?
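A minimal illustration of the resulting routing table on a node (interface name hypothetical, but the shape matches the routes Felix/BIRD typically install):

[root@k8s-work1 ~]# ip route | grep 10.254.0
blackhole 10.254.0.0/24 proto bird
10.254.0.0 dev cali1a2b3c4d5e6 scope link

The per-pod route is a /32 (no mask suffix shown), so 10.254.0.0 is reachable as an ordinary host route rather than being treated as a network address.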

@caseydavenport (Member) commented
@pfwang80s (Author) commented

Thank you :)

@zhutong196 commented

Hello, I followed this procedure to set up Calico on an existing k8s cluster. Does kubectl create -f https://docs.projectcalico.org/manifests/calico.yaml need no changes to the file? Deploying it this way doesn't work for me: when a pod starts, it reports that the CNI is unauthorized.

@song-jiang (Member) commented

@zhutongcloud CNI unauthorized? Could you share the exact error message?
