
Why you can't create two containers that use the same port in the same Pod

Preface:

Through a simple experiment, this post shows how to avoid this kind of error, how to troubleshoot it, and the overall approach to solving it.

Enough talk; let's jump straight into the experiment.

Create a YAML file:

[root@k8s-master ~]# cat pod-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  labels:
    app: test
    version: v2
spec:
  containers:
    - name: nginx
      imagePullPolicy: IfNotPresent
      image: nginx:1.17
    - name: nginx-1
      image: nginx:1.17
      imagePullPolicy: IfNotPresent

Create the Pod:

[root@k8s-master ~]# kubectl apply -f pod-1.yaml
pod/test-pod created

At first, both containers in the Pod we just created (test-pod) appear to have started successfully:

[root@k8s-master ~]# kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
myapp-pod                    2/2     Running   0          50m
mysql-rc-jjsmp               1/1     Running   2          48d
mysql-rc-lc22m               1/1     Running   2          52d
mysql-rc-nkdrh               1/1     Running   2          48d
nginx-app-756ffb5cc8-52hdh   1/1     Running   0          4d17h
nginx-app-756ffb5cc8-bgcbh   1/1     Running   0          4d17h
nginx-app-756ffb5cc8-smztw   1/1     Running   0          4d17h
test-pod                     2/2     Running   0          4s

When we check the Pod again, we find that one container has already been restarted and the Pod's status is now Error. The two containers are fighting over the same port: all containers in a Pod share a single network stack (the same network namespace), so only one of them can bind to port 80.
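Because the failing nginx-1 keeps restarting while the first nginx stays up, you can observe the shared network namespace directly by exec'ing into the healthy container and reading the kernel's socket table; port 80 shows up as hex 0050 in /proc/net/tcp. This quick check is my own addition to the walkthrough, not part of the original post:

[root@k8s-master ~]# kubectl exec test-pod -c nginx -- grep :0050 /proc/net/tcp
# A line with local address 00000000:0050 in state 0A (LISTEN) is the socket
# already bound by the first nginx. nginx-1 sees exactly the same socket table,
# which is why its own bind() on port 80 fails.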

[root@k8s-master ~]# kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
myapp-pod                    2/2     Running   0          50m
mysql-rc-jjsmp               1/1     Running   2          48d
mysql-rc-lc22m               1/1     Running   2          52d
mysql-rc-nkdrh               1/1     Running   2          48d
nginx-app-756ffb5cc8-52hdh   1/1     Running   0          4d17h
nginx-app-756ffb5cc8-bgcbh   1/1     Running   0          4d17h
nginx-app-756ffb5cc8-smztw   1/1     Running   0          4d17h
test-pod                     1/2     Error     1          8s

Looking at the describe output for the Pod, we can see that the nginx-1 container is the one failing:

[root@k8s-master ~]# kubectl describe pods test-pod
Name:         test-pod
Namespace:    default
Priority:     0
Node:         k8s-node3/42.51.227.116
Start Time:   Sat, 21 Nov 2020 07:38:19 +0000
Labels:       app=test
              version=v2
Annotations:
Status:       Running
IP:           10.244.2.107
IPs:
  IP:  10.244.2.107
Containers:
  nginx:
    Container ID:   docker://06d1d912cd1a2d2875329feb61efba39c351f5fa316588acfc2e0a8d1d566557
    Image:          nginx:1.17
    Image ID:       docker-pullable://nginx@sha256:6fff55753e3b34e36e24e37039ee9eae1fe38a6420d8ae16ef37c92d1eb26699
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 21 Nov 2020 07:38:20 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hvlcv (ro)
  nginx-1:
    Container ID:   docker://78766f6520e841c3b7a9cc8e2c6bec3f50acce142df606e85e14ab85aa6465ef
    Image:          nginx:1.17
    Image ID:       docker-pullable://nginx@sha256:6fff55753e3b34e36e24e37039ee9eae1fe38a6420d8ae16ef37c92d1eb26699
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 21 Nov 2020 07:38:38 +0000
      Finished:     Sat, 21 Nov 2020 07:38:41 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 21 Nov 2020 07:38:24 +0000
      Finished:     Sat, 21 Nov 2020 07:38:26 +0000
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hvlcv (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-hvlcv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hvlcv
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From                Message
  ----     ------     ----               ----                -------
  Normal   Scheduled  <unknown>          default-scheduler   Successfully assigned default/test-pod to k8s-node3
  Normal   Pulled     29s                kubelet, k8s-node3  Container image "nginx:1.17" already present on machine
  Normal   Created    29s                kubelet, k8s-node3  Created container nginx
  Normal   Started    29s                kubelet, k8s-node3  Started container nginx
  Normal   Pulled     11s (x3 over 29s)  kubelet, k8s-node3  Container image "nginx:1.17" already present on machine
  Normal   Created    11s (x3 over 29s)  kubelet, k8s-node3  Created container nginx-1
  Normal   Started    11s (x3 over 29s)  kubelet, k8s-node3  Started container nginx-1
  Warning  BackOff    8s (x2 over 22s)   kubelet, k8s-node3  Back-off restarting failed container
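If you only need the per-container status rather than the full describe output, a jsonpath query is a handy shortcut. This is an optional extra I'm adding here, not part of the original steps; it only uses kubectl's standard jsonpath range/end syntax:

# Print each container's name, Ready flag and restart count on its own line:
[root@k8s-master ~]# kubectl get pod test-pod -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.ready}{"\t"}{.restartCount}{"\n"}{end}'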

Checking the logs of the nginx-1 container in this Pod confirms that the container fails because port 80 is indeed already in use:

[root@k8s-master ~]# kubectl logs test-pod -c nginx-1
2020/11/21 07:39:55 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
2020/11/21 07:39:55 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
2020/11/21 07:39:55 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
2020/11/21 07:39:55 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
2020/11/21 07:39:55 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
2020/11/21 07:39:55 [emerg] 1#1: still could not bind()
nginx: [emerg] still could not bind()
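The walkthrough stops at the diagnosis, but the fix follows directly from it: make the two containers bind to different ports. Below is a minimal sketch of one way to do that, not from the original article; the file name pod-2.yaml, the ConfigMap name nginx-8080-conf, and port 8080 are my own illustrative choices. It overrides the second container's default nginx server config so it listens on 8080 instead of 80:

[root@k8s-master ~]# cat pod-2.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-8080-conf
data:
  # Replaces the image's default.conf so this nginx instance listens on 8080.
  default.conf: |
    server {
        listen 8080;
        location / {
            root  /usr/share/nginx/html;
            index index.html;
        }
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  labels:
    app: test
    version: v2
spec:
  containers:
    - name: nginx
      image: nginx:1.17
      imagePullPolicy: IfNotPresent        # still binds to port 80
    - name: nginx-1
      image: nginx:1.17
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: conf
          mountPath: /etc/nginx/conf.d     # mounts the 8080 config over the default one
  volumes:
    - name: conf
      configMap:
        name: nginx-8080-conf

Because the two containers still share the Pod's network namespace, each can then reach the other over localhost: the first nginx on 127.0.0.1:80 and the second on 127.0.0.1:8080.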

Original source: myit -> https://myit.icu/index.php/archives/879/
