Unable to connect to pod

Hi, I am getting the error below. I am new to this technology and have no clue where to start. Could someone help me troubleshoot this issue?

[root@localhost ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
busybox                  1/1     Running   15         3d15h   10.244.1.12   gitlab   <none>           <none>
nginx                    1/1     Running   1          4d      10.244.1.13   gitlab   <none>           <none>
nginx-75bd58f5c7-m2qbp   1/1     Running   1          3d15h   10.244.1.9    gitlab   <none>           <none>
nginx-75bd58f5c7-w552d   1/1     Running   1          3d15h   10.244.1.10   gitlab   <none>           <none>
[root@localhost ~]# kubectl exec busybox -- curl 10.244.1.12
error: unable to upgrade connection: pod does not exist

What command/args did you use to create the busybox pod?

Can you get sh access in it? kubectl exec -it busybox -- sh

No, it's throwing the same error:
error: unable to upgrade connection: pod does not exist

What command did you start busybox with?
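
In the meantime, the pod's recent events usually point at the cause; they show up at the bottom of:

kubectl describe pod busybox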

open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 70m kubelet, gitlab Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "4fd4687c777a2d40892d8bfce2b239c7fee3ab02114246a3599f35daf2737108" network for pod "busybox": NetworkPlugin cni failed to set up pod "busybox_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 69m kubelet, gitlab Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c9ed9d28c7009fe50a1b0da62730628a3eaacdd2ed41ed03911bd452b439b2d7" network for pod "busybox": NetworkPlugin cni failed to set up pod "busybox_default" network: open /run/flannel/subnet.env: no such file or directory
Normal SandboxChanged 69m (x3 over 70m) kubelet, gitlab Pod sandbox changed, it will be killed and re-created.
Normal Pulled 35m (x3 over 68m) kubelet, gitlab Container image "radial/busyboxplus:curl" already present on machine
Normal Created 35m (x3 over 68m) kubelet, gitlab Created container
Normal Started 35m (x3 over 68m) kubelet, gitlab Started container
Warning FailedCreatePodSandBox 17m kubelet, gitlab Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "27b43394962b403529a230108830b7b0367ea7440b56b566ab72ae0f5d002ec2" network for pod "busybox": NetworkPlugin cni failed to set up pod "busybox_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 17m kubelet, gitlab Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "196e15c700d661a481fe934b893650835e4743a3b73c87719f656beab70f53dc" network for pod "busybox": NetworkPlugin cni failed to set up pod "busybox_default" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 16m kubelet, gitlab Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "6170c35e4b25e29e99d72aa4c9509987aaf8ed90f21ccd0b0a24b735965212cd" network for pod "busybox": NetworkPlugin cni failed to set up pod "busybox_default" network: open /run/flannel/subnet.env: no such file or directory
Normal SandboxChanged 16m (x4 over 18m) kubelet, gitlab Pod sandbox changed, it will be killed and re-created.
Normal Pulled 16m kubelet, gitlab Container image "radial/busyboxplus:curl" already present on machine
Normal Created 16m kubelet, gitlab Created container
Normal Started 16m kubelet, gitlab Started container
[root@localhost ~]# kubectl exec -it busybox -- sh
error: unable to upgrade connection: pod does not exist
[root@localhost ~]#

I just copied the logs for you. Thanks for getting in touch.

Can you run the following command to see if you can get a connection to a pod at all?
kubectl run -it --rm --restart=Never busybox-test --image=busybox sh

My guess is that the pod came up for a bit and then went down. I could debug more if I saw how it was created. Did you create the busybox pod via a YAML file or through the command line?
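
You can also dump the pod's spec as the cluster sees it, which will show the command/args it was created with:

kubectl get pod busybox -o yaml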

I'm now getting a different error:
[root@localhost ~]# kubectl run -it --rm --restart=Never busybox-test --image=busybox sh
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: unable to upgrade connection: pod does not exist
pod "busybox-test" deleted
Error from server (NotFound): the server could not find the requested resource ( pods/log busybox-test)

I created it through a YAML file.

How is your cluster set up?

Also, is it still showing that the first busybox pod is running?

Yes.
[root@localhost ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
busybox                  1/1     Running   22         3d17h   10.244.1.25   gitlab   <none>           <none>
nginx                    1/1     Running   3          4d2h    10.244.1.20   gitlab   <none>           <none>
nginx-75bd58f5c7-m2qbp   1/1     Running   3          3d17h   10.244.1.23   gitlab   <none>           <none>
nginx-75bd58f5c7-w552d   1/1     Running   3          3d17h   10.244.1.21   gitlab   <none>           <none>

I'm thinking something else is going on. Looking at the logs you provided earlier, it looks like the container network interface (CNI) is giving you some issues, which might be related.
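
The events say /run/flannel/subnet.env is missing; flannel writes that file on each node when its daemon pod starts, so it's worth checking whether flannel is actually running there. Just a sketch (the exact pod names depend on how flannel was installed):

kubectl -n kube-system get pods -o wide | grep flannel

and, on the gitlab node itself:

cat /run/flannel/subnet.env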

Did you set up the cluster yourself or are you using a hosted/managed solution?

I am using my own Vagrant VM deployed on VirtualBox.

Cool. Is the kubelet or the API server giving any weird errors?

How do I check this?

Assuming you are running a distro with systemd installed:

journalctl -u kubelet

You probably only need the last 100 or so lines:

journalctl -u kubelet | tail -n 100
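
You can also check whether the kubelet service itself is up and healthy:

systemctl status kubelet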

Have you checked:

https://stackoverflow.com/questions/51154911/kubectl-exec-results-in-error-unable-to-upgrade-connection-pod-does-not-exi

And

https://github.com/kubernetes/kubernetes/issues/63702

Most likely you need to specify the proper IP/network interface.
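
On Vagrant/VirtualBox the first interface is usually the NAT one, so the kubelet often registers with the wrong IP. You can check which IP the node registered with:

kubectl get nodes -o wide

If INTERNAL-IP is not the host-only address, pin the kubelet to it by adding --node-ip=<the node's host-only IP> to KUBELET_EXTRA_ARGS in /etc/sysconfig/kubelet (on Debian/Ubuntu it's usually /etc/default/kubelet), then:

systemctl daemon-reload
systemctl restart kubelet

This is just a sketch for a kubeadm install; the exact drop-in file can differ per distro.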

Let me know how it goes :slight_smile:

Nice find with those links :grinning:

I have gone through those posts, Rata.
Here is the command I used to set up the master:
kubeadm init --apiserver-advertise-address=192.168.56.103 --pod-network-cidr=10.244.0.0/16

[root@localhost ~]# ifconfig -a
datapath: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1376
inet6 fe80::aca1:67ff:fe75:8f45 prefixlen 64 scopeid 0x20<link>
ether ae:a1:67:75:8f:45 txqueuelen 0 (Ethernet)
RX packets 7 bytes 530 (530.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8 bytes 648 (648.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:1e:44:72:de txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

dummy0: flags=130<BROADCAST,NOARP> mtu 1500
ether 22:ce:79:b7:61:63 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.56.103 netmask 255.255.255.0 broadcast 192.168.56.255
inet6 fe80::a00:27ff:fe71:df7 prefixlen 64 scopeid 0x20<link>
ether 08:00:27:71:0d:f7 txqueuelen 1000 (Ethernet)
RX packets 34750 bytes 3061888 (2.9 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 43242 bytes 35408226 (33.7 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.3.15 netmask 255.255.255.0 broadcast 10.0.3.255
inet6 fe80::10be:ab58:3fed:2c4d prefixlen 64 scopeid 0x20<link>
ether 08:00:27:c4:94:a6 txqueuelen 1000 (Ethernet)
RX packets 32 bytes 7633 (7.4 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 51 bytes 5428 (5.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.244.0.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::3864:eeff:fe99:43bc prefixlen 64 scopeid 0x20<link>
ether 3a:64:ee:99:43:bc txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 8 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 830556 bytes 219707892 (209.5 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 830556 bytes 219707892 (209.5 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vethwe-bridge: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1376
inet6 fe80::dc6a:21ff:fe4a:688e prefixlen 64 scopeid 0x20<link>
ether de:6a:21:4a:68:8e txqueuelen 0 (Ethernet)
RX packets 10 bytes 788 (788.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 16 bytes 1296 (1.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vethwe-datapath: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1376
inet6 fe80::e82a:7fff:fed2:95f prefixlen 64 scopeid 0x20<link>
ether ea:2a:7f:d2:09:5f txqueuelen 0 (Ethernet)
RX packets 830556 bytes 219707892 (209.5 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 830556 bytes 219707892 (209.5 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

weave: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1376
inet6 fe80::709d:3aff:fef1:c6cd prefixlen 64 scopeid 0x20<link>
ether 72:9d:3a:f1:c6:cd txqueuelen 0 (Ethernet)
RX packets 7 bytes 432 (432.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8 bytes 648 (648.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

I don't know what to do according to those posts.

I am unable to paste my logs here.