Kubernetes CNI pod failing with kernel error

I deployed a 4-node RKE k8s cluster on Oracle VirtualBox VMs running CentOS 7.9…

Cluster creation completed successfully, but I can see the errors below in the CNI pod, and there is also an error in the DNS pod…

2023-05-10 19:17:46.747 [INFO][47] status-reporter/watchersyncer.go 130: Sending status update Status=in-sync
2023-05-10 19:17:46.755 [INFO][44] cni-config-monitor/token_watch.go 225: Update of CNI kubeconfig triggered based on elapsed time.
2023-05-10 19:17:46.756 [INFO][44] cni-config-monitor/token_watch.go 279: Wrote updated CNI kubeconfig file. path="/host/etc/cni/net.d/calico-kubeconfig"
2023-05-10 19:17:46.922 [WARNING][45] felix/int_dataplane.go 504: Can't enable XDP acceleration. error=kernel is too old (have: 3.10.0-1160 but want at least: 4.16.0)
2023-05-10 20:36:43.667 [INFO][47] status-reporter/watchercache.go 248: Failed to create watcher ListRoot="/calico/resources/v3/projectcalico.org/caliconodestatuses" error=Get "https://10.43.0.1:443/apis/crd.projectcalico.org/v1/caliconodestatuses?resourceVersion=2845&watch=true": dial tcp 10.43.0.1:443: connect: connection refused performFullResync=false
2023-05-10 20:36:44.166 [INFO][47] status-reporter/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/caliconodestatuses"
2023-05-10 20:36:44.173 [INFO][47] status-reporter/watchercache.go 194: Failed to perform list of current data during resync ListRoot="/calico/resources/v3/projectcalico.org/caliconodestatuses" error=Get "https://10.43.0.1:443/apis/crd.projectcalico.org/v1/caliconodestatuses?limit=500&resourceVersion=2845&resourceVersionMatch=NotOlderThan": dial tcp 10.43.0.1:443: connect: connection refused
2023-05-10 20:36:45.174 [INFO][47] status-reporter/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/caliconodestatuses"
2023-05-10 20:36:45.175 [INFO][47] status-reporter/watchercache.go 194: Failed to perform list of current data during resync ListRoot="/calico/resources/v3/projectcalico.org/caliconodestatuses" error=Get "https://10.43.0.1:443/apis/crd.projectcalico.org/v1/caliconodestatuses?limit=500&resourceVersion=2845&resourceVersionMatch=NotOlderThan": dial tcp 10.43.0.1:443: connect: connection refused
2023-05-10 20:36:46.853 [INFO][47] status-reporter/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/caliconodestatuses"
2023-05-10 20:36:46.977 [INFO][47] status-reporter/watchercache.go 194: Failed to perform list of current data during resync ListRoot="/calico/resources/v3/projectcalico.org/caliconodestatuses" error=Get "https://10.43.0.1:443/apis/crd.projectcalico.org/v1/caliconodestatuses?limit=500&resourceVersion=2845&resourceVersionMatch=NotOlderThan": dial tcp 10.43.0.1:443: connect: connection refused
2023-05-10 20:36:48.817 [INFO][47] status-reporter/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/caliconodestatuses"
2023-05-10 20:36:59.014 [INFO][47] status-reporter/watchercache.go 194: Failed to perform list of current data during resync ListRoot="/calico/resources/v3/projectcalico.org/caliconodestatuses" error=Get "https://10.43.0.1:443/apis/crd.projectcalico.org/v1/caliconodestatuses?limit=500&resourceVersion=2845&resourceVersionMatch=NotOlderThan": net/http: TLS handshake timeout
2023-05-10 20:37:00.020 [INFO][47] status-reporter/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/caliconodestatuses"
2023-05-10 20:37:10.052 [INFO][47] status-reporter/watchercache.go 194: Failed to perform list of current data during resync ListRoot="/calico/resources/v3/projectcalico.org/caliconodestatuses" error=Get calico link : net/http: TLS handshake timeout
2023-05-10 20:37:11.077 [INFO][47] status-reporter/watchercache.go 181: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/caliconodestatuses"
[rke@kube-master rke-k8s]$

Need urgent help.

Can someone help me urgently?

Hello,
I suspect you are running into a problem with the CentOS firewall or SELinux. In your log you get "dial tcp 10.43.0.1:443: connect: connection refused".

Could you please check with curl if you can connect to other nodes on port 443? You can try with ping too.
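For example, something like this from the master (the IP below is just a placeholder for one of your other nodes, and 6443 is the default kube-apiserver port on RKE control-plane nodes):

# basic ICMP reachability
[rke@kube-master ~]$ ping -c 3 192.168.1.239
# raw TCP connectivity check on port 443 (curl can speak telnet:// for this)
[rke@kube-master ~]$ curl -v telnet://192.168.1.239:443
# the 10.43.0.1:443 address in the log is the in-cluster service IP that forwards to the kube-apiserver
[rke@kube-master ~]$ curl -k https://192.168.1.239:6443/healthz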

Juan.

Unable to connect from the master to the other nodes:

[rke@kube-master ~]$ curl 192.168.1.239:443
curl: (7) Failed connect to 192.168.1.239:443; No route to host
[rke@kube-master ~]$
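(Note: "No route to host" on CentOS 7 is often firewalld rejecting the connection, since its default rules reply with icmp-host-prohibited, rather than an actual missing route. A quick way to tell the two apart, where the worker hostname below is just an example:)

# on the master: does a route to the target actually exist?
[rke@kube-master ~]$ ip route get 192.168.1.239
# on the target node: is firewalld running?
[rke@kube-worker ~]$ sudo firewall-cmd --state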

I created a fresh cluster, but it still has many errors:

[rke@kube-master ~]$ kubectl get pods -A
E0511 18:42:54.680292 10853 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0511 18:42:54.801008 10853 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0511 18:42:54.824152 10853 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0511 18:42:54.847629 10853 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
NAMESPACE       NAME                                      READY   STATUS             RESTARTS         AGE
ingress-nginx   ingress-nginx-admission-create-wllvb      0/1     Completed          0                34m
ingress-nginx   nginx-ingress-controller-4wx7n            0/1     CrashLoopBackOff   11 (2m52s ago)   34m
ingress-nginx   nginx-ingress-controller-lk6bn            0/1     Running            10 (34s ago)     34m
ingress-nginx   nginx-ingress-controller-qg2zr            0/1     CrashLoopBackOff   11 (2m28s ago)   34m
kube-system     calico-kube-controllers-85d56898c-4b74r   0/1     CrashLoopBackOff   12 (2m58s ago)   35m
kube-system     canal-4xtkv                               2/2     Running            0                35m
kube-system     canal-dtwvn                               2/2     Running            9 (13m ago)      35m
kube-system     canal-dv2mf                               2/2     Running            0                35m
kube-system     canal-mm2zj                               2/2     Running            2 (14m ago)      35m
kube-system     coredns-autoscaler-74d474f45c-7kg9d       1/1     Running            1 (18m ago)      35m
kube-system     coredns-dfb7f8fd4-thn8b                   0/1     Running            0                35m
kube-system     metrics-server-8f8f896f-9f85f             0/1     CrashLoopBackOff   6 (10s ago)      6m27s
kube-system     metrics-server-c47f7c9bb-w68dx            0/1     CrashLoopBackOff   12 (4m59s ago)   35m
kube-system     rke-coredns-addon-deploy-job-npwzd        0/1     Completed          0                35m
kube-system     rke-ingress-controller-deploy-job-b9x97   0/1     Completed          0                35m
kube-system     rke-metrics-addon-deploy-job-zsbbf        0/1     Completed          0                35m
kube-system     rke-network-plugin-deploy-job-zt82n       0/1     Completed          0                36m

SELinux is disabled on all 4 nodes in the cluster.

I think the problem is your server configuration, not Kubernetes itself. Can you ping your nodes from the master to see if they respond to ICMP?

And if you don't have a route to the host, it's possible that your gateway in CentOS is not configured properly, or that the virtual networks in VirtualBox are misconfigured.
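If firewalld turns out to be the culprit, here is a rough sketch of what to run on every node. The port list below is roughly what RKE with Canal needs (apiserver, etcd, kubelet, VXLAN overlay, ingress); please double-check it against the RKE port requirements for your version:

# open the ports RKE/Canal need
sudo firewall-cmd --permanent --add-port=6443/tcp --add-port=2379-2380/tcp --add-port=10250/tcp --add-port=8472/udp --add-port=80/tcp --add-port=443/tcp
sudo firewall-cmd --reload
# or, purely as a temporary test, take firewalld out of the picture entirely
sudo systemctl stop firewalld
sudo systemctl disable firewalld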

How can I avoid these errors?

E0512 18:34:46.932562 15379 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0512 18:34:46.944367 15379 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0512 18:34:46.951074 15379 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0512 18:34:46.955359 15379 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
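Those memcache.go messages are client-side: kubectl is trying to discover the metrics.k8s.io/v1beta1 API, which is served by metrics-server through an aggregated APIService, and metrics-server is in CrashLoopBackOff because of the underlying networking problem. They should disappear once metrics-server is healthy again. To confirm (assuming the deployment is named metrics-server, as the pod names above suggest):

# is the aggregated API registered and available?
[rke@kube-master ~]$ kubectl get apiservice v1beta1.metrics.k8s.io
[rke@kube-master ~]$ kubectl describe apiservice v1beta1.metrics.k8s.io
# why is metrics-server crashing?
[rke@kube-master ~]$ kubectl -n kube-system logs deploy/metrics-server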