Thanks for your time. My cluster seems to be fine except that I have to constantly restart kubelet on the controller: after an irregular length of time I start getting "connection refused" from kubectl, and once kubelet is restarted kubectl works again. Sometimes it happens within a minute, other times it's good for a few minutes.
I need to figure out how to fix it permanently, though.
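For reference, here's what I've been checking each time it drops, assuming a kubeadm-style install with the API server on the default port 6443 (adjust if yours differs):

    # Is kubelet itself running, and what do its recent logs say?
    systemctl status kubelet
    sudo journalctl -u kubelet --since "10 minutes ago" --no-pager

    # Is anything actually listening on the API server port?
    sudo ss -tlnp | grep 6443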
I just noticed I still get connection refused even when kubelet is disabled.
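From what I've read, on a kubeadm cluster the API server runs as a static pod under the container runtime, so stopping kubelet wouldn't by itself kill it; that made me check the container directly. A sketch of what I ran, assuming containerd as the runtime (replace <container-id> with the ID from the first command):

    # List the kube-apiserver container, including exited instances
    sudo crictl ps -a | grep kube-apiserver

    # Read the container's logs to see why it died
    sudo crictl logs <container-id>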
Thanks for your help, Theog75. I just built this cluster as my first dabble with Kubernetes, so I don't know a whole lot about it at this point. It's brand new; in fact I only have 1 test pod installed on it. Here's the output from kubectl get pods -n kube-system:
I just did a kubectl get events --all-namespaces | grep -i $podname to look at the controller, and I see a ton of Killing and restarting events, but they don't give any idea of the root cause.
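To dig past the events, I'm going to try these next ($podname here is whichever kube-system pod is being killed):

    # Shows last state, termination reason, and exit code for the pod's containers
    kubectl describe pod $podname -n kube-system

    # Logs from the previous (crashed) instance of the container
    kubectl logs $podname -n kube-system --previous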
I’m going to destroy and rebuild the cluster rather than waste a ton of time on this stuff right now.
Tried rebuilding from scratch and got the same problem. I'm wondering if the instructions I'm following are too old to be any good anymore. I also tried the same instructions with version 1.24, but the same problem persists.
It took me a while to see your post. I faced the same issue on Ubuntu 22.04, and once I changed to 20.04 my kube-system pods stopped restarting intermittently. I guess Ubuntu 22.04 has some incompatibility, or needs some extra config, to run kubelet. Note: my Ubuntu VMs run on VirtualBox on Windows; I didn't look deeper since these environments are for personal testing/learning only.
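One guess in case you want to stay on 22.04 (I haven't verified this on my VMs): Ubuntu 22.04 defaults to cgroup v2, and a common cause of exactly this kind of intermittent kube-system restarting is containerd not being set to the systemd cgroup driver that kubeadm setups expect. A sketch of the change, assuming containerd 1.6+ with the stock config layout:

    # In /etc/containerd/config.toml, switch runc to the systemd cgroup driver:
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true

    # Then restart the runtime and kubelet:
    sudo systemctl restart containerd
    sudo systemctl restart kubelet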