After successfully running my first kubeadm join of a worker node to the existing control node, I see three pods on the new node, but all have a status of CreateContainerError or Init:CreateContainerError. I'm not sure how to debug this.
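For reference, this is roughly how I'm listing the pods that landed on the new node (worker-1 is just a placeholder for my node's name):
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=worker-1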
describe pod of the kube-proxy pod (that's on the new node) gives a curious Events list:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 89s (x775 over 170m) kubelet Container image "registry.k8s.io/kube-proxy:v1.31.2" already present on machine
That's it. I also see in the output:
IP:
IPs: <none>
Not sure whether these should have values yet.
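Since describe doesn't show much, my next idea is to ask the container runtime on the worker directly via crictl (assuming the default CRI socket is configured; the container ID is a placeholder):
crictl ps -a                   # list all containers, including ones that never started
crictl inspect <container-id>  # the runtime's view of why the create failed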
In journalctl, I see a lot of these from kubelet:
E1113 … driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W1113 … driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
E1113 … plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
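To see whether that FlexVolume driver is even supposed to exist, I can compare the plugin directory from the log on both nodes (I haven't confirmed what should be in it on a fresh worker):
ls -lR /usr/libexec/kubernetes/kubelet-plugins/volume/exec/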
journalctl --grep=CreateContainerError returns messages like:
CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
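Since sd-bus sounds systemd-related, my plan is to compare the D-Bus state and the cgroup-driver settings between the two domUs; these checks assume a kubeadm-default layout with containerd, so the paths may differ on my setup:
systemctl is-active dbus                          # is the system bus service running?
ls -l /run/dbus/system_bus_socket                 # does the bus socket exist?
grep cgroupDriver /var/lib/kubelet/config.yaml    # kubelet's cgroup driver
grep SystemdCgroup /etc/containerd/config.toml    # containerd's cgroup driver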
Cluster information:
This is an on-prem system: one physical machine, with the nodes running as Xen domUs.
What confounds me is that the control-node domU should be set up the same way as the worker-node domU, yet I don't see the issue over there.
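One sanity check I can do on the "same setup" assumption is to compare what the API server reports for each node, since the wide output includes the kubelet, OS, kernel, and container runtime versions:
kubectl get nodes -o wide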