Service                               Startup  Current   Notes
lxd.activate                          enabled  inactive  -
lxd.daemon                            enabled  inactive  socket-activated
microk8s.daemon-apiserver             enabled  inactive  -
microk8s.daemon-apiserver-kicker      enabled  active    -
microk8s.daemon-cluster-agent         enabled  active    -
microk8s.daemon-containerd            enabled  active    -
microk8s.daemon-control-plane-kicker  enabled  inactive  -
microk8s.daemon-controller-manager    enabled  inactive  -
microk8s.daemon-etcd                  enabled  inactive  -
microk8s.daemon-flanneld              enabled  inactive  -
microk8s.daemon-k8s-dqlite            enabled  active    -
microk8s.daemon-kubelet               enabled  inactive  -
microk8s.daemon-kubelite              enabled  active    -
microk8s.daemon-proxy                 enabled  inactive  -
microk8s.daemon-scheduler             enabled  inactive  -
microk8s.daemon-traefik               enabled  inactive  -
I have installed MicroK8s version 1.23.
There are 3 master and 3 worker nodes.
The output above is from one of the 3 master nodes.
Environment: ESXi 6.7
Could you tell me how to resolve this?
When I installed a single-master setup, it had the same problem.
Environment: VirtualBox 5.2
This is normal; I think since 1.20 or 1.21 the apiserver, controller-manager, scheduler, kube-proxy, and kubelet have been combined into a single kubelite process.
That's why you only see the following as active:
microk8s.daemon-kubelite enabled active -
microk8s.daemon-containerd enabled active -
microk8s.daemon-apiserver-kicker enabled active -
microk8s.daemon-cluster-agent enabled active -
microk8s.daemon-k8s-dqlite enabled active -
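If you want to confirm the active set on your own nodes, one way (a small sketch, assuming you save the `snap services microk8s` table shown above to a file such as `services.txt`) is to filter on the Current column:

```shell
# Print only the daemons snapd reports as active.
# Assumes the `snap services` table layout shown above:
# column 1 = service name, column 2 = startup, column 3 = current state.
awk '$3 == "active" {print $1}' services.txt
```

On a 1.22+ node this should print only the five kubelite-era daemons listed above; the inactive apiserver/controller-manager/scheduler/proxy/kubelet units are expected.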
I have configured 3 master nodes and 3 worker nodes.
When I deploy the nginx application on both the worker and the master, the nginx deployment on the worker node stays in Pending status, but the nginx deployment on the master node is deployed well.
The calico-node messages are as follows:
kubectl logs calico-node-xcjkz -n kube-system on the worker node
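A Pending pod usually means it could not be scheduled or its network could not be set up; `kubectl describe pod <name>` shows the reason in its Events section. As a small sketch (the pod names below are hypothetical, not from this cluster), the Pending pods can be pulled out of `kubectl get pods` output with a filter like:

```shell
# pending.sh: print the names of pods whose STATUS column is Pending.
# Reads `kubectl get pods` output on stdin and assumes the default
# columns: NAME  READY  STATUS  RESTARTS  AGE (STATUS is column 3).
awk 'NR > 1 && $3 == "Pending" {print $1}'
```

Usage would be something like `kubectl get pods | sh pending.sh`, followed by `kubectl describe pod <name>` on each reported pod to read the scheduler or CNI error.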