The apiserver/kubelet are inactive

153:/var/snap/microk8s/current/var/kubernetes/backend# snap services

Service                               Startup  Current   Notes
lxd.activate                          enabled  inactive  -
lxd.daemon                            enabled  inactive  socket-activated
microk8s.daemon-apiserver             enabled  inactive  -
microk8s.daemon-apiserver-kicker      enabled  active    -
microk8s.daemon-cluster-agent         enabled  active    -
microk8s.daemon-containerd            enabled  active    -
microk8s.daemon-control-plane-kicker  enabled  inactive  -
microk8s.daemon-controller-manager    enabled  inactive  -
microk8s.daemon-etcd                  enabled  inactive  -
microk8s.daemon-flanneld              enabled  inactive  -
microk8s.daemon-k8s-dqlite            enabled  active    -
microk8s.daemon-kubelet               enabled  inactive  -
microk8s.daemon-kubelite              enabled  active    -
microk8s.daemon-proxy                 enabled  inactive  -
microk8s.daemon-scheduler             enabled  inactive  -
microk8s.daemon-traefik               enabled  inactive  -

I have installed version 1.23 of MicroK8s.
There are 3 master and 3 worker nodes.
The messages above are from one of the 3 master nodes.
Environment: ESXi 6.7
Could you tell me how to resolve it?

When I installed a single master, it had the same problem.
Environment: VirtualBox 5.2

Install method: snap install microk8s --classic --channel=1.23/stable
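
For reference, this is roughly how the install can be sanity-checked afterwards (standard MicroK8s commands; output omitted):

# wait until MicroK8s reports itself ready, then list the nodes
microk8s status --wait-ready
microk8s kubectl get nodes -o wide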

This is normal; I think from 1.20 or 1.21 the apiserver, controller-manager, scheduler, kube-proxy and kubelet are combined into kubelite.

That's why you only see the following as active:

microk8s.daemon-kubelite enabled active -
microk8s.daemon-containerd enabled active -
microk8s.daemon-apiserver-kicker enabled active -
microk8s.daemon-cluster-agent enabled active -
microk8s.daemon-k8s-dqlite enabled active -
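
If you want to double-check that the apiserver is really serving from inside kubelite, something like this should do it (just a sketch; the journald unit name follows the usual snap.<snap>.<service> pattern):

# a healthz probe through kubectl confirms the apiserver inside kubelite is answering
microk8s kubectl get --raw /healthz

# kubelite logs (apiserver, scheduler, kubelet, etc.) all go to this one systemd unit
journalctl -u snap.microk8s.daemon-kubelite -n 50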

Thank you for your answers.

The reason I asked is that the worker nodes can't reach the Kubernetes apiserver on the 3 masters.

Pod CIDR: 10.2.0.0/16
Service CIDR: 10.152.183.0/24
Kubernetes service IP: 192.168.185.1

The deployment goes to Pending status when I deploy apps to a worker.
It deploys fine on a master.

The Pending status message says 192.168.185.1:443 can't be accessed.

I have changed the pod and service CIDR following this URL.

Is it a mistake to change the service and pod CIDR ranges?
I don't know why the apps can't be deployed to the worker nodes.
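
In case it helps, these are roughly the commands for looking at a Pending deployment and at where MicroK8s keeps the CIDR settings (the pod name is just a placeholder; the args path is the MicroK8s default and may differ on other setups):

# see why the pod is stuck in Pending
kubectl describe pod <pending-pod-name>
kubectl get events --sort-by=.metadata.creationTimestamp

# the pod and service CIDR flags live in the MicroK8s args files
grep -rE 'cidr|service-cluster-ip-range' /var/snap/microk8s/current/args/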

You can change the pod CIDR and the service CIDR.

What do you mean by the pod cannot be deployed onto the worker node?

How many nodes in total do you have?
Thanks

I have configured 3 master nodes and 3 worker nodes.

When I deploy the nginx application to both a worker and a master, the nginx deployment on the worker node stays in Pending status, but the nginx deployment on the master node deploys fine.

The calico-node log messages are as follows:

kubectl logs calico-node-xcjkz -n kube-system on the worker node:

Hit error connecting to datastore - retry error=Get "https://10.152.185.1:443/api/v1/nodes/foo": dial tcp 10.152.185.1:443: connect: network is unreachable
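
As a rough check, the address calico is dialing can be compared against the actual kubernetes Service and its endpoints (default names in the default namespace):

# the ClusterIP calico connects to should match the kubernetes Service
kubectl get svc kubernetes -n default
kubectl get endpoints kubernetes -n default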

Chain KUBE-SERVICES (2 references)
target     prot opt source    destination
REJECT     tcp  --  anywhere  10.152.185.11   /* default/webserver:http has no endpoints */ tcp dpt:http reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere  10.152.185.124  /* istio-system/istiod:https-dns has no endpoints */ tcp dpt:15012 reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere  10.152.185.52   /* istio-system/istio-egressgateway:http2 has no endpoints */ tcp dpt:http reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere  10.152.185.212  /* monitoring/blackbox-exporter:https has no endpoints */ tcp dpt:9115 reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere  10.152.185.212  /* monitoring/blackbox-exporter:probe has no endpoints */ tcp dpt:19115 reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere  10.152.185.124  /* istio-system/istiod:grpc-xds has no endpoints */ tcp dpt:15010 reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere  10.152.185.124  /* istio-system/istiod:http-monitoring has no endpoints */ tcp dpt:15014 reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere  10.152.185.142  /* istio-system/istio-ingressgateway:status-port has no endpoints */ tcp dpt:15021 reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere  10.152.185.142  /* istio-system/istio-ingressgateway:http2 has no endpoints */ tcp dpt:http reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere  10.152.185.142  /* istio-system/istio-ingressgateway:https has no endpoints */ tcp dpt:https reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere  10.152.185.118  /* my-nginx/webserver:http has no endpoints */ tcp dpt:http reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere  10.152.185.52   /* istio-system/istio-egressgateway:https has no endpoints */ tcp dpt:https reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere  10.152.185.124  /* istio-system/istiod:https-webhook has no endpoints */ tcp dpt:https reject-with icmp-port-unreachable
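
(For context, the chain above was listed on the worker with something like the command below; the "has no endpoints" REJECT rules are what kube-proxy installs when a Service has no ready backends.)

# list kube-proxy's reject rules for services without endpoints
sudo iptables -L KUBE-SERVICES -n --line-numbers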

Do you have a firewall configured between the worker and main nodes?

Thank you for your support.

I have resolved it myself.
The problem was the endpoints.

When I ran kubectl get endpoints, the result showed the public IPs:

NAME         ENDPOINTS                                             AGE
kubernetes   public ip1:16443,public ip2:16443,public ip3:16443   8d

I had configured the cluster with the private IPs:

NAME ENDPOINTS AGE
kubernetes 192.168.148.152:16443,192.168.148.153:16443,192.168.148.154:16443 8d

I modified the kube-apiserver with the option --advertise-address=<private IP>.
That resolved it.
I don't know why it picked up the public IP.
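
For anyone hitting the same thing, this is roughly what the fix looked like (the args path is the MicroK8s default; replace the address with your own private IP, and repeat on each master):

# advertise the private address instead of the public one
echo '--advertise-address=192.168.148.152' | sudo tee -a /var/snap/microk8s/current/args/kube-apiserver

# restart MicroK8s so the apiserver picks up the new flag
sudo snap restart microk8s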

Thank you.


Oh, you have 2 network interfaces. Nice that you have resolved it.
