Kube-controller-manager is missing on GKE

So, here’s the story. We are using GKE, running a few services across a couple of clusters with a couple of namespaces, etc. Today I was troubleshooting something and tried to access kube-controller-manager as described in many sources on the internet.

  1. I connected to a cluster using the gcloud CLI
  2. I ran kubectl get pods --namespace=kube-system
  3. I observed no kube-controller-manager there at all
  4. I connected to the VMs running the k8s nodes (I actually tried connecting to all of them, with no luck) and searched for /var/log/kube-controller-manager.log - no luck
  5. I also checked /etc/kubernetes/manifests and found only the kube-proxy manifest there
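For reference, the steps above can be sketched as follows (the cluster, zone, and node names are placeholders, not the actual ones from my setup):

```shell
# Fetch credentials for the cluster (placeholder names)
gcloud container clusters get-credentials my-cluster --zone us-central1-a

# List the system pods -- kube-proxy, kube-dns, etc. show up,
# but no kube-controller-manager or kube-scheduler
kubectl get pods --namespace=kube-system

# SSH to a node and look for the controller-manager log and static manifests
gcloud compute ssh my-node --zone us-central1-a
ls /var/log/kube-controller-manager.log   # not found
ls /etc/kubernetes/manifests              # only kube-proxy here
```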

I am certain that not long ago I tried doing the same thing, and the kube-controller-manager pod was there and I could find all the logs.
Can someone point me in the right direction here?
Is this related to me not connecting to the correct node? (I can’t see any special nodes, to be honest; I tried connecting to them all and none of them looked anything like a master.)
Is it something new in how k8s is set up on GKE?
Am I missing some IAM permissions?

Please help.

Controller manager runs on the master machine. That machine is not a
node in the cluster.
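On GKE the control plane is hosted and managed by Google, so the master machines never appear among your project’s VMs. You can still probe the health of the control-plane components through the API server, e.g.:

```shell
# Deprecated since Kubernetes 1.19, but still answers on many versions:
kubectl get componentstatuses

# The API server's aggregated readiness checks are another option:
kubectl get --raw='/readyz?verbose'
```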


Hi @ninjaboy, @thockin, I have the same problem you mention, including the scheduler. I’m working on mounting agents with Helm templates and couldn’t find how to solve it. I don’t see these among the pods, but when I execute kubectl get endpoints --all-namespaces, both the controller and the scheduler are listed - yet they are the only ones that have NO endpoint associated, as you can see below:

Note: I replaced the IPs with 0s

NAMESPACE     NAME                        ENDPOINTS                                                  AGE
default       kubernetes                  00.000.00.00:443                                           2y119d
default       vsts-agent                                                                             47h
kube-system   dashboard-metrics-scraper   00.000.00.00:8000                                          2y119d
kube-system   kube-controller-manager     <none>                                                     2y119d
kube-system   kube-dns                    00.000.00.00:53,00.000.00.00:53,00.000.00.00:53 + 3 more...   2y119d
kube-system   kube-scheduler              <none>                                                     2y119d
kube-system   kubernetes-dashboard        00.000.00.00:8443                                          2y119d
kube-system   metrics-server              00.000.00.00:4443,00.000.00.00:4443                        2y119d
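As far as I understand, the empty ENDPOINTS column is expected: those two objects historically carried a leader-election record in an annotation rather than pod addresses. On versions that still use Endpoints-based leader election, you can see it with:

```shell
# Dump the kube-controller-manager Endpoints object; the leader-election
# record, if present, lives in the
# control-plane.alpha.kubernetes.io/leader annotation
kubectl get endpoints kube-controller-manager -n kube-system -o yaml
```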

Have you found a way to solve this issue?

Additionally, my pods are in the CrashLoopBackOff state, and I don’t know if this issue is related to that state.
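For what it’s worth, the usual first checks for CrashLoopBackOff are the following (pod and namespace names are placeholders):

```shell
# Show events and the last state of the crashing container
kubectl describe pod my-pod -n my-namespace

# Logs from the previous (crashed) container instance
kubectl logs my-pod -n my-namespace --previous
```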

Thanks in advance.

I think these are listed as Endpoints because we used that as a lease (for leader election) in the past.

They are not pods running in the cluster - you cannot access them directly. What are you trying to achieve?
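Recent versions use coordination.k8s.io Lease objects for that leader election instead of Endpoints annotations; you can list them with:

```shell
# Leader-election leases for control-plane components
kubectl get leases -n kube-system
```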