I am manually installing the Kubernetes master components, and when it comes to kube-controller-manager, I am not able to start the service.
Cluster information:
Kubernetes version: 1.26
Cloud being used: VM
Installation method: Manual
Host OS: Oracle Linux 8
CNI and version: 1.26
CRI and version: 1.7.0
Here is the systemd unit file. The service starts, but it fails again within a second. All the certificate files are in place:
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/bin/kube-controller-manager \
  --bind-address=127.0.0.1 \
  --service-cluster-ip-range=10.32.0.0/24 \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.200.0.0/16 \
  --cluster-name=poc-k8s \
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \
  --authentication-kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \
  --authorization-kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \
  --leader-elect=true \
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.crt \
  --cluster-signing-key-file=/var/lib/kubernetes/ca.pem \
  --root-ca-file=/var/lib/kubernetes/ca.crt \
  --service-account-private-key-file=/var/lib/kubernetes/service-account.pem \
  --use-service-account-credentials=true \
  --v=2
Restart=on-failure
RestartSec=5
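For reference, this is roughly how I have been checking the failure after reloading and starting the service (assuming the unit is installed as kube-controller-manager.service; the unit name is my own naming):

systemctl daemon-reload
systemctl start kube-controller-manager
systemctl status kube-controller-manager
journalctl -u kube-controller-manager --no-pager -n 50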