What is required to patch Kubernetes cluster configuration?

Cluster information:

Kubernetes version: 1.24
Cloud being used: bare metal
Installation method: kubeadm
Host OS: Linux (Ubuntu)
CNI and version: Calico 0.3.1 (I think)
CRI and version: CRI-O (not sure of version)

I need to adjust the configuration of some of the lower-level components within my cluster: specifically the controller manager, scheduler, kube-proxy, and etcd. I found documentation that discusses how to do this.

After reading this documentation, there are apparently a number of things that need to be done:

  • Edit the ConfigMap kubeadm-config in the kube-system namespace to modify configuration, in the form of extraArgs, for the controller manager and scheduler (see my attempt at an example after this list).
  • Edit ConfigMap kube-proxy in the kube-system namespace to modify configuration for K8s proxy pods.
  • Additionally, the changes need to be reflected in the static Pod manifests located on the control-plane (master) node(s) in the /etc/kubernetes/manifests directory.
  • Finally, patch files should be created and maintained on the master nodes. These files would be used in the event that the cluster is upgraded at some point.

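To show what I mean by the first bullet: my understanding is that the relevant part of the ClusterConfiguration stored in the kubeadm-config ConfigMap (which I can open with kubectl -n kube-system edit configmap kubeadm-config) would end up looking roughly like this. The bind-address values are the change I want to make; the rest is my guess at the shape, so please correct me if it's wrong:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
# (other ClusterConfiguration fields omitted)
controllerManager:
  extraArgs:
    bind-address: "0.0.0.0"   # change from the default 127.0.0.1
scheduler:
  extraArgs:
    bind-address: "0.0.0.0"
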
It is this last bullet that is the focus of this post. Until reading the documentation referenced above, I had no prior experience with patching a cluster, so I have a number of questions.

  • Can the patch process be used to patch anything in a cluster, or is it limited to specific items, such as a Deployment?
  • If the patch manifests should be kept on the master node(s), where should they be placed? Is there a convention for location and file naming? (See my guess at a layout after this list.)
  • What does a patch file need to contain in order to be valid?

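To make the location/naming question concrete, this is the kind of layout I am guessing at after reading the docs. The directory path and file names below are purely my assumptions, which is exactly what I would like confirmed:

/etc/kubernetes/patches/                    # location is my guess; I believe the directory is passed to kubeadm via a --patches flag
  kube-controller-manager+strategic.yaml    # target name plus patch type, per my reading of the docs
  kube-scheduler+strategic.yaml
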
For example, for the controller manager, I need to modify the bind address from 127.0.0.1 to 0.0.0.0. I’m not sure how much of the config I need to specify. Here is the kube-controller-manager command under spec.containers[0].command:

spec:
  containers:
  - command:
    - kube-controller-manager
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --use-service-account-credentials=true

What do I need to specify in the patch file? Could it be as simple as this?

spec:
  containers:
  - command:
    - --bind-address=0.0.0.0

Or do I need to provide more? It's not clear to me.
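
For comparison, here is my guess at a fuller version that names the container and repeats the whole command list, since I vaguely recall that strategic merge patches replace plain lists of strings rather than merging them. The file name in the comment, kube-controller-manager+strategic.yaml, is also just my assumption from the docs:

# kube-controller-manager+strategic.yaml (file name is my guess)
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=0.0.0.0
    # ...the remaining flags from the manifest above, unchanged...

Any guidance on which of these (if either) kubeadm actually expects would be appreciated.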
