k8s.gcr.io: who takes care of updating kube-system images?

Cluster information:

Kubernetes version: 1.23.16-gke.1100
Cloud being used: GCP
Installation method: Manual

Regarding the k8s.gcr.io freeze: we checked our cluster and found some manifests that still use images from the k8s.gcr.io registry. They belong to the kube-system namespace:

  • metadata-proxy
  • ingress-nginx-controller
  • kube-dns-autoscaler
  • l7-default-backend
  • monitoring-kube-state-metrics
  • kube-proxy

Will those be updated automatically (when upgrading to a newer Kubernetes version), or do we need to update them manually ourselves?
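For context, this is roughly how we checked which images are still pulled from the frozen registry (a sketch; the `filter_old_registry` helper name is mine, and the sample input below is illustrative):

```shell
# filter_old_registry: keep only image names from the frozen registry.
filter_old_registry() {
  sort -u | grep '^k8s.gcr.io/' || true
}

# On a live cluster you would feed it every image referenced by pods:
#   kubectl get pods -A -o jsonpath='{.items[*].spec.containers[*].image}' \
#     | tr ' ' '\n' | filter_old_registry
# Sample input, to show what it flags:
printf 'k8s.gcr.io/kube-proxy:v1.23.16\nregistry.k8s.io/pause:3.6\n' | filter_old_registry
```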

Thanks for the help!

The answer is mixed.

First: you say “Installation method: Manual”, but you list a GKE version. I’m going to assume that means you are using GKE; if you manually turned up a cluster from those images, the answer is different :slight_smile:

Are your nodes all upgraded to the same version as your control-plane? I turned on a GKE cluster at that exact version and I see no k8s.gcr.io images at all, and most of the images you list are node-oriented.

ingress-nginx-controller is NOT something GKE ships - you must have installed that yourself, so you’ll have to upgrade it yourself. The good news is that the same image exists under the new name, so changing the image name (and any saved references, like Helm charts) will suffice.
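Since the images were moved under the same paths, the rename is mechanical - only the registry host changes. A sketch (the controller tag here is hypothetical):

```shell
# Only the registry host changes; path and tag stay the same.
old='k8s.gcr.io/ingress-nginx/controller:v1.1.1'   # hypothetical tag
new="$(echo "$old" | sed 's|^k8s.gcr.io/|registry.k8s.io/|')"
echo "$new"
```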

Thanks for the answer!

I was not sure which parts of the setup are relevant, so yes, it is a GKE cluster, but without Autopilot (that is why I wrote “manual”).

And since it is a GKE cluster, the control plane and all the nodes are on the same version. If you could set up a cluster with the same version, why is it that our cluster still uses k8s.gcr.io images for the kube-system services?

The ingress-nginx-controller is indeed something we installed ourselves; thanks for the hint.

It’s not beyond the realm of possibility that some bug has, over time, left detritus in your cluster. I have not heard of or seen such a bug, but it’s possible. Are you willing to share more information?

If so, I’d love to see the full kubectl get -o yaml for each of these pods, as well as gcloud container clusters describe (you can mask sensitive info like external IPs, cert info, project name/number, etc.).

Figured out where they were coming from. I was confused by old ReplicaSets lying around that still referenced the old images. In addition, the node manifests contain a list of images that still includes the old ones.
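For anyone hitting the same confusion: old ReplicaSets left behind by Deployment rollouts keep the old image in their pod template, but run zero pods, so they can be spotted by their desired-replica count. A sketch (the ReplicaSet names below are illustrative):

```shell
# On a live cluster you would list name + desired replicas, e.g.:
#   kubectl get rs -n kube-system \
#     -o custom-columns=NAME:.metadata.name,DESIRED:.spec.replicas
# Sample of what scaled-down leftovers look like; print only those at 0:
printf 'kube-dns-autoscaler-7db47cb9b7 0\nkube-dns-autoscaler-5f56f8997c 2\n' \
  | awk '$2 == 0 {print $1}'
```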

The only kinda “real” leftover is the metadata-proxy DaemonSet, which still exists but has no active pods.

That was left to cover upgrades from old nodes, and has a selector that should not select any modern nodes.

Glad to hear it’s not a real issue! You had me a little worried, not gonna lie. I even turned on a 1.21 cluster and was walking through upgrades :slight_smile: