GKE scaled-down nodes won't terminate

Hello,

(This is a GKE-specific question; if there’s a better forum, please let me know.)

I have a cluster with horizontal autoscaling enabled. I’ve used both regular and preemptible node pools, and in both cases, after the cluster scales up properly under CPU load and then scales back down once the workload is done, there are always 1-3 nodes that stay alive “forever” (several days) with no workload pods on them, only system pods, namely:

kube-proxy, metrics-server, metadata-agent and fluentd, using a total of about 0.3 CPU and 400 MB of memory.
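
For context, this is how I’m listing what’s left on one of those nodes (the node name below is just a placeholder for one of the leftover nodes):

```sh
# Show everything still scheduled on one of the leftover nodes,
# including the system pods in kube-system.
kubectl get pods --all-namespaces -o wide \
  --field-selector spec.nodeName=gke-my-cluster-default-pool-XXXXXXXX-XXXX
```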

Is there a way to tune k8s so these nodes will be terminated?

Thanks!

GKE manages the number of system-pod replicas itself, and only adjusts them very slowly. If you try to set the replica counts yourself, the master will just set them back.

I run three or more node pools in GKE for this reason:

  • GKE base pool: 1-3 nodes depending on zonal/regional (f1-micro/g1-small)
  • GKE autoscaling pool: 0…n nodes (f1-micro/g1-small)
  • my own pools with a NO_EXECUTE taint that my workloads tolerate, but the GKE system pods don’t.

This ensures that the GKE system workloads can still scale, but don’t leak onto my nodes and prevent them from scaling down. Dropping the autoscaling GKE pool might be okay if your cluster does not get too large.

This setup also lets you manually scale down the autoscaling pool. You can even scale the base pool to zero if you manually scale it back up when you need the cluster again. A rough sketch of the pool layout is below.
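
A minimal gcloud sketch of that layout, assuming a zonal cluster called my-cluster in europe-north1-a and a taint of dedicated=workload (all names, zones and machine types here are placeholders):

```sh
# Small always-on base pool that can host the GKE system pods.
gcloud container node-pools create base-pool \
  --cluster=my-cluster --zone=europe-north1-a \
  --machine-type=g1-small --num-nodes=1

# Autoscaling pool that gives the system pods room to grow and shrink.
gcloud container node-pools create system-pool \
  --cluster=my-cluster --zone=europe-north1-a \
  --machine-type=g1-small --num-nodes=1 \
  --enable-autoscaling --min-nodes=0 --max-nodes=3

# Tainted pool for my own workloads; the GKE system pods don't tolerate
# the taint, so they never land here and never block scale-down.
gcloud container node-pools create workload-pool \
  --cluster=my-cluster --zone=europe-north1-a \
  --machine-type=n1-standard-4 --num-nodes=1 \
  --enable-autoscaling --min-nodes=0 --max-nodes=10 \
  --node-taints=dedicated=workload:NoExecute
```

Your own workloads then need a matching toleration (key dedicated, value workload, effect NoExecute) in their pod spec so they can schedule onto the tainted pool.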

There is a GitHub issue open about this, but I can’t find it.

Do you have any idea whether this bug is still relevant in GKE v1.12.6-gke.7?
It seems the answer is ‘yes’, because I ran into the same behaviour - no scale-down.
@matti - thank you for the suggested workaround, but is there any chance a simpler solution exists?
For example, couldn’t we just reconfigure the ‘kube-system’ pods so they don’t use the ‘critical-pod’ annotation?
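
For what it’s worth, this is roughly how I’m checking which kube-system pods carry that annotation (a best-effort sketch; the jsonpath escaping may need adjusting for your kubectl version):

```sh
# Print each kube-system pod together with the value of its
# scheduler.alpha.kubernetes.io/critical-pod annotation (empty if unset).
for pod in $(kubectl get pods -n kube-system -o name); do
  printf '%s\t' "$pod"
  kubectl get "$pod" -n kube-system \
    -o jsonpath='{.metadata.annotations.scheduler\.alpha\.kubernetes\.io/critical-pod}'
  echo
done
```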

Thanks,
Vitaly

nope, welcome to Managed Kubernetes by google :wink:

Is there a solution to this problem? The pod disruption budget workaround no longer seems to be effective, and the taint/toleration solution doesn’t work either now that a few of the system Pods ignore all taints.
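
For anyone else landing here: the PDB workaround mentioned above was to give the non-DaemonSet kube-system pods a PodDisruptionBudget so the cluster autoscaler is allowed to evict them during scale-down. A minimal sketch, assuming the usual GKE labels (verify them first with “kubectl get pods -n kube-system --show-labels”):

```sh
# Allow the autoscaler to evict metrics-server when draining a node.
# Repeat for any other non-DaemonSet kube-system pod that shows up as
# blocking scale-down (the label selector is an assumption; check yours).
kubectl create poddisruptionbudget metrics-server-pdb \
  --namespace=kube-system \
  --selector=k8s-app=metrics-server \
  --max-unavailable=1
```

As far as I know, DaemonSet and static pods such as fluentd and kube-proxy don’t block scale-down by themselves, so they shouldn’t need a PDB.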

I’m having the same issue here, and it’s actually costing our business a fair bit of unnecessary money. Does anyone have any updates on this? I’d appreciate the help.