Unexpected increase in io.write_bytes on nodes after upgrading EKS to Kubernetes 1.34.1

Hi everyone,

Recently, after upgrading my EKS cluster to Kubernetes version 1.34.1, I noticed an unexpected increase in the metric kubernetes.io.write_bytes on multiple nodes whenever my CronJobs run.

Before the upgrade, the write I/O generated by these CronJobs was stable and relatively low. Right after upgrading, however, the write I/O on the nodes started to spike significantly during each CronJob execution, even though:

  • The CronJob images and configuration were not changed,
  • The workload behavior is unchanged,
  • The spike appears consistently across different nodes,
  • system.fs.inodes.used also increases during the same time window.
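
In case it helps anyone trying to reproduce this: one way to attribute the extra writes is to read the cgroup v2 io.stat file for the CronJob's pod while it runs and sum the wbytes fields. This is only a sketch; the exact cgroup path depends on your AMI and cgroup driver, and the sample line below uses made-up values just to show the parsing.

```shell
# On the node, the real file lives somewhere like:
#   /sys/fs/cgroup/kubepods.slice/.../io.stat        (path varies by AMI/driver)
# Each line is "MAJ:MIN rbytes=... wbytes=... rios=... wios=..." per block device.
# Hypothetical sample line, standing in for the real file's contents:
sample='259:0 rbytes=1048576 wbytes=4194304 rios=10 wios=20'

# Sum wbytes across all key=value fields on the line.
total_wbytes=$(echo "$sample" | awk '{
  for (i = 2; i <= NF; i++)
    if ($i ~ /^wbytes=/) { split($i, a, "="); sum += a[2] }
} END { print sum }')
echo "$total_wbytes"
```

Sampling this before and after a CronJob run, and diffing, should tell you whether the writes are coming from the job's containers themselves or from something node-level (kubelet, containerd, logging).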

I’m trying to understand whether this behavior is expected with Kubernetes 1.34.x, whether it’s related to kubelet/containerd changes, or if there’s a known issue associated with this version.
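
To rule the runtime in or out, it's worth checking whether the upgrade also bumped kubelet or containerd on the nodes. With cluster access you would run something like `kubectl get nodes -o custom-columns='NAME:.metadata.name,KUBELET:.status.nodeInfo.kubeletVersion,RUNTIME:.status.nodeInfo.containerRuntimeVersion'`. The snippet below just parses a sample line of that output (the version values are hypothetical) to show extracting the containerd version for comparison across node groups.

```shell
# Hypothetical sample row from the kubectl custom-columns output above:
sample='node-1   v1.34.1   containerd://1.7.27'

# Strip the "containerd://" prefix to get a bare version for diffing.
runtime_version=$(echo "$sample" | awk '{ sub("containerd://", "", $3); print $3 }')
echo "$runtime_version"
```

If the containerd version changed along with the Kubernetes version, that narrows down where the extra write I/O might be coming from.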

If anyone has encountered a similar situation or has any references or insights, I would really appreciate your help.

Thanks in advance!


Cluster information:

Kubernetes version: 1.34.1 (EKS)
Cloud being used: AWS EKS
Installation method: Managed EKS cluster
Host OS: Amazon Linux 2 (EKS optimized AMI)