Ephemeral Storage with emptyDir – Data Still Written to Disk, Not Ephemeral Storage

Cluster information:

Kubernetes version: 1.29.9
Cloud being used: Azure
Installation method: AKS managed
Host OS: Ubuntu 22.04

I’ve observed that even when specifying ephemeral-storage resource requests/limits and using an emptyDir volume in our Pod, data ends up being written to the node’s disk rather than staying in ephemeral space.

Observed Behavior:
In our case, the Pod’s configuration uses an emptyDir volume without specifying a medium. Checking the mount point (e.g., via mount | grep /rocksdb-state) shows the volume is backed by /dev/root (an ext4 disk). For example, the output is: /dev/root on /rocksdb-state type ext4

Steps to Reproduce:

  1. Create a Pod with a configuration like this example:

    containers:
    - name: test-container
      image: ubuntu
      volumeMounts:
      - name: rocksdb-storage
        mountPath: /rocksdb-state
      resources:
        requests:
          ephemeral-storage: "30Gi"
        limits:
          ephemeral-storage: "40Gi"
    volumes:
    - name: rocksdb-storage
      emptyDir: {}

  2. Deploy the Pod and exec into it.
  3. After verifying that data is being written to /rocksdb-state, inspect the mount using:

    mount | grep "/rocksdb-state"

Expected Behavior:
From the documentation, emptyDir volumes use the node’s disk by default unless explicitly configured to use memory.
I expected that setting ephemeral-storage resource requests/limits might cause the data to be managed as truly ephemeral storage. However, without specifying medium: "Memory", the emptyDir remains disk-backed.
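
For reference, this is the change I believe would make the volume RAM-backed (a sketch; the sizeLimit value is only illustrative):

```yaml
volumes:
- name: rocksdb-storage
  emptyDir:
    medium: Memory   # mount becomes tmpfs (RAM) instead of /dev/root
    sizeLimit: 4Gi   # illustrative cap; tmpfs usage counts against the container's memory limit
```

With this, mount | grep /rocksdb-state should report a tmpfs mount rather than an ext4 device.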

Actual Behavior:
The emptyDir volume is disk-backed even though it’s intended for ephemeral storage. This creates some confusion regarding the interaction between ephemeral-storage requests/limits and the emptyDir volume: the requests/limits seem to affect only scheduling and eviction accounting, not where the data actually lives.
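
One alternative I’ve been looking at for keeping this state off the OS disk (while still tying its lifetime to the Pod) is a generic ephemeral volume, which provisions a per-Pod PVC. This is only a sketch; the managed-csi StorageClass name is the AKS default and is an assumption here:

```yaml
volumes:
- name: rocksdb-storage
  ephemeral:
    volumeClaimTemplate:
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: managed-csi   # AKS built-in class; adjust for other clusters
        resources:
          requests:
            storage: 40Gi
```

This is still disk, but a dedicated provisioned volume rather than the node’s root filesystem, so it avoids competing with the OS disk.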

I’m open to suggestions, corrections, or alternative approaches that might help me use ephemeral (memory-backed) storage instead of disk for my use case.
Thanks in advance