Does anyone know the reason for this logic? I find it really unintuitive.
BTW, my use case required limits without requests:
I had a legacy application (pm2 with >1 workers) writing core dumps to disk. I didn't want it to affect the whole node; I preferred that Kubernetes kill the pod. I was surprised that it silently added requests to the pod itself.
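For illustration, a minimal sketch of that kind of spec (the names and values here are made up, not from my actual manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app             # hypothetical name
spec:
  containers:
  - name: app
    image: legacy-app:latest   # placeholder image
    resources:
      limits:
        memory: "512Mi"
        cpu: "500m"
      # no requests block -- Kubernetes fills it in, copying the limits
```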
Note: If a container specifies its own memory limit, but does not specify a memory request, Kubernetes automatically assigns a memory request that matches the limit. Similarly, if a container specifies its own CPU limit, but does not specify a CPU request, Kubernetes automatically assigns a CPU request that matches the limit.
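In other words, the spec gets mutated. For the sketch above, the persisted resources section would end up looking something like this (illustrative, not copied from a real cluster):

```yaml
resources:
  limits:
    cpu: "500m"
    memory: "512Mi"
  requests:          # added automatically, values copied from the limits
    cpu: "500m"
    memory: "512Mi"
```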
Requests are used by the scheduler when it has to assign the pod to a node (to check whether the node has the requested amount of CPU or RAM available). So the request is a prerequisite. Limits control the amount of resources (CPU/RAM) that a container can use. If the container exceeds its memory limit, it gets OOM-killed and restarted. If it tries to use more CPU than its limit, it gets throttled.
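So if you want the scheduler to reserve less than the cap, you have to set requests explicitly. A hedged sketch (the values are arbitrary examples, not recommendations):

```yaml
resources:
  requests:
    cpu: "100m"        # what the scheduler reserves on the node
    memory: "128Mi"
  limits:
    cpu: "500m"        # CPU usage above this is throttled
    memory: "512Mi"    # exceeding this gets the container OOM-killed
```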
I assume the logic behind defaulting the request to the limit (when the request is not specified) is to give the scheduler the maximum amount of resources the pod may need (the limit) as a reference, so it can pick the best node on which to assign the pod.
@Nir_Roz, there’s ways now where you can size the pods at the pod spec level without compromising the GitOps process. Let me know if you’d like to chat offline or DM me to discuss.