Limiting memory for a process inside the pod (container)

Hi,

We are migrating our bare-metal deployment to containers using k8s.

In our bare-metal deployment, we would occasionally hit memory issues while taking backups to NFS, in cases where we tar/compress and write large amounts of data to the NFS mount. To overcome this, we limited the cache usage by imposing a MemoryLimit on the tar process.
e.g.
systemd-run --scope -p MemoryLimit=500M tar ...

Now, while migrating to containers, I want to make sure we don't hit this issue again, but I can't find any way to limit memory/cache usage for a specific process inside a container.

Is there any way to achieve this?

Is there any alternative that would ensure that, if the backup runs into memory issues, it doesn't impact my pod/application or its traffic?

Hi, are you looking for this: Resource Quotas | Kubernetes?
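For context, a quota like that applies to a whole namespace rather than to a single process. A minimal sketch (the namespace name and values are placeholders):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-quota          # placeholder name
  namespace: my-app-ns     # placeholder namespace
spec:
  hard:
    requests.memory: 1Gi   # total memory requests allowed across the namespace
    limits.memory: 2Gi     # total memory limits allowed across the namespace
```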

Kubernetes does not have a way to limit one process, just a whole container.
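The finest granularity is the container's `resources` block, roughly like this sketch (image and values are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app                # placeholder
spec:
  containers:
  - name: app
    image: my-app:latest   # placeholder image
    resources:
      requests:
        memory: 256Mi      # what the scheduler reserves for the container
      limits:
        memory: 500Mi      # the whole container is OOM-killed if it exceeds this
```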

@thockin, thanks for replying.

Is there any recommended way to take backups on k8s that would also take care of the memory issues I described earlier?

I also thought of patching a temporary container into the running pod, so that I could take the backup from it and any memory issues would not impact my application container. But it seems I can't attach a container to a running pod either.

My k8s version is 1.15.x

Thanks @geekbot, but I was looking to limit memory for a single process inside the pod, not for the whole pod or container.

Run a separate container whose sole job it is to take snapshots, and apply appropriate resources there?
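Roughly like this sketch: a backup sidecar in the same pod, sharing the data volume, with its own memory limit so only it gets OOM-killed if tar misbehaves (all names, images and sizes are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-backup        # placeholder
spec:
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data      # placeholder PVC holding the data to back up
  containers:
  - name: app
    image: my-app:latest       # placeholder application image
    volumeMounts:
    - name: data
      mountPath: /data
    resources:
      limits:
        memory: 2Gi            # placeholder limit for the application
  - name: backup
    image: backup-tool:latest  # placeholder image that runs tar/compress to NFS
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: true
    resources:
      limits:
        memory: 500Mi          # applies only to this container, including page cache charged to it
```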

So having this container as part of the same pod would have given it access to the shared volumes, which would have made the backup easy. Keeping it separate means running the backup on the application container itself?
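Or, if it has to be a fully separate pod, could something like this work: a Job that mounts the same data and writes the archive to the NFS share? (Just a sketch; server, paths, image and names are placeholders, and sharing the PVC across pods would need ReadWriteMany or the pods co-scheduled on one node.)

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: nfs-backup                 # placeholder
spec:
  template:
    spec:
      restartPolicy: Never
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: app-data      # the same PVC the application pod mounts
      - name: backup-dest
        nfs:
          server: nfs.example.com  # placeholder NFS server
          path: /backups           # placeholder export path
      containers:
      - name: backup
        image: backup-tool:latest  # placeholder image with tar available
        command: ["sh", "-c", "tar czf /backup/app-data.tar.gz -C /data ."]  # illustrative only
        volumeMounts:
        - name: data
          mountPath: /data
          readOnly: true
        - name: backup-dest
          mountPath: /backup
        resources:
          limits:
            memory: 500Mi          # memory/cache pressure here cannot affect the application pod
```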