We are migrating our bare-metal deployment to containers on Kubernetes (k8s).
In the bare-metal deployment, backups to an NFS mount would occasionally run into memory issues when we tarred/compressed and wrote large amounts of data to the mount. To work around this, limitcache was introduced and a MemoryLimit was imposed on the tar process:
systemd-run --scope -p MemoryLimit=500M tar
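(Side note: as far as I understand, `MemoryLimit=` is the legacy cgroup v1 property name; on hosts running cgroup v2, the current systemd equivalent is `MemoryMax=`. The tar arguments below are just illustrative placeholders, not our real paths:)

```shell
# Same idea expressed with the cgroup v2 property name (MemoryMax=);
# backup source and destination paths are placeholders.
systemd-run --scope -p MemoryMax=500M \
    tar -czf /mnt/nfs/backup.tar.gz /data
```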
Now that we are migrating to containers, I want to make sure we don't hit the same issue there, but I can't find any way to limit the memory/cache usage of a specific process inside a container.
Is there any way to achieve this?
Alternatively, is there another approach that would guarantee that, if the backup causes memory pressure, it doesn't impact my pod/application or its traffic?
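To make the question concrete, here is a sketch of the kind of setup I have in mind: running the backup as its own CronJob container with a hard memory limit, so that if it exceeds the limit only this container is OOM-killed and the application pod is untouched. All names, the image, the schedule, and the volume/server paths below are illustrative placeholders, not our real configuration:

```yaml
# Sketch: backup runs in a dedicated container with its own memory limit.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nfs-backup            # placeholder name
spec:
  schedule: "0 2 * * *"       # placeholder schedule
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: backup
            image: alpine:3.19   # placeholder image
            command: ["tar", "-czf", "/mnt/nfs/backup.tar.gz", "/data"]
            resources:
              requests:
                memory: "128Mi"
              limits:
                memory: "500Mi"  # analogous to MemoryLimit=500M on bare metal
            volumeMounts:
            - name: nfs
              mountPath: /mnt/nfs
            - name: data
              mountPath: /data
          volumes:
          - name: nfs
            nfs:
              server: nfs.example.com   # placeholder NFS server
              path: /exports/backup     # placeholder export path
          - name: data
            emptyDir: {}                # placeholder for the data to back up
```

Would something along these lines behave the way I expect with respect to page cache accounting, or is there a better pattern?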