Context: a small Kubernetes cluster limited to 3 nodes.
I have a Docker image of a Python script that allocates a few hundred MB of memory and then shuts down after a few seconds. I managed to trigger it from another (long-running) service via the Kubernetes API by creating a Job for it and launching it with the desired parameters, using the generateName (random name suffix) feature.
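For illustration, this is roughly how I build the Job manifest (the image name, Job name prefix, and TTL are just example values, not my real config):

```python
def build_job_manifest(image, args):
    """Build a one-shot Job manifest.

    generateName makes Kubernetes append a random suffix, so repeated
    submissions don't collide; ttlSecondsAfterFinished has the control
    plane garbage-collect the finished Job and its Pod.
    """
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"generateName": "myscript-"},  # hypothetical prefix
        "spec": {
            "backoffLimit": 0,                # don't retry on failure
            "ttlSecondsAfterFinished": 60,    # auto-cleanup after completion
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [
                        {"name": "worker", "image": image, "args": args}
                    ],
                }
            },
        },
    }


# Example: the dict can be submitted via the Kubernetes API, e.g. with the
# official Python client (requires in-cluster or kubeconfig credentials):
#   kubernetes.client.BatchV1Api().create_namespaced_job(
#       namespace="default", body=build_job_manifest("example/image:1.0", ["--n", "1"]))
manifest = build_job_manifest("example/image:1.0", ["--n", "1"])
```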
Now I might need to trigger that Job fairly often, perhaps as often as every 10 seconds during some periods. Is this acceptable or ill-advised? Is there anything I should watch out for?
In a normal PC terminal you can launch a batch file repeatedly without much trouble. But with Docker containers, and Kubernetes pods on top of that, there is overhead everywhere: the memory sandbox, assigning an IP, mounting volumes, the internal bookkeeping of the Kubernetes control plane and its database, the system logs for all of it, and so on. I feel like it could become an issue.
I know CronJobs can't be scheduled to run more often than once per minute.
Would I gain anything by turning that Python program into a long-running service with a web endpoint that triggers its work via an HTTP request instead? Or am I searching for a solution to something that isn't actually a problem?
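The alternative I have in mind would look something like this minimal stdlib sketch (endpoint path, payload shape, and `do_work` are hypothetical placeholders for the real allocation-heavy logic):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen


def do_work(params):
    # Placeholder for the actual memory-heavy work the Job currently does.
    return {"status": "done", "params": params}


class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON parameters from the request body.
        length = int(self.headers.get("Content-Length", 0))
        params = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(do_work(params)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet


# Bind an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# The long-running caller would then trigger work with a plain POST:
req = Request(f"http://127.0.0.1:{port}/run",
              data=json.dumps({"n": 1}).encode())
result = json.loads(urlopen(req).read())
```

In a real deployment this would sit behind a Deployment and a Service instead of spawning a fresh Job per invocation.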