First of all, I know how to scale out on Kubernetes, so my question is: is there any way to give a pod of FFmpeg four commands, each one used to convert my video to a different format (e.g. a 4K MKV video to mp4, x265, …)? Obviously, I want to run the 4 commands at the same time, meaning every container in the pod is responsible for converting the video to one specific format.
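Roughly what I have in mind is something like this (just a sketch; the image name, paths, and ffmpeg flags are placeholders, and the `emptyDir` stands in for real storage):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: transcode-4k
spec:
  restartPolicy: Never
  volumes:
    - name: media
      emptyDir: {}                      # placeholder; real storage comes up below
  containers:
    - name: to-mp4
      image: jrottenberg/ffmpeg:4.4     # any image with ffmpeg works
      command: ["ffmpeg", "-i", "/media/in.mkv", "-c:v", "libx264", "/media/out.mp4"]
      volumeMounts: [{ name: media, mountPath: /media }]
    - name: to-hevc
      image: jrottenberg/ffmpeg:4.4
      command: ["ffmpeg", "-i", "/media/in.mkv", "-c:v", "libx265", "/media/out-hevc.mp4"]
      volumeMounts: [{ name: media, mountPath: /media }]
    # ...two more containers for the remaining formats
```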
It really depends on “who”, or in which context an app, will “send” this.
Basically, the communication is up to you (unless ffmpeg has some built-in IPC mechanism, which I don’t know of).
The entity that needs to transcode with ffmpeg can, if that fits your workflow, create four Kubernetes Job objects to do it.
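A minimal Job sketch, one per target format (the image, the ffmpeg arguments, and the `media-pvc` claim name are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: transcode-mp4
spec:
  backoffLimit: 2                       # retry a failed transcode a couple of times
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: ffmpeg
          image: jrottenberg/ffmpeg:4.4 # placeholder ffmpeg image
          command: ["ffmpeg", "-i", "/media/in.mkv", "-c:v", "libx264", "/media/out.mp4"]
          volumeMounts:
            - name: media
              mountPath: /media
      volumes:
        - name: media
          persistentVolumeClaim:
            claimName: media-pvc        # placeholder claim; see the storage question below
```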
You may also want to communicate through a queue: have the “who” enqueue work items, and a separate Deployment dequeue them and do the ffmpeg processing. You can also use an HPA to scale the ffmpeg workers based on queue size or whatever metric is relevant for your use case. And, of course, there are probably plenty of other options (there are tons of ways for processes to communicate :))
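As a sketch, an HPA on a queue-length metric could look like the following. It assumes you run an external metrics adapter (e.g. prometheus-adapter or KEDA) that exposes the metric; the Deployment name and metric name here are made up:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ffmpeg-workers
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ffmpeg-worker                # hypothetical worker Deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: External
      external:
        metric:
          name: queue_messages_ready   # hypothetical metric from your metrics adapter
        target:
          type: AverageValue
          averageValue: "5"            # aim for ~5 queued items per worker
```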
Which leads to the next question: where do you want to store the output of ffmpeg? I don’t know if you have already considered that. Something like object storage (S3, a Google Cloud Storage bucket, etc.)?
First of all, thank you for your reply; it totally worked for me (queue + parallelism). As for storing the ffmpeg outputs: I used CephFS :))
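In case it helps anyone, the workers mount it through a PVC, roughly like this (the storage class name depends on how CephFS is provisioned in your cluster, e.g. via Rook):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-pvc
spec:
  accessModes: ["ReadWriteMany"]  # CephFS supports RWX, so all workers can share the volume
  storageClassName: cephfs        # placeholder; depends on your cluster's CephFS setup
  resources:
    requests:
      storage: 100Gi
```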