We want to use a Kubernetes cluster to process work items from a queue. Each work item takes from a few minutes to a few hours to complete (we cannot predict this upfront), and we want maximum isolation between work items. Typical volume for the queue is about 2 million new work items per day.
Our current plan is to have a queue storing the work items, with many pods running in parallel across many nodes. Each pod takes one work item, processes it (work items are independent of each other), and exits.
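To make the pattern concrete, here is a minimal sketch of the per-pod worker logic: take exactly one item, process it, then exit so the pod terminates. The queue client here is a hypothetical in-memory stand-in; in practice it would be a call into whatever queue service is used (SQS, RabbitMQ, etc.).

```python
# Hypothetical stand-in for a real queue client (e.g. SQS, RabbitMQ);
# the names and list-based queue are illustrative only.
def take_one_work_item(queue):
    """Pop a single work item, or None if the queue is empty."""
    return queue.pop(0) if queue else None

def process(item):
    """Placeholder for the minutes-to-hours processing step."""
    return f"processed {item}"

def run_worker(queue):
    """Entry point for a pod: handle exactly one item, then exit.

    Exiting after a single item is what gives each work item its
    own pod, and what drives the millions-of-pods-per-day rate.
    """
    item = take_one_work_item(queue)
    if item is None:
        return None  # nothing to do; pod exits immediately
    return process(item)

if __name__ == "__main__":
    work_queue = ["item-1", "item-2", "item-3"]
    print(run_worker(work_queue))
```

Each pod would run this once as its container entrypoint, so pod lifetime equals work-item duration.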
Since this means the Kubernetes cluster needs to create millions of pods per day, we are concerned this could pose an issue in the long run: close to one billion pods would be created each year, so could etcd (the database Kubernetes uses to track pods) run into performance issues? Is there a limit (or a recommended limit) on how many pods can be created per day?