Hi,
I am learning to deploy a multi-instance Kafka (community edition) cluster in a home lab, and I am facing some challenges related to how Kafka nodes must be configured to be aware of each other. I am using KRaft mode.
Firstly, each Kafka pod needs a node.id value, which must be an integer that is unique within the cluster, across both controllers and brokers.
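For example, what I currently have in each instance's server.properties looks roughly like this (the value 0 is only a sample; every instance needs a different number):

    # example only: this instance's unique id within the KRaft cluster
    node.id=0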
Secondly, the controller.quorum.voters variable must contain the full list of quorum voters, which can change as the number of nodes in the cluster changes.
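As far as I understand, each voter entry has the form <node.id>@<host>:<port>, so for a three-controller setup the property would look something like this (the hostnames and port are placeholders from my lab, not anything official):

    # one entry per controller: <node.id>@<host>:<controller-listener-port>
    controller.quorum.voters=0@kafka-0.kafka-headless:9093,1@kafka-1.kafka-headless:9093,2@kafka-2.kafka-headless:9093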
A k8s manifest only provides a few limited ways to inject data into an application (see the snippet below for the kind of mechanism I mean), but I have not figured out how to use them to scale my Kafka cluster dynamically. I feel like these configuration requirements make it impossible for Kafka nodes to dynamically scale up or down, right?
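To be concrete, the kind of injection I mean is something like the downward API exposing the pod name as an environment variable. This is only a sketch, and POD_NAME is just my own placeholder name, not anything Kafka requires:

    # fragment of a container spec in a StatefulSet/Deployment manifest
    env:
      - name: POD_NAME            # placeholder name of my choosing
        valueFrom:
          fieldRef:
            fieldPath: metadata.name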
I know a home lab is not supposed to be this complicated, and I would be better off using Confluent's products if needed. But I want to learn whether k8s deployment tools alone can help me solve these challenges.
Thanks.