Newbie here. I followed the Hello Minikube - Kubernetes tutorial and was able to make a cluster with multiple pods. What I want is that when I make a request to the node IP of the minikube cluster, the load is balanced across the pods by itself. I have seen this,
but I am not sure if that would work on a single node to distribute incoming requests across the pods by itself. Can someone explain whether this would do the job, or if I need to do something else?
@rata Thanks for your response. I read that, but I still have one question. Does the automatic load balancing happen even if I change --type to NodePort instead of LoadBalancer? Also, is it the IP of the NodePort that --type=LoadBalancer --url outputs when used with minikube, since there is no external load balancer and the IP returned is not a public IP? Can you also clarify how load balancing occurs for a single-node minikube cluster with multiple pods?
Moreover, how does minikube handle the case when there are more incoming requests than pods in the node? I tried making 5 parallel requests to a node with 2 pods. I got responses for only 2 requests, and the rest failed with a "Max retries exceeded with url" error. This suggests there is no buffer to hold more requests than there are pods in the node. I am using Python multiprocessing pools to create the processes and make the requests in parallel.
Can you please tell me if there is a better way to achieve this? My goal is to process a list of strings by spreading them across the pods in a node, but I can't find any way to do this in code except by sending a parallel request to the node for each string.
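For what it's worth, here is a minimal sketch of the fan-out pattern I mean, with concurrency capped at the pool size instead of firing every request at once. The echo server below is just a local stand-in so the snippet is self-contained; in a real cluster the URL would be whatever `minikube service <name> --url` prints.

```python
# Sketch: fan a list of strings out to a Service URL with bounded
# concurrency. The local echo server stands in for the Service; in a
# real cluster, SERVICE_URL would come from `minikube service ... --url`.
import http.server
import threading
import urllib.parse
import urllib.request
from multiprocessing.pool import ThreadPool


class EchoHandler(http.server.BaseHTTPRequestHandler):
    """Stand-in for a pod: echoes back the `s` query parameter, upper-cased."""
    def do_GET(self):
        query = urllib.parse.urlparse(self.path).query
        word = urllib.parse.parse_qs(query).get("s", [""])[0]
        body = word.upper().encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass


server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
SERVICE_URL = f"http://127.0.0.1:{server.server_address[1]}/"


def process_one(word):
    # One request per string; kube-proxy would pick a backend pod each time.
    url = SERVICE_URL + "?" + urllib.parse.urlencode({"s": word})
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.read().decode()


words = ["alpha", "beta", "gamma", "delta", "epsilon"]
# Cap in-flight requests (2 workers here, matching 2 pods) rather than
# launching one process per string all at once.
with ThreadPool(processes=2) as pool:
    results = pool.map(process_one, words)

server.shutdown()
print(results)  # ['ALPHA', 'BETA', 'GAMMA', 'DELTA', 'EPSILON']
```

`pool.map` preserves input order, so the results line up with the original list even though requests overlap.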
No matter the service type, it will try to direct traffic to one of the pods that match the selector. You can tune this a bit when using IPVS mode by specifying the type of scheduler to use. See this blog post for details: IPVS-Based In-Cluster Load Balancing Deep Dive
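In case it helps, the scheduler is set in the kube-proxy configuration (the `kube-proxy` ConfigMap in `kube-system` on most setups). A fragment along these lines selects IPVS mode with round-robin scheduling:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round-robin; other IPVS schedulers include lc, sh, dh
```

After changing it you need to restart the kube-proxy pods for the new mode to take effect, and the node needs the IPVS kernel modules loaded.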
OK, so I think I am getting the "Max retries exceeded with url" error because I am making too many parallel requests to the minikube node (see https://stackoverflow.com/a/24899222/7343529 for more about this issue).
But then how can I process 50 strings using 5 pods in my single node cluster?
So I am able to make multiple requests without that error by adding some delay, but some of my requests still fail (500 status code) and never reach any pod. I read the Services document, which says: "By default, kube-proxy in iptables mode chooses a backend at random… If kube-proxy is running in iptables mode and the first Pod that's selected does not respond, the connection fails. This is different from userspace mode: in that scenario, kube-proxy would detect that the connection to the first Pod had failed and would automatically retry with a different backend Pod."
If kube-proxy does not handle retries itself, how can I deal with the requests that fail? Retrying on the client side does not seem right, because the load should be balanced on the server side.
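Until there is a server-side answer, one workaround is a small client-side retry wrapper, since iptables-mode kube-proxy will not retry a failed backend for you. This is only a sketch: the "flaky" server below is a local stand-in that fails its first two requests with a 500, simulating a backend pod that does not respond, so the snippet runs on its own.

```python
# Sketch: client-side retries with exponential backoff, because kube-proxy
# in iptables mode does not retry a failed backend. The flaky local server
# stands in for a Service whose first-picked pod fails.
import http.server
import threading
import time
import urllib.error
import urllib.request


def get_with_retries(url, attempts=3, backoff=0.1):
    """GET url, retrying connection errors and 5xx responses."""
    last_err = None
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.read().decode()
        except urllib.error.HTTPError as err:
            if err.code < 500:          # a 4xx is a real answer; don't retry
                raise
            last_err = err
        except urllib.error.URLError as err:
            last_err = err              # e.g. connection refused / timed out
        time.sleep(backoff * (2 ** attempt))  # exponential backoff
    raise last_err


class FlakyHandler(http.server.BaseHTTPRequestHandler):
    """Returns 500 for the first two requests, then succeeds."""
    calls = 0

    def do_GET(self):
        type(self).calls += 1
        if type(self).calls <= 2:
            self.send_error(500)
            return
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass


server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), FlakyHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

result = get_with_retries(url)  # attempts 1 and 2 get 500, attempt 3 succeeds
server.shutdown()
print(result)  # ok
```

Another option, if you want retries handled off the client entirely, is to put something in front of the Service that retries for you (an Ingress controller or a service mesh sidecar), but that is beyond a plain kube-proxy setup.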