Running Multiple Instances on Same Node

Apologies in advance for a newbie question, but I am very curious. I currently have one worker-node VM in my Kubernetes cluster (installed using kubeadm), and I would like to deploy 5 instances of the same application Docker image using a Deployment. My concern is that all 5 instances will run on the same port, leaving 4 of them unable to start because the port is already in use. Is there a way to solve this, or do I need to build 4 different Docker images of the same application, each using a different port?

Cluster information:

Kubernetes version: v1.18.6
Cloud being used: bare-metal
Installation method: kubeadm
Host OS: Ubuntu 18.04
CNI and version: calico
CRI and version: Docker 19.03.6


Please look at this:

Pods all have distinct IPs - what you are worried about suggests some confusion about how ports work in Kubernetes.
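For illustration, a minimal Deployment sketch like this (the name, image, and port are hypothetical placeholders, not from the original post) runs 5 replicas that all declare the same containerPort without conflict, because each Pod gets its own IP:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # hypothetical name
spec:
  replicas: 5
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0      # hypothetical image
        ports:
        - containerPort: 8080  # same port in every replica is fine; each Pod has its own IP
```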


Ah yes, you are right. I used to get distinct IPs for all my Pod replicas when applying my Deployment file, but I ran into issues because my network is limited to a specific subnet, and when my pods get IPs from a different, random subnet I cannot access them. I came up with a temporary solution by setting spec.hostNetwork: true in my Deployment file, but as you can see I then run into more problems with ports already in use when running multiple instances on the same node. Can you please suggest a solution for this? Please note that during my kubeadm init installation I did specify my pod network subnet with --pod-network-cidr=192.168.x.x/xx. P.S.: Congratulations on your anniversary!


After doing some research on Calico networking for my Kubernetes cluster, I managed to edit the calico-node DaemonSet by changing the CALICO_IPV4POOL_CIDR parameter to match my IP address subnet, then restarted the pods on each virtual machine. Please take a look at my environment and let me know if there is anything else I need to do.
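For reference, the relevant change in the calico-node container's env section looked roughly like this (the CIDR value below is a placeholder, written the same way I elided it above):

```yaml
# kubectl -n kube-system edit daemonset calico-node
        env:
        - name: CALICO_IPV4POOL_CIDR
          value: "192.168.x.x/xx"   # placeholder: set to the actual pod network subnet
```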

kubectl get pod --all-namespaces
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
kube-system            calico-kube-controllers-5fbfc9dfb6-7vjj9     1/1     Running   0          27m
kube-system            calico-node-4xkfq                            0/1     Running   0          8m10s
kube-system            calico-node-gr25h                            0/1     Running   0          8m5s
kube-system            coredns-66bff467f8-fs9l6                     1/1     Running   0          41m
kube-system            coredns-66bff467f8-h8r9r                     1/1     Running   0          41m
kube-system            etcd-ubuntu                                  1/1     Running   0          41m
kube-system            kube-apiserver-ubuntu                        1/1     Running   0          41m
kube-system            kube-controller-manager-ubuntu               1/1     Running   0          41m
kube-system            kube-proxy-2wb47                             1/1     Running   0          41m
kube-system            kube-proxy-zs8p7                             1/1     Running   0          15m
kube-system            kube-scheduler-ubuntu                        1/1     Running   0          41m
kubernetes-dashboard   dashboard-metrics-scraper-6b4884c9d5-6f8cf   1/1     Running   0          26m
kubernetes-dashboard   kubernetes-dashboard-98f9b854b-w6cgb         1/1     Running   0          19m

Events for kubectl -n kube-system describe pod calico-node-gr25h

Events:
  Type     Reason     Age                    From                  Message
  ----     ------     ----                   ----                  -------
  Normal   Scheduled  8m45s                  default-scheduler     Successfully assigned kube-system/calico-node-gr25h to workernode2
  Normal   Pulled     8m44s                  kubelet, workernode2  Container image "calico/cni:v3.9.6" already present on machine
  Normal   Created    8m44s                  kubelet, workernode2  Created container upgrade-ipam
  Normal   Started    8m44s                  kubelet, workernode2  Started container upgrade-ipam
  Normal   Started    8m43s                  kubelet, workernode2  Started container install-cni
  Normal   Pulled     8m43s                  kubelet, workernode2  Container image "calico/cni:v3.9.6" already present on machine
  Normal   Created    8m43s                  kubelet, workernode2  Created container install-cni
  Normal   Pulled     8m42s                  kubelet, workernode2  Container image "calico/pod2daemon-flexvol:v3.9.6" already present on machine
  Normal   Created    8m42s                  kubelet, workernode2  Created container flexvol-driver
  Normal   Started    8m42s                  kubelet, workernode2  Started container flexvol-driver
  Normal   Started    8m41s                  kubelet, workernode2  Started container calico-node
  Normal   Pulled     8m41s                  kubelet, workernode2  Container image "calico/node:v3.9.6" already present on machine
  Normal   Created    8m41s                  kubelet, workernode2  Created container calico-node
  Warning  Unhealthy  8m39s                  kubelet, workernode2  Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/bird/bird.ctl: connect: no such file or directory
  Warning  Unhealthy  8m29s                  kubelet, workernode2  Readiness probe failed: calico/node is not ready: BIRD is not ready: BGP not established with 172.18.0.12020-08-14 11:15:36.748 [INFO][171] health.go 156: Number of node(s) with BGP peering established = 0
  Warning  Unhealthy  8m19s                  kubelet, workernode2  Readiness probe failed: calico/node is not ready: BIRD is not ready: BGP not established with 172.18.0.12020-08-14 11:15:46.744 [INFO][204] health.go 156: Number of node(s) with BGP peering established = 0
  Warning  Unhealthy  8m9s                   kubelet, workernode2  Readiness probe failed: calico/node is not ready: BIRD is not ready: BGP not established with 172.18.0.12020-08-14 11:15:56.735 [INFO][230] health.go 156: Number of node(s) with BGP peering established = 0
  Warning  Unhealthy  7m59s                  kubelet, workernode2  Readiness probe failed: calico/node is not ready: BIRD is not ready: BGP not established with 172.18.0.12020-08-14 11:16:06.732 [INFO][264] health.go 156: Number of node(s) with BGP peering established = 0
  Warning  Unhealthy  7m49s                  kubelet, workernode2  Readiness probe failed: calico/node is not ready: BIRD is not ready: BGP not established with 172.18.0.12020-08-14 11:16:16.757 [INFO][288] health.go 156: Number of node(s) with BGP peering established = 0
  Warning  Unhealthy  7m39s                  kubelet, workernode2  Readiness probe failed: calico/node is not ready: BIRD is not ready: BGP not established with 172.18.0.12020-08-14 11:16:26.733 [INFO][312] health.go 156: Number of node(s) with BGP peering established = 0
  Warning  Unhealthy  7m29s                  kubelet, workernode2  Readiness probe failed: calico/node is not ready: BIRD is not ready: BGP not established with 172.18.0.12020-08-14 11:16:36.717 [INFO][344] health.go 156: Number of node(s) with BGP peering established = 0
  Warning  Unhealthy  7m19s                  kubelet, workernode2  Readiness probe failed: calico/node is not ready: BIRD is not ready: BGP not established with 172.18.0.12020-08-14 11:16:46.781 [INFO][369] health.go 156: Number of node(s) with BGP peering established = 0
  Warning  Unhealthy  3m39s (x22 over 7m9s)  kubelet, workernode2  (combined from similar events): Readiness probe failed: calico/node is not ready: BIRD is not ready: BGP not established with 172.18.0.12020-08-14 11:20:26.758 [INFO][949] health.go 156: Number of node(s) with BGP peering established = 0

Things got overly complicated very quickly here :thinking:

When you say 5 instances of your application, do you mean 5 replicas with the exact same configuration, with requests load-balanced across them?
Or are you trying to deploy 5 instances, each running its own configuration and accessed individually?

Kind regards,
Stephen

My apologies, I went off topic from my original post, but thanks to @thockin I realized that I had a networking issue in my Kubernetes cluster, and I figured I'd benefit from his experience while trying to solve it.
To answer your question: yes, I have a Deployment creating 5 replicas, but because I added spec.hostNetwork: true, all my instances get the same IP as the worker node. This means I don't benefit from distinct IPs, and in addition I cannot run more than one instance on the same machine. If I remove that setting, my Pods get IP addresses outside my subnet and I cannot access them. This post here is very similar to my case.
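Roughly, my pod template spec currently looks like this (the container name, image, and port are placeholders):

```yaml
    spec:
      hostNetwork: true          # pods share the node's network namespace and IP
      containers:
      - name: my-app             # placeholder
        image: my-app:1.0        # placeholder
        ports:
        - containerPort: 8080    # with hostNetwork, this binds directly on the node,
                                 # so a second replica on the same node fails to start
```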

The pods shouldn’t be directly accessible on your subnet, they should be on the pod network inside the cluster.

If you want access, you should expose them using a Service, for example a NodePort, so you can hit <WORKER_NODE>:<NODEPORT> and Kubernetes will route requests to the pods.
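As a sketch (the name, labels, and port numbers here are just examples and would need to match your Deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # example name
spec:
  type: NodePort
  selector:
    app: my-app           # must match your pods' labels
  ports:
  - port: 80              # Service port inside the cluster
    targetPort: 8080      # containerPort of your pods
    nodePort: 30080       # optional; must be in 30000-32767, else auto-assigned
```

After applying this, a request to <WORKER_NODE>:30080 would be load-balanced across the 5 replicas.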

Kind regards,
Stephen

Yes, you are right, I can solve it by using a Service of type NodePort, and I just tried it. Thank you so much stephendotcarter.

I have something for you to read.

https://speakerdeck.com/thockin/kubernetes-and-networks-why-is-this-so-dang-hard
https://speakerdeck.com/thockin/bringing-traffic-into-your-kubernetes-cluster

It’s ALMOST like lots of people struggle with this! :slight_smile:

Really though, you have to decide what network model you are using. If you use a “flat” model then your pods get IPs from the “real” subnet. If you use an “island” model, you have to figure out how to get traffic onto the island.

hostNetwork is designed as an escape hatch rather than the normal mode, precisely because of the problem you hit. If you need ports that are known a priori, hostNetwork is a bad choice for multiple replicas (unless you use SO_REUSEPORT, which I don't think is what you want).