I have a deployment with a replica count greater than one and a Service of type NodePort. The goal is to be able to scale the number of pods up and down at run time. So far I can see that each pod gets its own hostname; however, they don’t see each other. The question is: how can I make the pods see each other?
I’m using AWS EKS, and for the networking I’m using the default configuration.
One thing to clarify: the service can talk to other services as expected, but the problem I’m having is that the pods inside this service (or deployment) cannot talk to each other.
What does this mean specifically? If you ping from one pod to another pod by IP address, does it not work?
I just changed from a Deployment to a StatefulSet and created two separate services, but I still have the same problem: I can’t reach other pods by hostname.
Interestingly enough, I can ping by IP address from one pod to another.
Why should replicated pods “see” each other? Which hostname do you use to check this? I mean, in pod A, how do you try to connect to pod B, and using which hostname? And another thing: do you have a network policy in place?
Pods don’t generally have DNS-resolvable names. That’s what Services are for. There are some affordances for this, but they are the exception rather than the rule. Why do you want to resolve an individual pod?
In my pod I execute the hostname command and pass it to NiFi. Each NiFi instance then talks to ZooKeeper, announcing itself, and at the same time each NiFi instance reads the other instances’ hostnames from ZooKeeper.
Then each NiFi instance fails because it cannot reach the other instances.
If you set the pod hostname and subdomain AND that subdomain is the name of a headless Service, you will get resolvable names. Be aware, though, that in a ReplicaSet all replicas get the same hostname, so you need a StatefulSet or something else.
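For example, here is a minimal sketch of that setup (the names my-app, my-headless and the nginx image are placeholders, not anything from this thread):

```yaml
# Headless Service (clusterIP: None) that governs the pods' DNS names.
apiVersion: v1
kind: Service
metadata:
  name: my-headless
spec:
  clusterIP: None           # headless: no virtual IP, DNS returns the pod IPs
  selector:
    app: my-app
  ports:
    - port: 8080
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-headless  # must match the headless Service name
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: nginx      # placeholder image
          ports:
            - containerPort: 8080
```

With this, each replica gets a stable, resolvable name of the form my-app-0.my-headless.&lt;namespace&gt;.svc.cluster.local, because a StatefulSet sets each pod’s hostname to the pod name and its subdomain to serviceName.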
You can’t discover other pod names unless you ask the apiserver or something like ZK. What kind of names do you get from ZK? Do you get the IP addresses of the other pods?
A NiFi instance gets whatever the other NiFi instances announce themselves as to ZK. So the flow is like this:
I have three pods, and their hostnames are nifi-worker-1, nifi-worker-2 and nifi-worker-3. All of them announce themselves to ZK using those names. Then nifi-worker-1 gets another worker’s hostname, in this case nifi-worker-3, and nifi-worker-1 tries to communicate with nifi-worker-3, but it fails because it cannot reach it.
If I do kubectl describe pod/nifi-worker-x I can see that each pod gets its own IP address. Then, if I log in to any of the worker pods and ping the IP address of one of the other workers, it works!
I think we are on the right track; however, I’m a little confused about how the FQDN is built for a Kubernetes pod. I set the subdomain to match the name of the headless service, and then I found an article about how the FQDN is formed, but that made me more confused than before.
This is the information that I have:
Namespace: bigdata
Headless Service: nifi-domain
StatefulSet: nifi-worker
Pod: nifi-worker-{0…3}
subdomain: nifi-domain
How can I build a resolvable name?
The resolvable name is X.&lt;subdomain&gt;.&lt;namespace&gt;.svc.cluster.local, where X = the pod hostname (not the pod name; there is a separate hostname field in the pod spec).
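With the values you listed (namespace bigdata, headless Service nifi-domain) and the default cluster.local cluster domain, a sketch of a bare Pod with the hostname and subdomain fields set explicitly would look like this (the image is just a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nifi-worker-0
  namespace: bigdata
spec:
  hostname: nifi-worker-0   # the X above: the hostname field, not the pod name
  subdomain: nifi-domain    # must match the headless Service name
  containers:
    - name: nifi
      image: apache/nifi    # placeholder image
# Resulting record: nifi-worker-0.nifi-domain.bigdata.svc.cluster.local
```

When the pods are managed by a StatefulSet whose serviceName is nifi-domain, you don’t have to set these fields yourself: each replica’s hostname is the pod name and its subdomain is the serviceName, so nifi-worker-0.nifi-domain.bigdata.svc.cluster.local should resolve as long as nifi-domain is a headless Service.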