Is it possible for pods to talk to each other in Kubernetes?

Hello World

I have a Deployment with more than one replica and a Service of type NodePort. The goal is to be able to scale the number of pods up/down at run time. So far I can see that each pod gets its own hostname; however, they don’t see each other. The question is: how can I make the pods see each other?

What environment are you using? A cloud? Bare metal? DIY? What are you doing for networking?

Pods are supposed to be able to reach each other, that is the fundamental requirement. If that is broken, everything else will be broken, too.

Hi @thockin

I’m using AWS EKS; for the networking I’m using the default configuration.

One thing to clarify: the service can talk to other services as expected, but the problem I’m having is that the pods inside this service (or deployment) cannot talk to each other.


jpmolinamatute

    October 18

> I’m using AWS EKS; for the networking I’m using the default configuration. One thing to clarify: the service can talk to other services as expected, but the pods inside this service (or deployment) cannot talk to each other.

What does this mean SPECIFICALLY? If you ping from one pod to another pod by IP address - it doesn’t work?

I just changed from a Deployment to a StatefulSet and created two separate services, but I still have the same problem: I can’t reach the other pods by hostname.

Interestingly enough, I can ping by IP address from one pod to another.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nifi
  labels:
    app: nifi
spec:
  serviceName: nifi
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: nifi-node
      app.kubernetes.io/instance: nifi-instance
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nifi-node
        app.kubernetes.io/instance: nifi-instance
    spec:
      containers:
        - name: nifi
          imagePullPolicy: Always
          image: xxxxxxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/nifi
          ports:
            - containerPort: 8080
              name: nifi-http
            - containerPort: 8082
              name: nifi-node
---
apiVersion: v1
kind: Service
metadata:
  name: nifi-web
  labels:
    app: nifi
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: nifi-node
    app.kubernetes.io/instance: nifi-instance
  ports:
    - name: http
      protocol: TCP
      port: 8080
      targetPort: nifi-http
---
apiVersion: v1
kind: Service
metadata:
  name: nifi
  labels:
    app: nifi
spec:
  clusterIP: None
  selector:
    app.kubernetes.io/name: nifi-node
    app.kubernetes.io/instance: nifi-instance
  ports:
    - name: tcp
      protocol: TCP
      port: 8082
      targetPort: nifi-node

Why should replicated pods “see” each other? Which hostname do you use to check this? I mean, from pod A, which hostname do you use to try to connect to pod B? And one more thing: do you have a network policy in place?

Pods don’t generally have DNS-resolvable names. That’s what Services are for. There are some affordances for this, but they are the exception rather than the rule. Why do you want to resolve an individual pod?

Hi @acim

I’m running a zero-master cluster.

In my pod I execute the hostname command and pass the result to NiFi. Each NiFi instance then talks to ZooKeeper to announce itself, and at the same time each NiFi instance reads the other instances’ hostnames from ZooKeeper.

Each instance of NiFi then fails because it cannot reach the other instances.

I don’t have a network policy.


Is there a way to resolve each hostname from within each pod?

I’m running a zero-master cluster, and I want to be able to scale the “workers” up and down.

If you set the pod hostname and subdomain AND that subdomain is the name of a headless Service, you will get resolvable names. But be aware that in a ReplicaSet all replicas would get the same name, so you need a StatefulSet or something else.
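As a rough sketch of what that looks like (all names below are made up for illustration, not from this thread): a bare Pod becomes resolvable when its hostname and subdomain are set and a headless Service named after that subdomain selects it:

```yaml
# Headless Service whose name matches the pods' subdomain
apiVersion: v1
kind: Service
metadata:
  name: workers
spec:
  clusterIP: None          # headless: DNS returns pod IPs directly
  selector:
    app: demo
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    app: demo
spec:
  hostname: worker-a       # the DNS label for this pod
  subdomain: workers       # must equal the headless Service name
  containers:
    - name: main
      image: busybox
      command: ["sleep", "3600"]
```

With this in place, worker-a.workers.<namespace>.svc.cluster.local resolves to the pod’s IP. A StatefulSet whose serviceName points at the headless Service does this wiring for you, giving each replica a stable ordinal hostname.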

You can’t discover other pods’ names unless you ask the apiserver or something like ZooKeeper. What kind of names do you get from ZK? Do you get the other pods’ IP addresses?

Hi @thockin

I think we are on the right track; however, I’m a little confused about how a pod’s FQDN is formed in Kubernetes. I set the subdomain to match the name of the headless Service, and then I found this article about how the FQDN is formed, but it made me more confused than before.

This is the information that I have:

Namespace: bigdata
Headless Service: nifi-domain
StatefulSet: nifi-worker
Pod: nifi-worker-{0…3}
subdomain: nifi-domain

How can I build a resolvable name?

Is it nifi-domain.bigdata.svc.nifi-domain, or is it nifi-domain.bigdata.svc.cluster.local?

Thanks for the help! :)

Hi @acim

A NiFi instance gets whatever the other NiFi instances announce about themselves to ZK. So the flow is like this:

I have three pods, and their hostnames are nifi-worker-1, nifi-worker-2, and nifi-worker-3. All of them announce themselves to ZK using those names. Then nifi-worker-1 gets a worker hostname, in this case nifi-worker-3, and tries to communicate with nifi-worker-3, but it fails because it cannot reach it.

If I do kubectl describe pod/nifi-worker-x, I can see that each pod gets its own IP address. If I then log into any of the worker pods and ping the IP address of another worker, it works!

https://v1-13.docs.kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods

If this is not going to work for you, I suggest reading IP addresses from ZooKeeper instead of hostnames. ZK must have the IP addresses as well.


jpmolinamatute

    October 19

> I’m a little confused about how a pod’s FQDN is formed in Kubernetes. This is the information I have: Namespace: bigdata; Headless Service: nifi-domain; StatefulSet: nifi-worker; Pod: nifi-worker-{0…3}; subdomain: nifi-domain. How can I build a resolvable name?

X = Pod hostname (not the pod name - there is a separate hostname field)

Y = Pod subdomain

Z = Namespace name

X.Y.Z.svc.cluster.local
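Plugged into the names from this thread (StatefulSet nifi-worker, headless Service / subdomain nifi-domain, namespace bigdata), here is a minimal sketch of the wiring; the crucial part is that serviceName equals the headless Service’s name (the image is a placeholder, not from the original manifest):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nifi-worker
  namespace: bigdata
spec:
  serviceName: nifi-domain      # must match the headless Service below
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: nifi-node
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nifi-node
    spec:
      containers:
        - name: nifi
          image: apache/nifi    # placeholder image for illustration
---
apiVersion: v1
kind: Service
metadata:
  name: nifi-domain
  namespace: bigdata
spec:
  clusterIP: None               # headless: DNS returns pod IPs
  selector:
    app.kubernetes.io/name: nifi-node
```

Each replica is then resolvable as nifi-worker-0.nifi-domain.bigdata.svc.cluster.local, nifi-worker-1.nifi-domain.bigdata.svc.cluster.local, and so on.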


It worked!! Thanks!!! :)