Cluster information:
Kubernetes version: 1.21.2
Cloud being used: bare-metal
Installation method: Ansible playbooks
Host OS: Debian 10
CNI and version: Calico 3.19.1
CRI and version:
Ingress controller: Nginx
Firewall on the node hosts: allow list
I’m currently having a problem with a Deployment and can’t figure out the cause. The endpoints of a Service are not populated automatically, even though the selector labels match.
Furthermore, Ingress routes in a different namespace of the same cluster, including the automatic endpoint assignment, already work fine, so I conclude that there can’t be a problem with my basic setup. In addition, if I manually enter the appropriate pod IP into the Endpoints object, the Service works as expected.
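For illustration, the manual workaround looks roughly like this (a sketch: the pod IP is taken from the output further below, and I’m assuming the containers actually listen on 9000 and 9187):

apiVersion: v1
kind: Endpoints
metadata:
  # Must carry the same name and namespace as the Service
  name: sonarqube-sv
  namespace: development
subsets:
  - addresses:
      - ip: 192.168.202.213   # current pod IP; changes whenever the pod is recreated
    ports:
      - name: http
        port: 9000            # assumed container port
        protocol: TCP
      - name: metrics
        port: 9187            # assumed container port
        protocol: TCP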
Deployment setup:
- Tool: Helm
Deployment Helm chart:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "{{ .Values.basic.name }}-de"
  namespace: "{{ .Values.basic.namespace }}"
  labels:
    app.kubernetes.io/name: "{{ .Values.basic.name }}"
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: "{{ .Values.basic.name }}"
  template:
    metadata:
      labels:
        app.kubernetes.io/name: "{{ .Values.basic.name }}"
    spec:
      containers:
        - name: "{{ .Values.basic.database.name }}"
          image: "{{ .Values.docker.database.image }}:{{ .Values.docker.database.tag }}"
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
          securityContext:
            runAsUser: 500
          env:
            - name: LOGGING_REDIS_HOST
              value: "144.91.86.56"
            - name: LOGGING_REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: "{{ .Values.basic.database.name }}-sc"
                  key: LOGGING_REDIS_PASSWORD
            - name: POSTGRES_INITDB_NAME
              value: "{{ .Values.config.database.POSTGRES_INITDB_NAME }}"
            - name: POSTGRES_INITDB_ROOT_USERNAME
              value: "{{ .Values.config.database.POSTGRES_INITDB_ROOT_USERNAME }}"
            - name: POSTGRES_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: "{{ .Values.basic.database.name }}-sc"
                  key: POSTGRES_INITDB_ROOT_PASSWORD
            - name: POSTGRES_INITDB_MONITORING_USERNAME
              value: "{{ .Values.config.database.POSTGRES_INITDB_MONITORING_USERNAME }}"
            - name: POSTGRES_INITDB_MONITORING_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: "{{ .Values.basic.database.name }}-sc"
                  key: POSTGRES_INITDB_MONITORING_PASSWORD
            - name: POSTGRES_INITDB_USER_USERNAME
              value: "{{ .Values.config.database.POSTGRES_INITDB_USER_USERNAME }}"
            - name: POSTGRES_INITDB_USER_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: "{{ .Values.basic.database.name }}-sc"
                  key: POSTGRES_INITDB_USER_PASSWORD
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /storage
              name: "{{ .Values.basic.database.name }}-storage"
              readOnly: false
        - name: "{{ .Values.basic.app.name }}"
          image: "{{ .Values.docker.app.image }}:{{ .Values.docker.app.tag }}"
          resources:
            requests:
              memory: "1024Mi"
              cpu: "250m"
            limits:
              memory: "2048Mi"
              cpu: "500m"
          securityContext:
            runAsUser: 500
          env:
            - name: LOGGING_REDIS_HOST
              value: "144.91.86.56"
            - name: LOGGING_REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: "{{ .Values.basic.app.name }}-sc"
                  key: LOGGING_REDIS_PASSWORD
            - name: JDBC_USER
              value: "{{ .Values.config.database.POSTGRES_INITDB_USER_USERNAME }}"
            - name: JDBC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: "{{ .Values.basic.app.name }}-sc"
                  key: JDBC_PASSWORD
            - name: JDBC_URL
              value: "{{ .Values.config.app.JDBC_URL }}"
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /storage
              name: "{{ .Values.basic.app.name }}-storage"
              readOnly: false
      imagePullSecrets:
        - name: "docker-registry-{{ .Values.basic.namespace }}-sc"
      volumes:
        - name: "{{ .Values.basic.database.name }}-storage"
          persistentVolumeClaim:
            claimName: "gluster-{{ .Values.basic.database.name }}-{{ .Values.basic.namespace }}-pvc"
        - name: "{{ .Values.basic.app.name }}-storage"
          persistentVolumeClaim:
            claimName: "gluster-{{ .Values.basic.app.name }}-{{ .Values.basic.namespace }}-pvc"
      securityContext:
        fsGroup: 500
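To rule out a templating problem, the rendered manifests can be checked locally; a minimal sketch, assuming the chart directory is ./sonarqube and the values file is values.yaml:

# Render the chart without installing it and inspect the label/selector pairs
helm template sonarqube ./sonarqube --namespace development --values values.yaml \
  | grep -B 2 -A 2 "app.kubernetes.io/name"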
Service Helm chart:
apiVersion: v1
kind: Service
metadata:
  name: "{{ .Values.basic.name }}-sv"
  namespace: "{{ .Values.basic.namespace }}"
  labels:
    app.kubernetes.io/name: "{{ .Values.basic.name }}"
spec:
  type: ClusterIP
  ports:
    - port: 9187
      targetPort: http
      protocol: TCP
      name: metrics
    - port: 9000
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: "{{ .Values.basic.name }}"
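One thing I notice while writing this up: the Service references targetPort: http by name, but neither container in the Deployment above declares a ports section. If a named targetPort has to match a named containerPort in the pod spec, the containers would presumably need something like this (not currently in my chart; the port numbers are assumptions):

          ports:
            - name: http
              containerPort: 9000   # assumed app port
              protocol: TCP
            - name: metrics
              containerPort: 9187   # assumed metrics port
              protocol: TCP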
Now the output from my K8s cluster:
kubectl get pods -l app.kubernetes.io/name=sonarqube -n development
NAME READY STATUS RESTARTS AGE
sonarqube-de-b47bd9f75-tsbxc 2/2 Running 0 2d11h
kubectl get endpoints sonarqube-sv -n development
NAME ENDPOINTS AGE
sonarqube-sv <none> 3d10h
kubectl get pods -l app.kubernetes.io/name=sonarqube -n development --show-labels
NAME READY STATUS RESTARTS AGE LABELS
sonarqube-de-b47bd9f75-tsbxc 2/2 Running 0 3d11h app.kubernetes.io/name=sonarqube,pod-template-hash=b47bd9f75
kubectl get endpointslices -n development --show-labels
NAME ADDRESSTYPE PORTS ENDPOINTS AGE LABELS
sonarqube-sv-fgsg2 IPv4 <unset> 192.168.202.213 3d11h endpointslice.kubernetes.io/managed-by=endpointslice-controller.k8s.io,kubernetes.io/service-name=sonarqube-sv
kubectl get deployment sonarqube-de -n development --show-labels
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
sonarqube-de 1/1 1 1 4d10h app.kubernetes.io/managed-by=Helm,app.kubernetes.io/name=sonarqube
I’ve already checked the EndpointSlice and found that the pod IP is listed there, but the PORTS column shows <unset>. If I add the port manually, the reconciler resets the value.
If you need further information, please feel free to ask. Is there a way to enable a debug flag to get more information about the failing process? I hope we can solve the issue!
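In the meantime, I would try something along these lines (a sketch; the kube-controller-manager pod name depends on the control-plane node name):

# Look for warnings/events on the Service and its Endpoints
kubectl describe service sonarqube-sv -n development
kubectl describe endpoints sonarqube-sv -n development

# The Endpoints/EndpointSlice reconcilers run in kube-controller-manager;
# raising its klog verbosity (e.g. --v=4) should log the sync decisions.
kubectl -n kube-system logs kube-controller-manager-<control-plane-node> | grep -i endpoint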