I'm new to Kubernetes. I have:
- installed microk8s on three Raspberry Pi 4 computers (blackpi, bluepi, redpi)
- built a little hello-world REST application as a Docker image in a local registry
- deployed it as a service with 3 replicas
Only one of these replicas is ever used; the other two are apparently not being reached correctly. Here are my config files:
joerg@blackpi:~/build$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app.kubernetes.io/name: hellow
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: hellow
  template:
    metadata:
      labels:
        app.kubernetes.io/name: hellow
    spec:
      containers:
      - name: hellow
        image: blackpi:5000/hellow:registry
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
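To rule out a label/selector mismatch on the Deployment side, one diagnostic I can run (just a sketch, using the label from the file above) is to list the pods by that same selector:

```shell
# List only the pods matched by the label both selectors use;
# all three replicas should appear, one per node.
microk8s kubectl get pods -l app.kubernetes.io/name=hellow -o wide
```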
joerg@blackpi:~/build$ cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-service
  labels:
    app.kubernetes.io/name: hellow
spec:
  selector:
    app.kubernetes.io/name: hellow
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
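To confirm the Service actually resolves to all three pods, I can inspect its endpoints (diagnostic only; the name comes from the file above):

```shell
# Every pod IP should show up once under "Endpoints";
# a missing pod here would point to a selector or readiness problem.
microk8s kubectl describe service hello-service
microk8s kubectl get endpoints hello-service -o yaml
```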
joerg@blackpi:~/build$ cat ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-service
            port:
              number: 8080
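And to check which backend pods the ingress controller has picked up for this rule (again just a diagnostic sketch):

```shell
# The backends line should list hello-service:8080
# together with all three pod IPs from the endpoints.
microk8s kubectl describe ingress hello-ingress
```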
All three files are applied:
joerg@blackpi:~/build$ microk8s kubectl apply -f deployment.yaml
deployment.apps/hello-deployment created
joerg@blackpi:~/build$ microk8s kubectl apply -f service.yaml
service/hello-service created
joerg@blackpi:~/build$ microk8s kubectl apply -f ingress.yaml
ingress.networking.k8s.io/hello-ingress created
This is what kubectl get says:
joerg@blackpi:~/build$ microk8s kubectl get nodes,pods,services,ingresses,endpoints,deployments,replicasets -o wide
NAME           STATUS   ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
node/redpi     Ready    <none>   31d   v1.27.6   192.168.178.110   <none>        Ubuntu 22.04.3 LTS   5.15.0-1041-raspi   containerd://1.6.15
node/blackpi   Ready    <none>   31d   v1.27.6   192.168.178.109   <none>        Ubuntu 22.04.3 LTS   5.15.0-1040-raspi   containerd://1.6.15
node/bluepi    Ready    <none>   31d   v1.27.6   192.168.178.111   <none>        Ubuntu 22.04.3 LTS   5.15.0-1041-raspi   containerd://1.6.15

NAME                                    READY   STATUS    RESTARTS   AGE   IP             NODE      NOMINATED NODE   READINESS GATES
pod/hello-deployment-687697cc54-cb9lh   1/1     Running   0          28m   10.1.224.110   bluepi    <none>           <none>
pod/hello-deployment-687697cc54-lms78   1/1     Running   0          28m   10.1.175.187   redpi     <none>           <none>
pod/hello-deployment-687697cc54-sz85h   1/1     Running   0          28m   10.1.239.148   blackpi   <none>           <none>

NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE   SELECTOR
service/kubernetes      ClusterIP   10.152.183.1     <none>        443/TCP    31d   <none>
service/hello-service   ClusterIP   10.152.183.125   <none>        8080/TCP   28m   app.kubernetes.io/name=hellow

NAME                                      CLASS   HOSTS   ADDRESS     PORTS   AGE
ingress.networking.k8s.io/hello-ingress   nginx   *       127.0.0.1   80      28m

NAME                      ENDPOINTS                                               AGE
endpoints/kubernetes      192.168.178.109:16443                                   31d
endpoints/hello-service   10.1.175.187:8080,10.1.224.110:8080,10.1.239.148:8080   28m

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                         SELECTOR
deployment.apps/hello-deployment   3/3     3            3           28m   hellow       blackpi:5000/hellow:registry   app.kubernetes.io/name=hellow

NAME                                          DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                         SELECTOR
replicaset.apps/hello-deployment-687697cc54   3         3         3       28m   hellow       blackpi:5000/hellow:registry   app.kubernetes.io/name=hellow,pod-template-hash=687697cc54
I now try to consume the service:
joerg@blackpi:~/build$ time wget localhost:80/greeting -O - -o /dev/null
{"id":18,"content":"Hello, World!"}
real 0m10.034s
user 0m0.002s
sys 0m0.015s
Repeating the request, I consistently see about 10 seconds of real time. Could this delay be caused by the service trying to use replicas it somehow cannot reach? Only ever the one replica (the one on blackpi) actually serves the requests. Where is the mistake in my configuration?
If I consume the service directly via a pod's IP, it works as expected. Example on blackpi:
joerg@blackpi:~/build$ time wget 10.1.239.148:8080/greeting -O - -o /dev/null
{"id":19,"content":"Hello, World!"}
real 0m0.026s
user 0m0.000s
sys 0m0.013s
Same on the other two nodes: when I log in on a node, I can consume the service from the pod running on that node just fine.
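What I have not shown above is whether each node can reach the pods on the *other* nodes. A sketch of that test (pod IPs copied from the `kubectl get pods` output above; the short timeout makes an unreachable pod fail fast instead of hanging):

```shell
# From any one node, probe every pod IP directly.
# If the two remote pods time out while the local one answers,
# the problem would be inter-node pod networking, not the Service itself.
for ip in 10.1.224.110 10.1.175.187 10.1.239.148; do
  echo "probing $ip"
  wget -T 2 -t 1 -qO- "http://$ip:8080/greeting" || echo "unreachable: $ip"
done
```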