Hi,
I would like to install Kubernetes on two nodes: one control-plane node and one worker node.
The control-plane node was bootstrapped with:
kubeadm init --config kubeadm.yaml
where kubeadm.yaml is:
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: kubernetes
kubernetesVersion: 1.25.6
controlPlaneEndpoint: "<controlplane_ip>:6443"
networking:
  podSubnet: "10.244.0.0/24" # Default
  serviceSubnet: "10.96.0.0/16" # Default
  dnsDomain: "cluster.local" # Default
controllerManager:
  extraArgs:
    bind-address: "0.0.0.0" # Required by Prometheus (kube-controller-manager)
etcd:
  local:
    extraArgs:
      listen-metrics-urls: "http://0.0.0.0:2381" # Required by Prometheus (kube-etcd)
scheduler:
  extraArgs:
    bind-address: "0.0.0.0" # Required by Prometheus (kube-scheduler)
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
metricsBindAddress: "0.0.0.0:10249" # Required by Prometheus (kube-proxy)
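After the init completed, the kubeconfig was set up in the usual way suggested by the kubeadm output (reproduced from memory, so roughly):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config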
The worker node then joined successfully using the output from kubeadm token create --print-join-command (its general form is sketched after the node listing below):
$ kubectl get nodes
NAME           STATUS   ROLES           AGE    VERSION
controlplane   Ready    control-plane   105s   v1.25.6
worker0        Ready    <none>          73s    v1.25.6
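For completeness, the join command has this general form (the actual token and CA certificate hash are replaced with placeholders here):

kubeadm join <controlplane_ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>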
I am able to schedule pods on the worker node.
If I test DNS resolution from a pod pinned to the control-plane node, it works as expected:
$ kubectl run test \
--rm \
-i \
--tty \
--restart=Never \
--image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 \
--overrides='{"spec":{"nodeName":"controlplane"}}' \
--command -- nslookup kubernetes.default
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1
However, the same test from a pod pinned to the worker node fails:
$ kubectl run test \
--rm \
-i \
--tty \
--restart=Never \
--image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 \
--overrides='{"spec":{"nodeName":"worker0"}}' \
--command -- nslookup kubernetes.default
;; connection timed out; no servers could be reached
Even after adding the log directive to the coredns ConfigMap in the kube-system namespace (the edit is sketched after the snippet below):
$ kubectl -n kube-system get configmap coredns -o yaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        log
        errors
        health {
           lameduck 5s
        }
    ...
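The log directive was added with a plain in-place edit, roughly:

kubectl -n kube-system edit configmap coredns

CoreDNS picks the change up on its own through its reload plugin, which matches the "[INFO] Reloading" lines in the logs below.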
I can see that only the requests from the control-plane node are logged:
$ kubectl -n kube-system logs -l k8s-app=kube-dns -f
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/reload: Running configuration SHA512 = c0af6acba93e75312d34dc3f6c44bf8573acff497d229202a4a49405ad5d8266c556ca6f83ba0c9e74088593095f714ba5b916d197aa693d6120af8451160b80
[INFO] Reloading complete
[INFO] 127.0.0.1:43648 - 2982 "HINFO IN 472780699182673355.8835750659342931197. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.003627335s
[INFO] 10.85.0.1:58639 - 8948 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000186882s
[INFO] 10.85.0.1:41957 - 4222 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109993s
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/reload: Running configuration SHA512 = c0af6acba93e75312d34dc3f6c44bf8573acff497d229202a4a49405ad5d8266c556ca6f83ba0c9e74088593095f714ba5b916d197aa693d6120af8451160b80
[INFO] Reloading complete
[INFO] 127.0.0.1:44066 - 8640 "HINFO IN 7453388407502491512.6627858125705336067. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.006051863s
I have tried disabling SELinux, disabling firewalld, rebooting, and re-initializing with a plain default kubeadm init (roughly as sketched below), but nothing changed.
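The attempts above were along these lines (exact invocations may have differed slightly):

sudo setenforce 0                        # SELinux permissive for the current boot
sudo systemctl disable --now firewalld   # stop and disable firewalld
sudo reboot
# after the reboot, start over with defaults:
sudo kubeadm reset -f
sudo kubeadm init                        # no --config kubeadm.yaml this time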
Any ideas?
Thanks
Cluster information:
Kubernetes version: 1.25.6
Cloud being used: bare-metal
Installation method: kubeadm
Host OS: Fedora 37
CRI and version: cri-o-1.25