Hello,
I have problems with my newly deployed bare-metal MicroK8s 1.22 cluster with two nodes: the pods can't reach any domain.
For troubleshooting I deployed the dnsutils pod, which as far as I can tell is the standard manifest from the Kubernetes DNS-debugging docs:
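# assuming the stock dnsutils manifest from the Kubernetes docs was used
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml

Then I tried an nslookup: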
kubectl exec -i -t dnsutils -- nslookup google.com
;; connection timed out; no servers could be reached
command terminated with exit code 1
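Pointing nslookup directly at the CoreDNS pod IP (10.1.133.174, see the pod listing at the end) should tell a DNS problem apart from a plain pod-to-pod connectivity problem, since CoreDNS runs as a single replica on node01:

# 10.1.133.174 is the coredns pod IP from the listing below
kubectl exec -i -t dnsutils -- nslookup kubernetes.default 10.1.133.174
kubectl exec -i -t dnsutils -- cat /etc/resolv.conf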
I've searched the web for several hours, but I don't know what I can do to fix the problem.
Here are some outputs that may be relevant:
kubectl get no
NAME STATUS ROLES AGE VERSION
node01 Ready <none> 25h v1.22.6-3+7ab10db7034594
node02 Ready <none> 25h v1.22.6-3+7ab10db7034594
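Both nodes report Ready. For completeness, the remaining system pods (including the CNI pods) can be listed with:

kubectl get pods -n kube-system -o wide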
microk8s status
microk8s is running
high-availability: no
datastore master nodes: 192.168.111.238:19001
datastore standby nodes: none
addons:
enabled:
dashboard # The Kubernetes dashboard
dns # CoreDNS
ha-cluster # Configure high availability on the current node
helm3 # Helm 3 - Kubernetes package manager
ingress # Ingress controller for external access
metallb # Loadbalancer for your Kubernetes cluster
metrics-server # K8s Metrics Server for API access to service metrics
rbac # Role-Based Access Control for authorisation
registry # Private image registry exposed on localhost:32000
storage # Storage class; allocates storage from host directory
disabled:
ambassador # Ambassador API Gateway and Ingress
cilium # SDN, fast with full network policy
fluentd # Elasticsearch-Fluentd-Kibana logging and monitoring
gpu # Automatic enablement of Nvidia CUDA
helm # Helm 2 - the package manager for Kubernetes
host-access # Allow Pods connecting to Host services smoothly
istio # Core Istio service mesh services
jaeger # Kubernetes Jaeger operator with its simple config
kata # Kata Containers is a secure runtime with lightweight VMS
keda # Kubernetes-based Event Driven Autoscaling
knative # The Knative framework on Kubernetes.
kubeflow # Kubeflow for easy ML deployments
linkerd # Linkerd is a service mesh for Kubernetes and other frameworks
multus # Multus CNI enables attaching multiple network interfaces to pods
openebs # OpenEBS is the open-source storage solution for Kubernetes
openfaas # openfaas serverless framework
portainer # Portainer UI for your Kubernetes cluster
prometheus # Prometheus operator for monitoring and logging
traefik # traefik Ingress controller for external access
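Since the dns addon is enabled, the kube-dns Service and its endpoints should exist in kube-system and can be checked with:

kubectl get svc kube-dns -n kube-system
kubectl get endpoints kube-dns -n kube-system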
The CoreDNS ConfigMap (from kubectl -n kube-system edit configmap coredns):
apiVersion: v1
data:
Corefile: ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n
\ log . {\n class error\n }\n kubernetes cluster.local in-addr.arpa
ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n }\n
\ prometheus :9153\n forward . 192.168.111.1 192.168.111.10 5.1.66.255 \n
\ cache 30\n loop\n reload\n loadbalance\n}\n"
kind: ConfigMap
metadata:
  creationTimestamp: "2022-02-15T15:16:44Z"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    k8s-app: kube-dns
  name: coredns
  namespace: kube-system
  resourceVersion: "219072"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: 9e621410-3260-461b-9508-0efacbb0fd88
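The forward line hands all non-cluster queries to 192.168.111.1, 192.168.111.10 and 5.1.66.255. Whether those upstreams answer at all can be tested from the nodes directly, e.g.:

nslookup google.com 192.168.111.1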
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-7f9c69c78c-nmww2 1/1 Running 1 (52m ago) 102m 10.1.133.174 node01 <none> <none>
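The errors and log plugins are enabled in the Corefile, so failed queries and upstream timeouts should show up in the CoreDNS logs:

kubectl logs -n kube-system -l k8s-app=kube-dns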