Pod cannot reach any domain

Hello,

I have problems with my newly deployed bare-metal MicroK8s 1.22 cluster with 2 nodes. The pods can't reach any domain.

For troubleshooting I deployed the dnsutils pod and tried an nslookup:

kubectl exec -i -t dnsutils -- nslookup google.com 
;; connection timed out; no servers could be reached
 
command terminated with exit code 1
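
In case it is useful for anyone retracing this, the following commands can be used to check whether the cluster DNS service exists and has endpoints, and which nameserver the pod is actually using (kube-dns in kube-system is the service name the MicroK8s dns addon uses by default):

kubectl get svc -n kube-system kube-dns
kubectl get endpoints -n kube-system kube-dns
kubectl exec -i -t dnsutils -- cat /etc/resolv.conf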

I've searched the web for several hours, but I don't know what I can do to fix the problem…

Here are some outputs that might be relevant:

kubectl get no
NAME     STATUS   ROLES    AGE   VERSION
node01   Ready    <none>   25h   v1.22.6-3+7ab10db7034594
node02   Ready    <none>   25h   v1.22.6-3+7ab10db7034594
microk8s status
microk8s is running
high-availability: no
  datastore master nodes: 192.168.111.238:19001
  datastore standby nodes: none
addons:
  enabled:
    dashboard            # The Kubernetes dashboard
    dns                  # CoreDNS
    ha-cluster           # Configure high availability on the current node
    helm3                # Helm 3 - Kubernetes package manager
    ingress              # Ingress controller for external access
    metallb              # Loadbalancer for your Kubernetes cluster
    metrics-server       # K8s Metrics Server for API access to service metrics
    rbac                 # Role-Based Access Control for authorisation
    registry             # Private image registry exposed on localhost:32000
    storage              # Storage class; allocates storage from host directory
  disabled:
    ambassador           # Ambassador API Gateway and Ingress
    cilium               # SDN, fast with full network policy
    fluentd              # Elasticsearch-Fluentd-Kibana logging and monitoring
    gpu                  # Automatic enablement of Nvidia CUDA
    helm                 # Helm 2 - the package manager for Kubernetes
    host-access          # Allow Pods connecting to Host services smoothly
    istio                # Core Istio service mesh services
    jaeger               # Kubernetes Jaeger operator with its simple config
    kata                 # Kata Containers is a secure runtime with lightweight VMS
    keda                 # Kubernetes-based Event Driven Autoscaling
    knative              # The Knative framework on Kubernetes.
    kubeflow             # Kubeflow for easy ML deployments
    linkerd              # Linkerd is a service mesh for Kubernetes and other frameworks
    multus               # Multus CNI enables attaching multiple network interfaces to pods
    openebs              # OpenEBS is the open-source storage solution for Kubernetes
    openfaas             # openfaas serverless framework
    portainer            # Portainer UI for your Kubernetes cluster
    prometheus           # Prometheus operator for monitoring and logging
    traefik              # traefik Ingress controller for external access
The coredns ConfigMap in kube-system (as shown via kubectl edit):
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        log . {
          class error
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . 192.168.111.1 192.168.111.10 5.1.66.255
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"Corefile":".:53 {\n    errors\n    health {\n      lameduck 5s\n    }\n    ready\n    log . {\n      class error\n    }\n    kubernetes cluster.local in-addr.arpa ip6.arpa {\n      pods insecure\n      fallthrough in-addr.arpa ip6.arpa\n    }\n    prometheus :9153\n    forward . 192.168.111.1 192.168.111.10 5.1.66.255 \n    cache 30\n    loop\n    reload\n    loadbalance\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists","k8s-app":"kube-dns"},"name":"coredns","namespace":"kube-system"}}
  creationTimestamp: "2022-02-15T15:16:44Z"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    k8s-app: kube-dns
  name: coredns
  namespace: kube-system
  resourceVersion: "219072"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: 9e621410-3260-461b-9508-0efacbb0fd88
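
The forward line in the Corefile above points at 192.168.111.1, 192.168.111.10 and 5.1.66.255 as upstream resolvers. To rule the upstreams out, they can be queried directly from the host (outside any pod), for example:

nslookup google.com 192.168.111.1
nslookup google.com 192.168.111.10
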
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide
NAME                       READY   STATUS    RESTARTS      AGE    IP             NODE     NOMINATED NODE   READINESS GATES
coredns-7f9c69c78c-nmww2   1/1     Running   1 (52m ago)   102m   10.1.133.174   node01   <none>           <none>
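
The CoreDNS logs can also show whether the pod itself reports errors (e.g. i/o timeouts towards the upstreams); the k8s-app=kube-dns label is the same one used in the command above:

kubectl logs -n kube-system -l k8s-app=kube-dns --tail=50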

Scaling CoreDNS to 2 replicas solved the problem:

kubectl scale --current-replicas=1 --replicas=2 deployment/coredns -n kube-system
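
After scaling, it is worth confirming that the second replica actually landed on the other node (without anti-affinity the scheduler may place both replicas on node01):

kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide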

But why does this not work automatically?

Have I configured something wrong?

I have a 3-node system that’s working fine, and coredns there has only 1 replica.

I don’t have any ideas about what could be wrong on yours though.

@adlerspj Thanks for your reply.

I set up MicroK8s completely from scratch, but I still have the same issue.

~$ kubectl exec -i -t dnsutils -- nslookup google.com
Server:         10.152.183.10
Address:        10.152.183.10#53

Non-authoritative answer:
Name:   google.com
Address: 216.58.212.142

~$ kubectl exec -i -t dnsutils2 -- nslookup google.com
;; connection timed out; no servers could be reached

command terminated with exit code 1

~$ kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide
NAME                       READY   STATUS    RESTARTS      AGE     IP             NODE     NOMINATED NODE   READINESS GATES
coredns-7f9c69c78c-zkzw2   1/1     Running   1 (90m ago)   4d20h   10.1.133.138   node01   <none>           <none>

~$ kubectl get all -o wide | grep dns
pod/dnsutils    1/1     Running   0             5m59s   10.1.133.141   node01   <none>           <none>
pod/dnsutils2   1/1     Running   0             4m51s   10.1.222.77    node02   <none>           <none>
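
Since dnsutils on node01 resolves fine while dnsutils2 on node02 cannot reach any server, and the only CoreDNS replica runs on node01, one check that can narrow this down is querying that CoreDNS pod by its pod IP directly from dnsutils2 (IP taken from the coredns output above). If this also times out, the problem is pod-to-pod traffic between the two nodes rather than CoreDNS itself:

kubectl exec -i -t dnsutils2 -- nslookup google.com 10.1.133.138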

Please help

One of my nodes is very sluggish, and not all the addons I want are enabled on it.

Could this be the reason for my problem? And why aren't the same addons activated on all nodes, even though they were joined into one cluster via add-node?

node01:

~$ microk8s status
microk8s is running
high-availability: no
  datastore master nodes: 192.168.1.238:19001
  datastore standby nodes: none
addons:
  enabled:
    dashboard            # The Kubernetes dashboard
    dns                  # CoreDNS
    ha-cluster           # Configure high availability on the current node
    helm3                # Helm 3 - Kubernetes package manager
    host-access          # Allow Pods connecting to Host services smoothly
    ingress              # Ingress controller for external access
    metallb              # Loadbalancer for your Kubernetes cluster
    metrics-server       # K8s Metrics Server for API access to service metrics
    rbac                 # Role-Based Access Control for authorisation
    registry             # Private image registry exposed on localhost:32000
    storage              # Storage class; allocates storage from host directory
  disabled:
    ambassador           # Ambassador API Gateway and Ingress
    cilium               # SDN, fast with full network policy
    fluentd              # Elasticsearch-Fluentd-Kibana logging and monitoring
    gpu                  # Automatic enablement of Nvidia CUDA
    helm                 # Helm 2 - the package manager for Kubernetes
    istio                # Core Istio service mesh services
    jaeger               # Kubernetes Jaeger operator with its simple config
    kata                 # Kata Containers is a secure runtime with lightweight VMS
    keda                 # Kubernetes-based Event Driven Autoscaling
    knative              # The Knative framework on Kubernetes.
    kubeflow             # Kubeflow for easy ML deployments
    linkerd              # Linkerd is a service mesh for Kubernetes and other frameworks
    multus               # Multus CNI enables attaching multiple network interfaces to pods
    openebs              # OpenEBS is the open-source storage solution for Kubernetes
    openfaas             # openfaas serverless framework
    portainer            # Portainer UI for your Kubernetes cluster
    prometheus           # Prometheus operator for monitoring and logging
    traefik              # traefik Ingress controller for external access

node02:

~$ microk8s status
microk8s is running
high-availability: no
  datastore master nodes: 192.168.1.238:19001
  datastore standby nodes: none
addons:
  enabled:
    dashboard            # The Kubernetes dashboard
    dns                  # CoreDNS
    ha-cluster           # Configure high availability on the current node
    ingress              # Ingress controller for external access
    metallb              # Loadbalancer for your Kubernetes cluster
    metrics-server       # K8s Metrics Server for API access to service metrics
    rbac                 # Role-Based Access Control for authorisation
    registry             # Private image registry exposed on localhost:32000
    storage              # Storage class; allocates storage from host directory
  disabled:
    ambassador           # Ambassador API Gateway and Ingress
    cilium               # SDN, fast with full network policy
    fluentd              # Elasticsearch-Fluentd-Kibana logging and monitoring
    gpu                  # Automatic enablement of Nvidia CUDA
    helm                 # Helm 2 - the package manager for Kubernetes
    helm3                # Helm 3 - Kubernetes package manager
    host-access          # Allow Pods connecting to Host services smoothly
    istio                # Core Istio service mesh services
    jaeger               # Kubernetes Jaeger operator with its simple config
    kata                 # Kata Containers is a secure runtime with lightweight VMS
    keda                 # Kubernetes-based Event Driven Autoscaling
    knative              # The Knative framework on Kubernetes.
    kubeflow             # Kubeflow for easy ML deployments
    linkerd              # Linkerd is a service mesh for Kubernetes and other frameworks
    multus               # Multus CNI enables attaching multiple network interfaces to pods
    openebs              # OpenEBS is the open-source storage solution for Kubernetes
    openfaas             # openfaas serverless framework
    portainer            # Portainer UI for your Kubernetes cluster
    prometheus           # Prometheus operator for monitoring and logging
    traefik              # traefik Ingress controller for external access
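
If the addon state really does differ between the nodes, the two addons missing on node02 compared to node01 (helm3 and host-access) could be enabled explicitly, for example from node02:

microk8s enable helm3
microk8s enable host-access

Whether that is related to the DNS problem is a separate question, though.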