Temporary failure in name resolution

Cluster information:

Kubernetes version: 1.19.2
Cloud being used: private
Installation method: kubeadm init
Host OS: ubuntu 18.04
CNI and version: Calico: 0.3.1
CRI and version: Docker: 19.03.13

Hello everyone,

I am currently trying to set up a k8s cluster, but I have run into trouble. My master node is connected to the internet and publicly accessible. The other worker nodes, currently 3 of them, are connected through a separate, private network interface.

Everything seems to work fine, except that my worker nodes remain in the "NotReady" state.
Checking the kube-proxy pod on the added worker node with the following command

kubectl describe pod kube-proxy-828vb -n kube-system

leads to the following errors:

Events:
  Type     Reason                  Age                   From               Message
  ----     ------                  ----                  ----               -------
  Normal   Scheduled               28m                   default-scheduler  Successfully assigned kube-system/kube-proxy-828vb to k8snode1
  Warning  FailedCreatePodSandBox  6m56s (x80 over 31m)  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.2": Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io: Temporary failure in name resolution
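
One way to confirm that this is a plain DNS failure on the worker node itself, rather than a registry problem, is to try the lookup and the pull manually on the node (a quick check, assuming nslookup and the docker CLI are available there):

# run directly on the worker node (k8snode1)
nslookup k8s.gcr.io                # fails with the same name resolution error
docker pull k8s.gcr.io/pause:3.2   # fails the same way the kubelet does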

I am pretty new to Kubernetes. Any help would be appreciated.
Best Regards

Okay, I was able to solve the problem on my own. I set up a forward proxy with Squid.
After installing Squid, I bound the proxy server to the private interface so that it is not visible to the public, listening on the default port, by adjusting the following file:

vim /etc/squid/squid.conf

with the following line

http_port 10.0.0.1:3128
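
Depending on the distribution defaults, Squid may also refuse requests from the worker subnet until it is explicitly allowed. A minimal sketch of the relevant squid.conf lines, assuming the private network is 10.0.0.0/24 (adjust to your own subnet):

# /etc/squid/squid.conf
http_port 10.0.0.1:3128
acl k8s_nodes src 10.0.0.0/24   # private subnet of the worker nodes (assumption)
http_access allow k8s_nodes
http_access deny all

Then restart Squid with sudo systemctl restart squid.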

Afterwards, I configured Docker on the local worker nodes as follows:

vim /etc/systemd/system/docker.service.d/http-proxy.conf

The file then looks like this:

[Service]
Environment="HTTP_PROXY=http://10.0.0.1:3128"
Environment="HTTPS_PROXY=http://10.0.0.1:3128"
Environment="NO_PROXY=localhost,127.0.0.1,::1"

sudo systemctl daemon-reload
sudo systemctl restart docker
systemctl show --property=Environment docker
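
To double-check from a worker node that the proxy is reachable and that Docker actually goes through it, something like this should work (curl is assumed to be installed; the pause image is the one from the error above):

curl -x http://10.0.0.1:3128 -I https://k8s.gcr.io/v2/   # any HTTP response (even 401) means DNS and the tunnel work
docker pull k8s.gcr.io/pause:3.2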

Note that HTTPS_PROXY is set to an http:// URL; otherwise Docker is not able to pull the images. (As far as I can tell, the scheme only describes how Docker connects to the proxy itself; the HTTPS registry traffic is still tunneled through it.)

With this in place, all necessary images can be pulled by the local worker nodes. Another solution is to save the images required by k8s and distribute them across all local workers.
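
A minimal sketch of that alternative, assuming the master node still has internet access and can reach the workers over SSH (the file name and the node name k8snode1 are just examples):

# on the master node (has internet access)
docker pull k8s.gcr.io/pause:3.2
docker pull k8s.gcr.io/kube-proxy:v1.19.2
docker save k8s.gcr.io/pause:3.2 k8s.gcr.io/kube-proxy:v1.19.2 -o k8s-images.tar
scp k8s-images.tar k8snode1:/tmp/

# on each worker node
docker load -i /tmp/k8s-images.tar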

Best Regards