Containers on slave nodes can't communicate with apiserver


Cluster information:

Kubernetes version: 1.14.1
Cloud being used: bare-metal
Installation method: Manual
Host OS: CentOS 7
CNI and version: quay.io/coreos/flannel:v0.11.0-amd64

I’m building a bare-metal K8s installation, but containers on the slave nodes can’t communicate with the API server.

From dashboard log:
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service account's configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://#.#.#.#:443/version: dial tcp #.#.#.#:443: i/o timeout

I see this error a lot when searching for the text on the web. Note that when I curl https://#.#.#.#:443 from the command line of that same node, I get the expected version information.
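For reference, the check from the node looks roughly like this (a sketch; the IP is the masked apiserver address from the log, and -k skips certificate verification):

```shell
# Verify the apiserver answers from the node itself.
# Replace #.#.#.# with the apiserver address; -k skips TLS verification.
curl -k https://#.#.#.#:443/version
```

This succeeding from the node while the dashboard pod times out suggests the break is in the pod network path rather than the apiserver itself.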

From coredns log:
Failed to list *v1.Service: Get https://172.20.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.20.0.1:443: i/o timeout

A curl of that same URL from the node's command line returns:
"message": "services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope",
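That 403 is actually what an unauthenticated curl should get: without credentials the apiserver treats the request as system:anonymous, so the forbidden response still demonstrates TCP connectivity from the node. A hedged sketch of the same check with a service-account token (the token path is the standard in-pod mount, so this variant would be run inside a pod, not on the node):

```shell
# Repeat the coredns request, but authenticated with the pod's
# service-account token instead of as system:anonymous.
# Token path is the default in-pod mount; adjust if yours differs.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -k -H "Authorization: Bearer ${TOKEN}" \
  "https://172.20.0.1:443/api/v1/services?limit=500"
```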

Note that when I force the dashboard to run on the master node, it behaves correctly. Can someone suggest how I can troubleshoot this?
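One way to reproduce the failure in isolation is a throwaway pod pinned to a slave node, probing the service IP from inside the pod network (a sketch; the image tag and node name are placeholders):

```shell
# Run a test pod on a specific slave node and attempt a TCP connect
# to the apiserver's cluster service IP from inside the pod network.
# <slave-node> is a placeholder for the actual node name.
kubectl run nettest --image=busybox:1.31 --restart=Never \
  --overrides='{"spec":{"nodeName":"<slave-node>"}}' \
  -- nc -w 5 172.20.0.1 443
kubectl logs nettest   # a connect timeout here points at the overlay/kube-proxy path
```

If the node-level curl works but this in-pod probe times out, that would narrow the problem to the flannel overlay or the kube-proxy rules on the slave nodes rather than the apiserver or its certificates.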