Joined Windows Worker Node, Internal Networking Not Working

I have an on-premises Linux cluster (nodes running Ubuntu 18.04) that is working well; all nodes are hosted on a vSphere server (ESXi 5.5.0 Update 2). I have exposed services via NodePort as well as through Ingress rules with the NGINX Ingress Controller. The cluster is fronted by HAProxy running on a separate machine outside the cluster. All nodes are running version 1.17.0 of the Kubernetes binaries.
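For context, HAProxy just fans requests out to the NodePorts on the nodes; the snippet below is illustrative only (the hostnames, IPs and the 30080 port are placeholders, not my exact config):

```
frontend http-in
    bind *:80
    default_backend k8s-nodeport

backend k8s-nodeport
    balance roundrobin
    # One server entry per cluster node; 30080 stands in for the real NodePort
    server node1 10.0.0.11:30080 check
    server node2 10.0.0.12:30080 check
```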

I have set up a Windows Server 2019 Datacenter (version 1809, build 17763.914) instance on the same vSphere server and have completed the process of installing Kubernetes and joining it to the cluster. It’s also running version 1.17.0 of the Kubernetes binaries.
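The join itself followed the documented kubeadm flow for Windows; roughly this (I’m reconstructing from memory, and the API server address, token and hash below are placeholders):

```powershell
# On the Windows worker, in an elevated PowerShell session.
# PrepareNode.ps1 is the helper script from the kubernetes-sigs/sig-windows-tools
# repo; it installs the kubelet/kube-proxy binaries and registers the kubelet service.
.\PrepareNode.ps1 -KubernetesVersion v1.17.0

# Then join the existing cluster (address, token and hash are placeholders)
kubeadm join 10.0.0.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```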

Things looked good at first, but I can’t connect to any NodePort services on this Windows worker. I deployed an IIS image and the pod appears to be running, but I can’t reach it from the Windows worker’s external IP (from another machine) or through the exposed NodePort on any of the other nodes in the cluster. I also can’t ping the internal IP address of pods on the Windows worker from other nodes in the cluster. Here is some info on the Windows node and pod.
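(I’m omitting the raw output for length, but this is what I ran to gather it; `<windows-node>` is a placeholder for the actual node name:)

```
kubectl get nodes -o wide
kubectl get pods -o wide
kubectl describe node <windows-node>
kubectl get svc
```

For reference, the deployment and its NodePort service are essentially the standard Windows IIS sample; roughly this (names and image tag are from memory, so treat them as approximate):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iis-site
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iis-site
  template:
    metadata:
      labels:
        app: iis-site
    spec:
      # Pin the pod to the Windows worker
      nodeSelector:
        kubernetes.io/os: windows
      containers:
      - name: iis
        image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: iis-site
spec:
  type: NodePort
  selector:
    app: iis-site
  ports:
  - port: 80
    targetPort: 80
```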

Looking through the log files, everything looks fine to me; I don’t see any errors. All three processes are running on the Windows worker (flanneld.exe, kubelet.exe and kube-proxy.exe). I have a busybox pod running on a Linux node, and I’ve been using it to test network connectivity inside the cluster. The node itself is reported as “Ready” and its networking information looks good.
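Concretely, these are the kinds of checks I’ve been running from the busybox pod (the 10.244.3.5 address is a placeholder for the IIS pod’s actual internal IP):

```
# Ping the IIS pod's internal IP from the busybox pod
kubectl exec -it busybox -- ping -c 3 10.244.3.5

# Try HTTP directly against the pod IP (busybox wget)
kubectl exec -it busybox -- wget -qO- http://10.244.3.5
```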

Lastly, I’ve worked through the troubleshooting hints in the documentation and I’m not sure what to try next.
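One of the checks from that guide, in case it helps anyone spot something: verifying that the flannel HNS network actually exists on the worker (run in PowerShell on the Windows node):

```powershell
# List HNS networks; with flannel in vxlan mode there should be a "vxlan0"
# network (or "cbr0" for host-gw mode) in addition to the default "nat" network
Get-HnsNetwork | Select-Object Name, Type
```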

If anyone has any tips or hints, I would really appreciate it!