ArgoCD-deployed pod unable to resolve the FQDN of the in-cluster registry

Hello guys.

I’m running into a small problem with MicroK8s, specifically with the ArgoCD add-on.

I’m using MicroK8s on an Ubuntu VM:

microk8s is running
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none
addons:
  enabled:
    argocd               # (community) Argo CD is a declarative continuous deployment for Kubernetes.
    community            # (core) The community addons repository
    dashboard            # (core) The Kubernetes dashboard
    dns                  # (core) CoreDNS
    ha-cluster           # (core) Configure high availability on the current node
    helm                 # (core) Helm - the package manager for Kubernetes
    helm3                # (core) Helm 3 - the package manager for Kubernetes
    host-access          # (core) Allow Pods connecting to Host services smoothly
    hostpath-storage     # (core) Storage class; allocates storage from host directory
    ingress              # (core) Ingress controller for external access
    metrics-server       # (core) K8s Metrics Server for API access to service metrics
    registry             # (core) Private image registry exposed on localhost:32000
    storage              # (core) Alias to hostpath-storage add-on, deprecated

I’ve successfully pushed a Docker image from the host to the built-in registry. Now, when I try to deploy the application with ArgoCD, the pod fails with this error:

Failed to pull image "registry.container-registry.svc.cluster.local:5000/testapp:latest": failed to pull and unpack image "registry.container-registry.svc.cluster.local:5000/testapp:latest": failed to resolve reference "registry.container-registry.svc.cluster.local:5000/testapp:latest": failed to do request: Head "https://registry.container-registry.svc.cluster.local:5000/v2/testapp/manifests/latest": dial tcp: lookup registry.container-registry.svc.cluster.local on 127.0.0.53:53: server misbehaving
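For context, the image reference in the chart’s values is along these lines (the repository and tag are taken from the error above; the exact keys depend on the chart):

```yaml
# Illustrative Helm values — actual key names depend on the chart.
image:
  repository: registry.container-registry.svc.cluster.local:5000/testapp
  tag: latest
```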

Searching online, I read that this could be a DNS issue, so I exec’d into the ArgoCD server pod to check /etc/resolv.conf:

pipodi@UbuntuVM:~$ kubectl exec -it argo-cd-argocd-server-6fdbc84579-nrc9f -n argocd -- cat /etc/resolv.conf
search argocd.svc.cluster.local svc.cluster.local cluster.local localdomain
nameserver 10.152.183.10
options ndots:5

So I searched for that IP:

pipodi@UbuntuVM:~$ kubectl get svc -o wide --all-namespaces | grep 10.152.183.10
kube-system          kube-dns                                   ClusterIP   10.152.183.10    <none>        53/UDP,53/TCP,9153/TCP   2d4h   k8s-app=kube-dns

So the DNS seems to be configured correctly. Moreover, running an nslookup from another pod resolves the name just fine:

pipodi@UbuntuVM:~$ kubectl exec -it dnsutils -- nslookup registry.container-registry.svc.cluster.local
Server:		10.152.183.10
Address:	10.152.183.10#53

Name:	registry.container-registry.svc.cluster.local
Address: 10.152.183.58

And:

pipodi@UbuntuVM:~$ kubectl get svc -o wide --all-namespaces | grep 10.152.183.58
container-registry   registry                                   NodePort    10.152.183.58    <none>        5000:32000/TCP           2d4h   app=registry
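(For reference, the dnsutils pod used above is essentially the standard DNS-debugging pod from the Kubernetes docs; roughly:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
spec:
  containers:
  - name: dnsutils
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
    command: ["sleep", "infinity"]
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
```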

If I replace the FQDN in the Helm chart with the registry’s ClusterIP and port, ArgoCD deploys the application successfully.
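In other words, a values override along these lines works (keys illustrative; 10.152.183.58 is the registry Service’s ClusterIP from above), while the FQDN form fails:

```yaml
# Works: ClusterIP + port
image:
  repository: 10.152.183.58:5000/testapp
  tag: latest

# Fails with the "server misbehaving" DNS error:
# repository: registry.container-registry.svc.cluster.local:5000/testapp
```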

Can you guys point me in the right direction here?