What happened?
Recently, I enabled a DNS cache on the host itself, which updated /etc/resolv.conf as below:
```
search aaa.bbb.com ccc.com
nameserver 127.0.0.1
nameserver A.A.A.A
nameserver B.B.B.B
options edns0 timeout:3
```
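For context: CoreDNS typically runs with `dnsPolicy: Default`, so kubelet copies the host's /etc/resolv.conf into the pod, and the default Corefile then forwards non-cluster queries to whatever nameservers that file lists. A typical default Corefile looks roughly like this (the exact plugin set may differ per distribution):

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
    loop
}
```

With the host file above, `forward . /etc/resolv.conf` picks up `nameserver 127.0.0.1` as the first upstream, which points back into the CoreDNS pod itself.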
Once the CoreDNS pod is running, it copies the resolver records from the host, and `nameserver 127.0.0.1` is copied as the first nameserver, which triggers the issue: pods that query DNS through CoreDNS get stuck and never receive the correct IP for a DNS name. I tested on a pure Docker environment as well; there, Docker automatically filters out invalid settings in /etc/resolv.conf instead of copying everything verbatim, ignoring entries such as `nameserver 127.0.0.1` or `nameserver 0.0.0.0`.
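To illustrate the expected filtering, here is a minimal Go sketch of the kind of check Docker applies. It is not Docker's actual code: the `filterResolvConf` helper name is made up for the example, and the 192.0.2.x documentation addresses stand in for the redacted A.A.A.A / B.B.B.B upstreams:

```go
package main

import (
	"bufio"
	"fmt"
	"net"
	"strings"
)

// filterResolvConf is a hypothetical helper mimicking Docker's behavior:
// it drops nameserver entries that point at loopback or unspecified
// addresses (127.0.0.1, ::1, 0.0.0.0), which are unreachable from inside
// a container's network namespace, and keeps everything else verbatim.
func filterResolvConf(content string) string {
	var kept []string
	scanner := bufio.NewScanner(strings.NewReader(content))
	for scanner.Scan() {
		line := scanner.Text()
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			ip := net.ParseIP(fields[1])
			if ip == nil || ip.IsLoopback() || ip.IsUnspecified() {
				continue // skip unusable nameserver entries
			}
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n")
}

func main() {
	// 192.0.2.x (TEST-NET-1) stands in for the redacted A.A.A.A / B.B.B.B.
	host := `search aaa.bbb.com ccc.com
nameserver 127.0.0.1
nameserver 192.0.2.10
nameserver 192.0.2.11
options edns0 timeout:3`
	// Prints the file with the 127.0.0.1 entry removed and all else intact.
	fmt.Println(filterResolvConf(host))
}
```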
What did you expect to happen?
CoreDNS (or kubelet) should read the host's /etc/resolv.conf and filter out invalid settings, following the same mechanism as Docker (as in the sketch above).
How can we reproduce it (as minimally and precisely as possible)?
Put the /etc/resolv.conf shown above on the host (so it gets copied into the pod) and redeploy CoreDNS; see the steps sketched below.
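Concretely, on a standard cluster where CoreDNS runs as the `coredns` deployment in `kube-system` (names may differ per distribution), the steps look roughly like:

```
# 1. Place the resolv.conf shown above (nameserver 127.0.0.1 first) on the node.
# 2. Restart CoreDNS so its pods re-read the host file:
kubectl -n kube-system rollout restart deployment coredns
# 3. From any other pod, DNS queries now hang instead of resolving.
```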
Anything else we need to know?
No response
Kubernetes version
kubernetes version: v1.24.10