I’m new to Kubernetes and I’m having some trouble setting up a local cluster on virtual machines and making it accessible from (at least) my host machine.
setup.
- I have a local cluster of three Debian virtual machines: 1 master and 2 workers. I created a virtual interface with CIDR `10.0.0.0/16` and set the DHCP domain to `abcd.com`.
- When creating the cluster I set `clusterDNS: abcd.com`, `podCIDR: 69.96.0.0/16`, and `serviceCIDR: 69.97.0.0/16`. I am using the Cilium CNI. My CoreDNS Corefile:
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes abcd.com in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf {
max_concurrent 1000
}
cache 30
loop
reload
loadbalance
}
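For reference, the cluster-level settings above correspond to a kubeadm config roughly like the following (a paraphrased sketch, not my exact `cluster-config.yaml`; note that in a kubeadm `ClusterConfiguration` the cluster DNS domain is `networking.dnsDomain` and the CIDRs are `podSubnet`/`serviceSubnet`):

```yaml
# Sketch of cluster-config.yaml (paraphrased, not the exact file).
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.29.3
networking:
  dnsDomain: abcd.com        # cluster DNS domain
  podSubnet: 69.96.0.0/16    # podCIDR
  serviceSubnet: 69.97.0.0/16 # serviceCIDR
```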
- I deployed an `nginx-pod` in the `default` namespace and created a service `nginx-svc` of type `NodePort` that exposes `nginx-pod` on port `80`. I created an Ingress rule that forwards `abcd.com` to `nginx-svc`.
- I then installed `ingress-nginx-controller`.
logs.
- ingress:
ayush@ip-10-0-191-144:~$ kubectl describe ingress/example
Name: example
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
abcd.com
/ nginx-svc:80 (69.96.1.111:80)
Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 30m (x5 over 11h) nginx-ingress-controller Scheduled for sync
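For completeness, the Ingress in the describe output above was created from a manifest roughly like this (a reconstructed sketch, not the exact file):

```yaml
# Sketch of the Ingress, reconstructed from the describe output above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: abcd.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-svc
            port:
              number: 80
```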
- services:
ayush@ip-10-0-191-144:~$ kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 69.97.0.1 <none> 443/TCP 2d14h
default nginx-svc NodePort 69.97.216.124 <none> 80:30744/TCP 26h
ingress-nginx ingress-nginx-controller LoadBalancer 69.97.88.241 <pending> 80:31565/TCP,443:32581/TCP 2d2h
ingress-nginx ingress-nginx-controller-admission ClusterIP 69.97.200.46 <none> 443/TCP 2d2h
kube-system cilium-agent ClusterIP None <none> 9964/TCP 2d14h
kube-system hubble-metrics ClusterIP None <none> 9965/TCP 2d14h
kube-system hubble-peer ClusterIP 69.97.190.83 <none> 443/TCP 2d14h
kube-system hubble-relay ClusterIP 69.97.188.92 <none> 80/TCP 2d14h
kube-system hubble-ui ClusterIP 69.97.9.207 <none> 80/TCP 2d14h
kube-system kube-dns ClusterIP 69.97.0.10 <none> 53/UDP,53/TCP,9153/TCP 2d14h
- pods:
ayush@ip-10-0-191-144:~$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default dnsutils 1/1 Running 1 (11h ago) 11h 69.96.2.47 ip-10-0-94-54 <none> <none>
default nginx-pod 1/1 Running 4 (11h ago) 26h 69.96.1.111 ip-10-0-38-41 <none> <none>
ingress-nginx ingress-nginx-controller-55474d95c5-f4kfx 1/1 Running 4 (11h ago) 2d3h 69.96.1.80 ip-10-0-38-41 <none> <none>
kube-system cilium-428bt 1/1 Running 7 (11h ago) 2d14h 10.0.38.41 ip-10-0-38-41 <none> <none>
kube-system cilium-8hpbb 1/1 Running 4 (11h ago) 2d14h 10.0.191.144 ip-10-0-191-144 <none> <none>
kube-system cilium-cxgnc 1/1 Running 6 (11h ago) 2d14h 10.0.94.54 ip-10-0-94-54 <none> <none>
kube-system cilium-operator-6cdc4568cb-gg58q 1/1 Running 39 (11h ago) 2d14h 10.0.38.41 ip-10-0-38-41 <none> <none>
kube-system cilium-operator-6cdc4568cb-k7nsd 1/1 Running 4 (11h ago) 2d14h 10.0.191.144 ip-10-0-191-144 <none> <none>
kube-system coredns-76f75df574-45f7b 1/1 Running 9 (11h ago) 2d14h 69.96.0.43 ip-10-0-191-144 <none> <none>
kube-system coredns-76f75df574-77m4h 1/1 Running 9 (11h ago) 2d14h 69.96.0.221 ip-10-0-191-144 <none> <none>
kube-system etcd-ip-10-0-191-144 1/1 Running 11 (11h ago) 2d14h 10.0.191.144 ip-10-0-191-144 <none> <none>
kube-system hubble-relay-d8b6b55c9-st57x 1/1 Running 9 (11h ago) 2d11h 69.96.2.197 ip-10-0-94-54 <none> <none>
kube-system hubble-ui-6548d56557-b6f49 2/2 Running 9 (11h ago) 2d11h 69.96.2.93 ip-10-0-94-54 <none> <none>
kube-system kube-apiserver-ip-10-0-191-144 1/1 Running 11 (11h ago) 2d14h 10.0.191.144 ip-10-0-191-144 <none> <none>
kube-system kube-controller-manager-ip-10-0-191-144 1/1 Running 11 (11h ago) 2d14h 10.0.191.144 ip-10-0-191-144 <none> <none>
kube-system kube-proxy-7gc2j 1/1 Running 4 (11h ago) 2d14h 10.0.94.54 ip-10-0-94-54 <none> <none>
kube-system kube-proxy-xt8nc 1/1 Running 4 (11h ago) 2d14h 10.0.191.144 ip-10-0-191-144 <none> <none>
kube-system kube-proxy-xzjjj 1/1 Running 4 (11h ago) 2d14h 10.0.38.41 ip-10-0-38-41 <none> <none>
kube-system kube-scheduler-ip-10-0-191-144 1/1 Running 11 (11h ago) 2d14h 10.0.191.144 ip-10-0-191-144 <none> <none>
questions.
- Even though `ingress-nginx-controller` shows `Running`, it doesn’t get an external IP and is stuck on `<pending>` (you can see this in the service output above). I’m not sure why!
- From any node’s IP, I’m able to reach `nginx-svc` with `curl -H 'Host: abcd.com' 10.0.191.144:31565`, but how can I make it so that I can simply reach `nginx-svc` via any node’s external IP? (i.e. `10.0.191.144` [my master node’s IP] should route me to `nginx-svc` or `nginx-pod`.)
- Can I switch the `ingress-nginx-controller` port `31565` to `80` & `443` (or some other port) so that I can use a node’s external IP and still reach the service defined in my Ingress rule?
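To illustrate what I mean by switching the port: something like pinning the service’s `nodePort` fields, roughly as below (a hypothetical sketch; as far as I understand, ports below 30000 would also require extending the kube-apiserver’s `--service-node-port-range` to include 80/443):

```yaml
# Hypothetical edit to the ingress-nginx-controller Service:
# pin the NodePorts to 80/443 instead of the random 31565/32581.
# (Would need e.g. --service-node-port-range=80-32767 on the apiserver.)
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    nodePort: 80
    targetPort: http
  - name: https
    port: 443
    nodePort: 443
    targetPort: https
```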
- Is it also possible to reach `nginx-pod` externally by the domain name `abcd.com`, or better yet a different one such as `pqrs.com`? Given that I own `pqrs.com` and it has an A record pointing to the external load-balancer IP (or one of the nodes’ IPs) for `ingress-nginx-controller`?
My final goal is to set an A record on multiple domains I own pointing to an external load balancer that balances across the IPs of all nodes (or just those of the `ingress-nginx-controller` service), and then have `ingress-nginx-controller` route requests to the appropriate service through the Ingress rules. Is that possible? I’ll hopefully deploy this on AWS (I’d like to avoid their load-balancer service in this setup if possible and instead use a dedicated machine for it).
- Can I make the DHCP domain and the cluster DNS domain separate? For example, in my interface settings I’d set `abcd.com`, and when creating the cluster I’d set `pqrs.com`.
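To make that question concrete, I mean something like this (a sketch; the libvirt network XML sets the DHCP/DNS domain handed to the VMs, which as far as I can tell is independent of kubeadm’s `networking.dnsDomain`):

```xml
<!-- libvirt network definition (virt-manager): sets the VM-level
     DHCP domain to abcd.com on the 10.0.0.0/16 virtual interface -->
<network>
  <name>k8snet</name>
  <domain name='abcd.com'/>
  <ip address='10.0.0.1' netmask='255.255.0.0'>
    <dhcp>
      <range start='10.0.0.2' end='10.0.255.254'/>
    </dhcp>
  </ip>
</network>
```

…while in the kubeadm config I would set `networking.dnsDomain: pqrs.com` for the cluster.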
Thank you!
Cluster information:
Kubernetes version: v1.29.3
Cloud being used: bare-metal (virtual-machine-manager)
Installation method: kubeadm init --config cluster-config.yaml
Host OS: Linux host 6.1.0-18-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.76-1 (2024-02-01) x86_64 GNU/Linux
CNI and version: cilium v1.15.1
CRI and version: containerd containerd.io 1.6.28 ae07eda36dd25f8a1b98dfbf587313b99c0190bb