I installed Kubernetes on virtual machine servers (to test it before installing on bare metal) and created a cluster with 1 master and 3 worker nodes. I installed Calico with its default configuration as the network plugin.
After reading the Kubernetes documentation, I thought I could expose an application to the internal network under a DNS name using pod → service → ingress.
I installed ingress-nginx and am using it as the ingress class. I created the Deployment, Service, and Ingress with the following YAML:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app-container
          image: testcontainers/helloworld
          ports:
            - containerPort: 8099
```
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
  namespace: my-app
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8099
  type: ClusterIP
```
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: my-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: k8stest.myorganization.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80
```
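For reference, each link of that chain can be checked roughly like this (a sketch; the namespace and resource names match the manifests above, and the commands need a live cluster):

```shell
# Sketch of per-resource checks; names/namespace taken from the manifests above.
kubectl -n my-app get pods -o wide                 # is the pod Running?
kubectl -n my-app get endpoints my-app-service     # does the Service select the pod?
kubectl -n my-app describe ingress my-app-ingress  # is the backend resolved and an ADDRESS assigned?
```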
After applying this configuration I thought I could simply access the service via the DNS name k8stest.myorganization.com (the record is defined on my organization's internal DNS server).
After reading some more, I learned that a bare-metal setup requires a load balancer, so I installed MetalLB with the following configuration:
```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
    - node1IpAddress-node3IpAddress
  autoAssign: true
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default
```
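As I understand it, MetalLB's job here is to hand an external IP to the ingress-nginx controller Service, and the DNS record has to resolve to that IP rather than to a node. A rough way to check this (assuming a default ingress-nginx install, where the controller Service is named ingress-nginx-controller in the ingress-nginx namespace):

```shell
# Sketch, assuming the default ingress-nginx install (Service
# ingress-nginx-controller in namespace ingress-nginx).
kubectl -n ingress-nginx get svc ingress-nginx-controller
# EXTERNAL-IP should show an address from the MetalLB pool; <pending> means
# the Service is not type LoadBalancer or no pool address could be assigned.

# Bypass DNS and test the ingress directly (replace <EXTERNAL-IP>):
curl -H "Host: k8stest.myorganization.com" http://<EXTERNAL-IP>/
```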
I just want to create a DNS record and expose the containers through it on my local network. I can ping k8stest.myorganization.com, but I cannot access the service, and I could not find the reason why.
What is the missing part? How can I trace the errors?
Is there a guide for finding the root cause of this kind of problem?
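So far, the only tracing step I know of is tailing the controller logs while sending a request (a sketch; the deployment name assumes a default ingress-nginx install):

```shell
# Watch the ingress controller's access/error log while curling the host.
kubectl -n ingress-nginx logs deploy/ingress-nginx-controller -f --tail=20
```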
Thanks in advance.
Ihsan
Cluster information:
1 master and 3 worker nodes
```
NAME                   STATUS   ROLES           AGE   VERSION
k8s-master01-staging   Ready    control-plane   83d   v1.31.1
k8s-worker-01-staging  Ready    <none>          83d   v1.31.1
k8s-worker-02-staging  Ready    <none>          83d   v1.31.1
k8s-worker-03-staging  Ready    <none>          83d   v1.31.1
```
Kubernetes version:
Client Version: v1.31.1
Kustomize Version: v5.4.2
Server Version: v1.31.0
Cloud being used: bare-metal (virtual machine servers)
Installation method: following the official documentation.
Host OS: ubuntu 24.04
CNI and version: Calico (default configuration)
CRI and version: