I’m trying to expose a Kubernetes service (type: LoadBalancer) for a small pod deployed to a single-master cluster in AWS: one master and two worker nodes.
Deployment manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
  namespace: test
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: httpd:2.4.39-alpine
        imagePullPolicy: IfNotPresent
        name: httpd
        ports:
        - containerPort: 80
          protocol: TCP
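In case it helps with debugging, this is a sketch of how the replicas behind the deployment can be checked (assuming kubectl is configured on the master):

```shell
# List the pods backing the deployment; all four replicas should be
# Running, and -o wide shows which worker node each one landed on.
kubectl -n test get pods -l app=web -o wide

# Confirm the rollout itself completed.
kubectl -n test rollout status deployment/web
```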
Service manifest
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web
  namespace: test
spec:
  ports:
  - name: "80"
    nodePort: 30252
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - hostname: xxxxxxxxxxx.us-west-2.elb.amazonaws.com
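Since the status block shows an ELB hostname, AWS did provision the load balancer. A quick sketch of how to confirm the service actually has backends (again assuming kubectl access):

```shell
# The ENDPOINTS column should list the pod IPs on port 80;
# if it is empty, the selector is not matching any ready pods.
kubectl -n test get endpoints web

# describe shows the NodePort, the endpoints, and any events on the service.
kubectl -n test describe service web
```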
When I open xxxxxxxxxxx.us-west-2.elb.amazonaws.com in the browser, the page doesn’t load. The error given is:
ERR_EMPTY_RESPONSE
I also tried the following from a shell on the master node as well as on the worker nodes:
curl <nodeIP>:30252
curl xxxxxxxxxxx.us-west-2.elb.amazonaws.com:80
curl xxxxxxxxxxx.us-west-2.elb.amazonaws.com:30252
curl <clusterIP>:80
curl <clusterIP>:30252
None of these work; they give no response at all.
So I tried
localhost:30252
after logging in to the respective worker node, and that works.
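Since localhost:30252 responds on the worker itself but the same port is unreachable from outside, one thing I plan to check is whether the nodes’ AWS security group allows inbound traffic on the NodePort. A sketch of opening the port with the AWS CLI (sg-xxxxxxxx is a placeholder for the workers’ security group; 0.0.0.0/0 is only for testing):

```shell
# Allow inbound TCP on the NodePort from anywhere (test only --
# in practice this should be restricted, e.g. to the ELB's security group).
aws ec2 authorize-security-group-ingress \
  --group-id sg-xxxxxxxx \
  --protocol tcp \
  --port 30252 \
  --cidr 0.0.0.0/0
```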
I’m new to Kubernetes and would appreciate it if anyone could tell me why I can’t access the service using either the NodePort or the LoadBalancer hostname. I think the request reaches the Kubernetes service but doesn’t get any further.
Thanks in advance.
Cluster information:
Kubernetes version: v1.19.2
Cloud being used: AWS
Installation method:
Host OS: Ubuntu 18.04.5 LTS