I have a local Kubernetes cluster up and running on my home server. It consists of 1 master and 3 worker nodes, all running as KVM virtual machines on Ubuntu Server 19.10. I deployed 2 pods, 2 services and 2 ingress definitions. The problem is that the pods are accessible by URL only on the nodes on which they are deployed. I would like them to be deployed on only some of the nodes, but accessible from all nodes. What is the right way to do this? I found out that one option is a DaemonSet, but that way there would be 1 pod per node, which I would rather avoid. What are some other options?
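To make the goal concrete, something like the sketch below is roughly what I have in mind: a Deployment pinned to a subset of nodes via a nodeSelector (the node-role=web label is just a made-up example), while the ClusterIP Service in front of it should stay reachable from every node.

# Sketch only -- assumes the target nodes have been labelled first, e.g.:
#   kubectl label node worker-1 node-role=web
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apple-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: apple
  template:
    metadata:
      labels:
        app: apple
    spec:
      nodeSelector:
        node-role: web          # only schedule onto labelled nodes
      containers:
        - name: apple-app
          image: hashicorp/http-echo
          args:
            - "-text=apple"

For reference, my current manifests are: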
kind: Pod
apiVersion: v1
metadata:
  name: apple-app
  labels:
    app: apple
spec:
  containers:
    - name: apple-app
      image: hashicorp/http-echo
      args:
        - "-text=apple"
---
kind: Service
apiVersion: v1
metadata:
  name: apple-service
spec:
  selector:
    app: apple
  ports:
    - port: 5678 # Default port for image
---
kind: Pod
apiVersion: v1
metadata:
  name: banana-app
  labels:
    app: banana
spec:
  containers:
    - name: banana-app
      image: hashicorp/http-echo
      args:
        - "-text=banana"
---
kind: Service
apiVersion: v1
metadata:
  name: banana-service
spec:
  selector:
    app: banana
  ports:
    - port: 5678 # Default port for image
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /apple
            backend:
              serviceName: apple-service
              servicePort: 5678
          - path: /banana
            backend:
              serviceName: banana-service
              servicePort: 5678
Update: it turned out to be Flannel's fault. The Ansible script I used to bootstrap the k8s cluster had the wrong pod network CIDR set.
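For anyone who runs into the same thing: the pod network CIDR handed to kubeadm at bootstrap has to match what Flannel is configured with. The stock kube-flannel.yml defaults to 10.244.0.0/16 (the exact flag depends on how your playbook invokes kubeadm):

# kubeadm must be told the same pod network CIDR that Flannel expects
kubeadm init --pod-network-cidr=10.244.0.0/16

# matching net-conf.json from the kube-flannel ConfigMap (default manifest)
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }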