I deployed Logstash as a StatefulSet with 3 replicas in Kubernetes and use Filebeat to send data to it.
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: logstash-nginx
spec:
  serviceName: "logstash"
  selector:
    matchLabels:
      app: logstash
  updateStrategy:
    type: RollingUpdate
  replicas: 3
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:7.10.0
          resources:
            limits:
              memory: 2Gi
          ports:
            - containerPort: 5044
          volumeMounts:
            - name: config-volume
              mountPath: /usr/share/logstash/config
            - name: logstash-pipeline-volume
              mountPath: /usr/share/logstash/pipeline
          command: ["/bin/sh", "-c"]
          args:
            - bin/logstash -f /usr/share/logstash/pipeline/logstash.conf;
      volumes:
        - name: config-volume
          configMap:
            name: logstash-configmap
            items:
              - key: logstash.yml
                path: logstash.yml
        - name: logstash-pipeline-volume
          configMap:
            name: logstash-configmap
            items:
              - key: logstash.conf
                path: logstash.conf
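The StatefulSet mounts a logstash-configmap that is not shown here. For context, a minimal sketch of what it might contain (the beats input on 5044 matches the containerPort above; the stdout output is just an assumption for illustration):

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
data:
  logstash.yml: |
    http.host: "0.0.0.0"
  logstash.conf: |
    # Beats input matching containerPort 5044 in the StatefulSet above
    input {
      beats {
        port => 5044
      }
    }
    # Placeholder output; the real pipeline presumably ships data elsewhere
    output {
      stdout { codec => rubydebug }
    }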
Logstash’s Service:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: logstash
  name: logstash
spec:
  ports:
    - name: "5044"
      port: 5044
      targetPort: 5044
  selector:
    app: logstash
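For reference, a headless variant of this Service is sketched below (clusterIP: None makes DNS return the individual pod IPs instead of a single virtual IP, and it also lets the StatefulSet's serviceName: "logstash" publish stable per-pod hostnames such as logstash-nginx-0.logstash.default.svc.cluster.local):

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: logstash
  name: logstash
spec:
  clusterIP: None  # headless: DNS resolves to pod IPs, per-pod hostnames are published
  ports:
    - name: "5044"
      port: 5044
      targetPort: 5044
  selector:
    app: logstash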
Filebeat’s DaemonSet ConfigMap:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    ...
    output.logstash:
      hosts: ["logstash.default.svc.cluster.local:5044"]
      loadbalance: true
      bulk_max_size: 1024
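Note that Filebeat's loadbalance option only balances across the entries listed in hosts, while kube-proxy balances whole TCP connections rather than individual events, so a single long-lived connection to the Service address tends to stick to one pod. A sketch of listing the per-pod StatefulSet hostnames instead (these names assume a headless logstash Service and the default namespace):

output.logstash:
  hosts:
    - "logstash-nginx-0.logstash.default.svc.cluster.local:5044"
    - "logstash-nginx-1.logstash.default.svc.cluster.local:5044"
    - "logstash-nginx-2.logstash.default.svc.cluster.local:5044"
  loadbalance: true
  bulk_max_size: 1024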
When running with real data, most of it goes to the second Logstash pod. Occasionally some data reaches the first and third pods, but very little.
Is this a Kubernetes usage issue? How can I distribute traffic evenly across these 3 pods in Kubernetes? Would an internal load balancer be a good choice (this runs on GKE)? For example:
---
apiVersion: v1
kind: Service
metadata:
  name: logstash
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app: logstash
spec:
  type: LoadBalancer
  selector:
    app: logstash
  ports:
    - name: "5044"
      port: 5044
      targetPort: 5044
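Even behind a load balancer, Beats connections are long-lived and can still pin to one backend. Filebeat's Logstash output also has a ttl option that forces periodic reconnects (it only takes effect when pipelining is disabled); a sketch:

output.logstash:
  hosts: ["logstash.default.svc.cluster.local:5044"]
  loadbalance: true
  ttl: 60s       # re-establish connections periodically so they can move to other pods
  pipelining: 0  # ttl is ignored unless pipelining is disabled
  bulk_max_size: 1024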