Helm won't create PV and PVC for Prometheus


#1

I am trying to create a Prometheus Helm chart with PersistentVolumes backed by EBS. However, for some reason it only creates the PV and PVC for Alertmanager and skips them for the Prometheus server.

My configs look like this:
alertmanager-pvc.yaml

{{- if not .Values.alertmanager.statefulSet.enabled -}}
{{- if and .Values.alertmanager.enabled .Values.alertmanager.persistentVolume.enabled -}}
{{- if not .Values.alertmanager.persistentVolume.existingClaim -}}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  {{- if .Values.alertmanager.persistentVolume.annotations }}
  annotations:
{{ toYaml .Values.alertmanager.persistentVolume.annotations | indent 4 }}
  {{- end }}
  labels:
    {{- include "prometheus.alertmanager.labels" . | nindent 4 }}
  name: {{ template "prometheus.alertmanager.fullname" . }}
spec:
  accessModes:
{{ toYaml .Values.alertmanager.persistentVolume.accessModes | indent 4 }}
{{- if .Values.alertmanager.persistentVolume.storageClass }}
{{- if (eq "aws" .Values.alertmanager.persistentVolume.storageClass) }}
  storageClassName: "gp2"
{{- else }}
  storageClassName: "{{ .Values.alertmanager.persistentVolume.storageClass }}"
{{- end }}
{{- end }}
  resources:
    requests:
      storage: "{{ .Values.alertmanager.persistentVolume.size }}"
{{- end -}}
{{- end -}}
{{- end -}}

alertmanager-pv.yaml

{{- if not .Values.alertmanager.statefulSet.enabled -}}
{{- if and .Values.alertmanager.enabled .Values.alertmanager.persistentVolume.enabled -}}
apiVersion: v1
kind: PersistentVolume
metadata:
  {{- if .Values.alertmanager.persistentVolume.annotations }}
  annotations:
{{ toYaml .Values.alertmanager.persistentVolume.annotations | indent 4 }}
  {{- end }}
  labels:
    {{- include "prometheus.alertmanager.labels" . | nindent 4 }}
  name: {{ template "prometheus.alertmanager.fullname" . }}
spec:
  capacity:
    storage: "{{ .Values.alertmanager.persistentVolume.size }}"
  PersistentVolumeReclaimPolicy: "{{ .Values.alertmanager.persistentVolume.ReclaimPolicy }}"
  accessModes:
{{ toYaml .Values.alertmanager.persistentVolume.accessModes | indent 4 }}
{{- if .Values.alertmanager.persistentVolume.storageClass }}
{{- if (eq "aws" .Values.alertmanager.persistentVolume.storageClass) }}
  storageClassName: "gp2"
  awsElasticBlockStore:
    fsType: "ext4"
    volumeID: "{{ .Values.alertmanager.persistentVolume.volumeID }}"
{{- if (eq "nfs" .Values.alertmanager.persistentVolume.storageClass) }}
  StorageClassName: "nfs"
    server: "{{ .Values.alertmanager.persistentVolume.nfs.server }}
    mountOptions:
      {{- range .Values.alertmanager.persistentVolume.nfs.options }}
      - {{ . }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end -}}
{{- end -}}

server-pv.yaml

{{- if not .Values.server.statefulSet.enabled -}}
{{- if and .Values.server.enabled .Values.server.persistentVolume.enabled -}}
apiVersion: v1
kind: PersistentVolume
metadata:
  {{- if .Values.server.persistentVolume.annotations }}
  annotations:
{{ toYaml .Values.server.persistentVolume.annotations | indent 4 }}
  {{- end }}
  labels:  
    {{- include "prometheus.server.labels" . | nindent 4 }}
  name: {{ template "prometheus.server.fullname" . }}
spec:
  capacity:
    storage: "{{ .Values.server.persistentVolume.size }}"
  PersistentVolumeReclaimPolicy: "{{ .Values.server.persistentVolume.ReclaimPolicy }}"
  accessModes:
{{ toYaml .Values.server.persistentVolume.accessModes | indent 4 }}
{{- if .Values.server.persistentVolume.storageClass }}
{{- if (eq "aws" .Values.server.persistentVolume.storageClass) }}
  storageClassName: "gp2"
  awsElasticBlockStore:
    fsType: "ext4"
    volumeID: "{{ .Values.server.persistentVolume.volumeID }}"
{{- if (eq "nfs" .Values.server.persistentVolume.storageClass) }}
  StorageClassName: "nfs"
    server: "{{ .Values.server.persistentVolume.nfs.server }}
    mountOptions:
      {{- range .Values.server.persistentVolume.nfs.options }}
      - {{ . }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end -}}
{{- end -}}

server-pvc.yaml

{{- if not .Values.server.statefulSet.enabled -}}
{{- if and .Values.server.enabled .Values.server.persistentVolume.enabled -}}
{{- if not .Values.server.persistentVolume.existingClaim -}}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  {{- if .Values.server.persistentVolume.annotations }}
  annotations:
{{ toYaml .Values.server.persistentVolume.annotations | indent 4 }}
  {{- end }}
  labels:
    {{- include "prometheus.server.labels" . | nindent 4 }}
  name: {{ template "prometheus.server.fullname" . }}
spec:
  accessModes:
{{ toYaml .Values.server.persistentVolume.accessModes | indent 4 }}
{{- if .Values.server.persistentVolume.storageClass }}
{{- if (eq "aws" .Values.server.persistentVolume.storageClass) }}
  storageClassName: "gp2"
{{- else }}
  storageClassName: "{{ .Values.server.persistentVolume.storageClass }}"
{{- end }}
{{- end }}
  resources:
    requests:
      storage: "{{ .Values.server.persistentVolume.size }}"
{{- end -}}
{{- end -}}
{{- end -}}
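For reference, these templates consult roughly the following values keys (the key paths mirror the .Values lookups in the templates above; the sizes, storage class, and volume IDs are illustrative assumptions):

```yaml
# Illustrative values.yaml fragment — key paths mirror the .Values lookups
# in the templates above; sizes, storage class, and volume IDs are assumptions.
alertmanager:
  enabled: true
  statefulSet:
    enabled: false
  persistentVolume:
    enabled: true
    size: 4Gi
    accessModes:
      - ReadWriteOnce
    storageClass: aws                # "aws" is mapped to storageClassName "gp2" above
    volumeID: vol-0123456789abcdef0  # hypothetical EBS volume ID
    ReclaimPolicy: Retain
server:
  enabled: true                      # note: the server templates also check this key
  statefulSet:
    enabled: false
  persistentVolume:
    enabled: true
    size: 8Gi
    accessModes:
      - ReadWriteOnce
    storageClass: aws
    volumeID: vol-0abcdef1234567890  # hypothetical EBS volume ID
    ReclaimPolicy: Retain
```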

kubectl describe for the Prometheus server pod says:

Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  31s (x2 over 31s)  default-scheduler  persistentvolumeclaim "prometheus-prometheus" not found


#2

If you take a look at the PVCs in that namespace, what is the status of the "prometheus-prometheus" claim?


#3

It doesn’t create it. That’s the thing:

kubectl get pv -n monitoring
NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                STORAGECLASS   REASON   AGE
prometheus-alertmanager   4Gi        RWO            Retain           Bound    monitoring/prometheus-alertmanager   gp2                     32s

kubectl get pvc -n monitoring
NAME                      STATUS   VOLUME                    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
prometheus-alertmanager   Bound    prometheus-alertmanager   4Gi        RWO            gp2            39s

kubectl get pods --all-namespaces
monitoring    prometheus-alertmanager-596b7b5c5-fnbhv                      1/2     Running   0          8s
monitoring    prometheus-kube-state-metrics-6f577ff78f-7bf5w               1/1     Running   0          8s
monitoring    prometheus-node-exporter-hj5g9                               1/1     Running   0          8s
monitoring    prometheus-prometheus-6c56b547b7-6fgsc                       0/2     Pending   0          7s
monitoring    prometheus-pushgateway-7d6fd78f7d-mrs58                      0/1     Running   0          7s

kubectl describe pod prometheus-prometheus-6c56b547b7-6fgsc -n monitoring
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  4m13s (x2 over 4m13s)  default-scheduler  persistentvolumeclaim "prometheus-prometheus" not found

#4

I am assuming helm install isn’t giving any errors?

You can run helm template mychart to render the chart locally and inspect whether the YAML is being generated correctly.
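If the guards all pass, the rendered output should contain a claim for the server, something like the following for server-pvc.yaml (the claim name prometheus-prometheus is taken from the scheduling event above; the labels and size are illustrative):

```yaml
# Roughly the expected render of server-pvc.yaml (labels and size are assumed)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-prometheus
  labels:
    app: prometheus
    component: server
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: 8Gi
```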


#5

Good point. Neither server-pv.yaml nor server-pvc.yaml is rendered (their content is missing from the output), even though both files appear in the sources list. What could be the reason for that?


#6

In the past I’ve had typos that caused some of the templates to be ignored, or included where they should not be. If you check which templates the helm template command renders, you might catch where the PVs are being missed. If they are not being rendered at all, something in the if statements may be failing.
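For what it's worth, server-pv.yaml above has a few suspects of exactly that kind: PersistentVolumeReclaimPolicy and StorageClassName are miscapitalized (Kubernetes expects persistentVolumeReclaimPolicy and storageClassName), the nfs branch is nested inside the aws branch so it can never match, and the nfs server: line is missing its closing quote. A corrected sketch (same values keys as the original; nfs.path is an assumed key, since an NFS PV requires a path):

```yaml
{{- if not .Values.server.statefulSet.enabled -}}
{{/* the original also checked .Values.server.enabled; keep that only if the key actually exists in your values file */}}
{{- if .Values.server.persistentVolume.enabled -}}
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ template "prometheus.server.fullname" . }}
  labels:
    {{- include "prometheus.server.labels" . | nindent 4 }}
spec:
  capacity:
    storage: "{{ .Values.server.persistentVolume.size }}"
  persistentVolumeReclaimPolicy: "{{ .Values.server.persistentVolume.ReclaimPolicy }}"
  accessModes:
{{ toYaml .Values.server.persistentVolume.accessModes | indent 4 }}
{{- if eq "aws" .Values.server.persistentVolume.storageClass }}
  storageClassName: "gp2"
  awsElasticBlockStore:
    fsType: "ext4"
    volumeID: "{{ .Values.server.persistentVolume.volumeID }}"
{{- else if eq "nfs" .Values.server.persistentVolume.storageClass }}
  storageClassName: "nfs"
  nfs:
    server: "{{ .Values.server.persistentVolume.nfs.server }}"
    path: "{{ .Values.server.persistentVolume.nfs.path }}"
  mountOptions:
    {{- range .Values.server.persistentVolume.nfs.options }}
    - {{ . }}
    {{- end }}
{{- end }}
{{- end -}}
{{- end -}}
```

The same fixes would apply to alertmanager-pv.yaml, which has the identical structure.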


#7

Both the alertmanager and server templates have the same if conditions; one is a copy-and-paste of the other, with changes like:
.Values.alertmanager.persistentVolume.enabled
to
.Values.server.persistentVolume.enabled


#8

Yeah, they look good; it might just be something in the values file that’s causing them to be skipped.
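One common way a template gets silently skipped is a guard that references a values key which does not exist. For example, if the values file defines alertmanager.enabled but no server.enabled key (purely hypothetical here), the guard in server-pv.yaml and server-pvc.yaml evaluates to false and both files render empty:

```yaml
# Hypothetical values.yaml layout that silently disables the server templates
alertmanager:
  enabled: true            # .Values.alertmanager.enabled → true
  persistentVolume:
    enabled: true
server:
  # no "enabled" key here: .Values.server.enabled is nil, which is falsy,
  # so {{- if and .Values.server.enabled .Values.server.persistentVolume.enabled -}}
  # skips the whole file without any error from helm
  persistentVolume:
    enabled: true
```

helm template renders such a file as an empty document, which matches the symptom in #5: the file shows up in the source list but contributes no content.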