Postgresql.conf not found when using volume mount

I have the following deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-persistent-volume-claim
      containers:
        - name: postgres
          image: prikshet/postgres
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
              subPath: postgres

When I include the volumeMounts section (the last four lines) and do kubectl apply, the pod gives an error and doesn't run, but when I remove those lines the pod runs. I want to create the volume mount with those lines. The error in the pod's logs is:

postgres: could not access the server configuration file "/var/lib/postgresql/data/postgresql.conf": No such file or directory

postgres persistent volume claim is:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

How to fix this?

The problem is that when you mount that volume claim, it takes the place of whatever the image had at the mountPath: the (initially empty) volume shadows the image's contents at that path.
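The same masking happens with any volume type. As a minimal illustration (the pod and image names here are hypothetical, not from your setup), mounting an emptyDir over a path hides the files the image shipped there:

```yaml
# Hypothetical demo pod: the emptyDir volume starts out empty, and mounting
# it at /etc/nginx shadows the configuration files baked into the image,
# so nginx fails to start for the same reason your postgres does.
apiVersion: v1
kind: Pod
metadata:
  name: shadow-demo
spec:
  volumes:
    - name: empty
      emptyDir: {}
  containers:
    - name: nginx
      image: nginx:latest
      volumeMounts:
        - name: empty
          mountPath: /etc/nginx   # image content at this path is no longer visible
```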

[protosam@nullhost]$ docker run --rm -it prikshet/postgres sh
$ ls -lah /var/lib/postgresql/data/postgresql.conf
-rw------- 1 postgres postgres 28K Aug  9 10:50 /var/lib/postgresql/data/postgresql.conf

Your options are:

  • use an init container to ensure the file is populated in the volume claim
  • use a ConfigMap that’s mounted at that path in addition to the volume claim

I personally would use a configmap.
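For the ConfigMap route, a sketch might look like this. The ConfigMap name, mount path, and file contents below are all hypothetical; the idea is to ship `postgresql.conf` separately from the data volume and point postgres at it, so the PVC mount over the data directory no longer matters:

```yaml
# Hypothetical sketch: ship postgresql.conf via a ConfigMap and point
# postgres at it, keeping the data directory on the PVC.
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config        # hypothetical name
data:
  postgresql.conf: |
    # your real postgresql.conf contents go here
    listen_addresses = '*'
---
# Then in the Deployment's pod spec (fragment):
#   volumes:
#     - name: postgres-config
#       configMap:
#         name: postgres-config
#   containers:
#     - name: postgres
#       args: ['-c', 'config_file=/etc/postgresql/postgresql.conf']
#       volumeMounts:
#         - name: postgres-config
#           mountPath: /etc/postgresql
```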

Instead of using a ConfigMap, I simply copy this file to another directory and run postgres with the configuration from that directory using -c config_file:

FROM postgres:latest
ARG VERSION

COPY deployment/postgres_init /docker-entrypoint-initdb.d
USER postgres
RUN initdb
CMD postgres -c hba_file=/docker-entrypoint-initdb.d/pg_hba.conf -c config_file=/docker-entrypoint-initdb.d/postgresql.conf

But now I see this error:

prikshetsharma@Prikshets-MacBook-Pro humboi % kubectl logs -f postgres-deployment-64cbffd6d8-r5wtn
2021-08-10 05:42:14.072 GMT [7] LOG:  skipping missing configuration file "/var/lib/postgresql/data/postgresql.auto.conf"
2021-08-10 05:42:14.073 UTC [7] FATAL:  data directory "/var/lib/postgresql/data" has wrong ownership
2021-08-10 05:42:14.073 UTC [7] HINT:  The server must be started by the user that owns the data directory.

How to fix this?

It's a permissions issue. The FATAL error tells you exactly which directory has the ownership problem, and the HINT tells you how to fix it.

This is actually a pretty basic Linux issue, and if you aren't familiar with how things like permissions work in Linux, I highly recommend studying it from a comprehensive guide like this one. Familiarity with Linux fundamentals is pretty much a prerequisite for being effective in Kubernetes too.

Protosam, the problem is that you cannot start postgres as root, because postgres doesn't allow that, and the directory seems to be owned by root when it's mounted but by the postgres user when it's not. So it seems like a Kubernetes problem: I don't know how to change the ownership of a mounted directory. Of course I know how to chmod a directory, but that doesn't seem to apply to a mounted one. Does that make sense?

That's not at all what I'm alluding to. I guess it might not be obvious exactly how to fix the perms, though: fix them with an init container.

Sorry btw, normally I’d drop a link to init containers but I’m on my phone right now.

This is what I’ve tried:

I still stand by init containers for this issue. You need to manipulate the data before the container that actually uses it is running.

You might also get some results by setting securityContext.fsGroup for the volume. The volume would still be owned by root as the user, though, and I don't know of a way to change that other than using an init container to run a script that corrects the permissions.
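For reference, fsGroup goes in the pod-level securityContext. A sketch, assuming GID 999 is the postgres group in your image (that's what the official postgres image uses; verify for yours):

```yaml
# Fragment of the Deployment's pod spec: fsGroup makes Kubernetes set the
# group of the mounted volume to this GID (and the setgid bit) at mount time.
spec:
  securityContext:
    fsGroup: 999   # postgres group in the official postgres image (assumption)
  containers:
    - name: postgres
      image: prikshet/postgres
```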

Could you tell me what exactly the init container config would look like in this case? I put the following:

        initContainers:
          - name: change-user
            image: busybox:latest
            command: ['chown', '-R', 'postgres', '/var/lib/postgresql/data']

But it gives the error:

Init Containers:
  change-user:
    Container ID:  docker://4ed2a6e03e62d39d4dbefb3e22f45fd56dde4af0a152417bd44d8be84c4b83b3
    Image:         busybox:latest
    Image ID:      docker-pullable://busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60
    Port:          <none>
    Host Port:     <none>
    Command:
      chown
      -R
      999
      /var/lib/postgresql/data
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 10 Aug 2021 21:16:19 -0700
      Finished:     Tue, 10 Aug 2021 21:16:19 -0700
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 10 Aug 2021 21:16:00 -0700
      Finished:     Tue, 10 Aug 2021 21:16:00 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7qfdp (ro)

and therefore the postgres pod doesn’t start. The log of the initContainer is:

chown: /var/lib/postgresql/data: No such file or directory

Also why didn’t the securityContext.runAsUser: 999 work?

The init container spec has the same format as any other container spec. You need to assign volumeMounts to every container that should see the volume, including init containers — that's why your chown failed with "No such file or directory": the init container never had the volume mounted.
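Concretely, that means repeating the volumeMounts (with the same subPath) on the init container. A sketch along these lines, assuming 999:999 is the postgres UID/GID in your image:

```yaml
# Fragment of the pod spec: the init container mounts the same volume and
# subPath as the postgres container, so the chown targets the PVC contents.
initContainers:
  - name: change-user
    image: busybox:latest
    command: ['chown', '-R', '999:999', '/var/lib/postgresql/data']
    volumeMounts:
      - name: postgres-storage
        mountPath: /var/lib/postgresql/data
        subPath: postgres
```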

For the command you can also do this; it's handy when you need to do several things to your volume mount.

command:
  - sh
  - -c
  - |
    #!/bin/sh
    chown -R user:group /path/or/something
    echo also do whatever else you need done in this multi-line script thing

The closest thing I've done that you can see for reference is probably my use of initContainers to prepare mariadb.