New to k8s; I've turned up a cluster and want to migrate a grip of containers into it.
I am running NFS for persistence from the controller - that works fine,
but I am having persistent, CRIPPLING issues with permissions.
Containers want to build a directory structure, chown & chmod,
but they are getting stopped by permissions constantly.
I have tried fsGroup, runAsUser, and runAsGroup (with 0 and with 1000, the actual owner).
That helps sometimes, but the most effective tool has been recursively chmodding the Other bits to 7 - over and over until all directories are created. Obviously this is pathetic.
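For reference, the pattern I keep running into in search results (unverified by me) is a one-shot init container that does the chown as root so the app container itself can run as 1000 - the volume name below matches my Portainer manifest further down, and my understanding is this still fails if the NFS server squashes root:

```yaml
# Sketch, not verified end-to-end: a root init container fixes ownership
# once, then the app container can drop to 1000 instead of running as root
initContainers:
  - name: fix-perms
    image: busybox:1.36
    command: ["sh", "-c", "chown -R 1000:1000 /data"]
    securityContext:
      runAsUser: 0                 # must not be squashed by the NFS export
    volumeMounts:
      - name: portainer-data       # same volume the app mounts at /data
        mountPath: /data
```

If that is the sanctioned approach, confirmation would be great.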
I am not running Helm etc., just kubectl with a gob of aliases and hand-written manifests.
I don't have a specific manifest to troubleshoot; I'm just looking for a best-practices guide
and just can't find anything - inb4 security concerns.
NFS:
server using nfs-kernel-server; clients using nfs-common (Ubuntu packages)
(server) /mnt/nfs (not a separately mounted device, just a directory on sda1)
owned by: nobody:nogroup (have also tried myself:myself)
permissions: 777 (get it working now, security later)
/etc/exports: /mnt/nfs/ 10.0.0.0/8(rw,no_subtree_check)
clients mount it via /etc/fstab under my home directory; fstab options: auto,nofail,noatime,nolock,tcp,actimeo=1800,_netdev
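One assumption I'm working from after more digging: NFS exports default to root_squash, which remaps root on a client to nobody:nogroup on the server - which would explain chown failing with "Operation not permitted" even when the container runs as root. The export variant I'm considering (security later, per above):

```
# /etc/exports - root_squash is the default; no_root_squash lets root
# inside a pod actually chown/chmod files on the export
/mnt/nfs 10.0.0.0/8(rw,no_subtree_check,no_root_squash)
```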
Manifests
I am using kind: StatefulSet for all of them - this may be an issue… please let me know.
The reason is I can get all the YAML I need in one file, and writing 3+ docs for one pod is heathen.
Per above, the NFS client mount lives in my user's (1000) home directory.
Using apiVersion: apps/v1, a LoadBalancer Service, and the default namespace.
I have tried setting up volumes with the nfs volume type (in the manifest), but switched to hostPath pointing at the NFS mount location - I know this is janky… I know.
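For reference, the inline nfs volume type I tried looked roughly like this (the server IP is a placeholder for my controller's address); it mounts straight from the kubelet, so it doesn't depend on the fstab mount existing on the node:

```yaml
# Roughly what I had - nfs volume type in the pod spec; note that mount
# options can't be set inline this way (that needs a PersistentVolume)
volumes:
  - name: portainer-data
    nfs:
      server: 10.0.0.10              # placeholder NFS server address
      path: /mnt/nfs/portainer/data
```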
The hostPath approach has given me the best results - MeTube and one other work without further tinkering,
but practically any other app I try will not have it (nor will anything else I've tried):
Portainer, ZoneMinder, Plex, NetBox, Home Assistant.
All of these work flawlessly in Docker but faceplant spectacularly in my k8s cluster.
Example Logs
This is Portainer - an example permissions issue, in case you have never seen one:
**** Permissions could not be set. This is probably because your volume mounts are remote or read-only. ****
**** The app may not work properly and we will not provide support for it. ****
chown: changing ownership of '/config/Library': Operation not permitted
The Ask
Not looking to troubleshoot one issue - I've had to make a ton of assumptions about how this is done, and I could not find a walkthrough that really covered all the bases.
Expecting the request, I've pasted an example manifest below - again, looking for some best practices that will generally resolve these permissions issues.
Cluster information:
Kubernetes version: 1.29.6
Cloud being used: Proxmox (virtualized)(local)(in the other room)
Installation method: apt package
Host OS: Ubuntu 24.04, same issue on 22.[something]
CNI and version: flannel 0.25.4
CRI and version:
(sorry - Ctrl+Shift+C brings up UI magic instead of copying)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: portainer
  namespace: default
spec:
  serviceName: portainer
  replicas: 1
  selector:
    matchLabels:
      service: portainer
  template:
    metadata:
      labels:
        service: portainer
    spec:
      securityContext:
        fsGroup: 0
        runAsUser: 0
        runAsGroup: 0
      containers:
        - name: portainer
          image: portainer/portainer-ce:latest
          ports:
            - containerPort: 8000
              name: portainer-port
            - containerPort: 9443
              name: portainer-443
          volumeMounts:
            - name: portainer-sock
              mountPath: /var/run/docker.sock
            - name: portainer-data
              mountPath: /data
      volumes:
        - name: portainer-sock
          hostPath:
            path: /home/nimn/nfs/portainer/sock
        - name: portainer-data
          hostPath:
            path: /home/nimn/nfs/portainer/data
---
apiVersion: v1
kind: Service
metadata:
  name: portainer
  namespace: default
spec:
  type: LoadBalancer
  selector:
    service: portainer
  ports:
    - port: 8000
      targetPort: 8000
      nodePort: 30087
      name: portainer-port
    - port: 9443
      targetPort: 9443
      nodePort: 30447
      name: portainer-443
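For completeness, here is the route I haven't tried yet but gather is the "proper" one - a static NFS PersistentVolume plus a claim, instead of hostPath (the names, size, and server IP are placeholders of mine):

```yaml
# Untried sketch: static NFS PersistentVolume + claim instead of hostPath
apiVersion: v1
kind: PersistentVolume
metadata:
  name: portainer-data-pv
spec:
  capacity:
    storage: 5Gi                 # placeholder size
  accessModes:
    - ReadWriteMany
  mountOptions:                  # roughly mirrors my fstab options
    - nolock
    - noatime
  nfs:
    server: 10.0.0.10            # placeholder NFS server address
    path: /mnt/nfs/portainer/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: portainer-data           # referenced from the pod spec via
  namespace: default             # persistentVolumeClaim.claimName
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""           # empty string pins it to a static PV
  resources:
    requests:
      storage: 5Gi
```

If that plus no_root_squash (or a fix-perms init container) is the standard recipe, that's exactly the kind of walkthrough I'm after.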