New to k8s - nfs trouble

I have recently been working on learning Kubernetes. I have a home lab to play with: 3 VMs, 1 master and 2 nodes.

I am trying to set up a few deployments for some things I already have running at home, like Pi-hole.

One thing I have been fighting with is the persistence. I have a PV set up.

I have set up the following:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: nfs-skooge
  labels:
    type: nfs
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-dynamic
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    path: /mnt/vol/dataset1/Servers/kube/pihole/
    server: 192.168.2.52

When I check the state, the volume is there.
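
The check was something along the lines of:

kubectl get pv nfs-skooge
kubectl describe pv nfs-skooge   # shows capacity, access modes, reclaim policy, and claim binding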

I would like to use this volume for persistence on a bunch of different deployments.

So I have a PVC set up:

jbrunk@c1m1:~$ kubectl get pvc
NAME           STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pihole-state   Bound    nfs-skooge   100Gi      RWX            nfs-dynamic    2d2h
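
The claim manifest itself isn't pasted here, but a PVC that binds a pre-created NFS PV like the one above generally looks something like this sketch (names match the output):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pihole-state
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: nfs-dynamic
  resources:
    requests:
      storage: 100Gi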

Both the PV and the PVC say RWX. However, when I run my deployment, my container throws an error saying that the volume is read-only.

The NFS server is a FreeNAS box. I have other systems mounting to the same FreeNAS box with no issues.

Any recommendations?

Thanks from the newbie!

Do you have a StorageClass creating the persistent volume on the FreeNAS server?

It seems as if k8s is seeing things correctly. Have you taken a look to see what the permissions are for the path on the FreeNAS server?
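
For example, from one of the worker nodes you could sanity-check the export directly (a sketch; it assumes nfs-common is installed, and /mnt/test is just a scratch mount point):

showmount -e 192.168.2.52          # confirm the export and its allowed networks
sudo mkdir -p /mnt/test
sudo mount -t nfs -o nfsvers=4.1 192.168.2.52:/mnt/vol/dataset1/Servers/kube/pihole /mnt/test
sudo touch /mnt/test/write-test    # fails if the export is read-only or maproot squashes root
sudo rm /mnt/test/write-test
sudo umount /mnt/test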

Here is my PV

kind: PersistentVolume
apiVersion: v1
metadata:
  name: nfs-skooge
  labels:
    type: nfs
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-dynamic
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    path: /mnt/vol/dataset1/Servers/kube/pihole/
    server: 192.168.2.52

That is my definition. What is strange is that when I try to run the deployment, it says the filesystem is read-only, but a file does get created in the correct location. So maybe I have a mapping incorrect?

Here is my deployment file.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: pihole
  name: pihole
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pihole
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: pihole
      name: pihole
    spec:
      containers:
      - image: diginc/pi-hole:latest
        name: pihole
        imagePullPolicy: Always
        env:
        - name: WEBPASSWORD
          valueFrom:
            secretKeyRef:
              name: pihole-admin
              key: adminpw
        - name: ServerIP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        ports:
        - containerPort: 53
          protocol: UDP
        - containerPort: 80
          protocol: TCP
        volumeMounts:
        - name: pihole-state
          mountPath: /etc/pihole
        - name: pihole-config
          mountPath: /etc/dnsmasq.d/
          subPath: 01-pihole.conf
        - name: pihole-config
          mountPath: /etc/lighttpd/
          subPath: external.conf
        livenessProbe:
          tcpSocket:
            port: 53
          initialDelaySeconds: 60
          periodSeconds: 30
        readinessProbe:
          tcpSocket:
            port: 53
          initialDelaySeconds: 30
          periodSeconds: 10
      volumes:
      - name: pihole-state
        persistentVolumeClaim:
          claimName: pihole-state
      - name: pihole-config
        configMap:
          name: pihole-config
      restartPolicy: Always

Hmmm, that formatting didn't stick. (Side note: is there a better way to show my configs so that the formatting stays?)

My log shows

jbrunk@c1m1:~$ kubectl logs pihole-545c6844b8-qgnv4
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] 01-resolver-resolv: applying...
[fix-attrs.d] 01-resolver-resolv: exited 0.
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 20-start.sh: executing...
::: Starting docker specific setup for docker diginc/pi-hole
+ [[ piholerules == '' ]]
+ pihole -a -p piholerules piholerules
  [✓] New password set
cp: cannot create regular file ‘/etc/dnsmasq.d/01-pihole.conf’: Read-only file system
[cont-init.d] 20-start.sh: exited 1.
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] syncing disks.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.

Ahhhh, I may be an idiot. pihole-config is a ConfigMap.

I have the ConfigMap there:

jbrunk@c1m1:~$ kubectl describe configmap
Name:         pihole-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
pihole-configmap.properties:
----

Events:  <none>

Is there something I need to change to allow that info to be stored in the ConfigMap? Or, since it's trying to create a full file, do I need to change it so that the files get stored on the PV instead of the ConfigMap?

If you wrap your code in ``` like in Slack, you'll keep your preformatted text. Looking over your spec now to make sure I understand what's going on.


So you’re just trying to mount some files into the container?

If so, you can load those files into a ConfigMap like so: kubectl create configmap <nameofconfigmap> --from-file <file1> --from-file <file2>. From there you can mount them directly into the path; here is a good resource: link

apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "ls /etc/config/" ]
      volumeMounts:
      - name: config-volume
        mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: special-config
  restartPolicy: Never

You may also want to try removing the subPath arg for the config files; that might be causing the issue, as it creates a folder by that name. Volumes - Kubernetes
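
For example, for the dnsmasq config that part of the spec would just be the following (a rough sketch; note that a ConfigMap mounted this way replaces the whole directory with the ConfigMap's keys):

volumeMounts:
- name: pihole-config
  mountPath: /etc/dnsmasq.d/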

Let me know if that makes sense or if I misread the problem.

With the ConfigMap, the container I am using (pihole) looks like it creates the files on the first run, so they wouldn't exist yet.

Is the ConfigMap not updatable? Maybe in my case it would make more sense to just use a standard storage volume for the configs instead of a ConfigMap?

Ya, you have to delete the ConfigMap to update it or do a weird trick. So you can try the volume and see if that works.
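
One way to try that is to back those paths with the existing claim instead of the ConfigMap (a sketch; the dnsmasq.d subPath directory is hypothetical and reuses the pihole-state claim from your deployment):

# container level
volumeMounts:
- name: pihole-state
  mountPath: /etc/pihole
- name: pihole-state
  mountPath: /etc/dnsmasq.d
  subPath: dnsmasq.d           # hypothetical subdirectory on the NFS export
# pod level
volumes:
- name: pihole-state
  persistentVolumeClaim:
    claimName: pihole-state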

I have made some progress. I got past the read-only thing and actually got my container to run.

However, here is my current dilemma.

Here is the image I am using.
https://hub.docker.com/r/pihole/pihole/

If you look, it says you need to pass in localhost and a backup server to insert into resolv.conf.

dns:
      - 127.0.0.1
      - 1.1.1.1

But that's where I have run into issues. Apparently resolv.conf is auto-configured by the pod, so I was trying to set "dnsPolicy" so I could override it. However, when I try to deploy now:

jbrunk@c1m1:~$ kubectl create -f pihole.yaml
error: error validating "pihole.yaml": error validating data: ValidationError(Deployment.spec.template.spec.containers[0]): unknown field "dnsPolicy" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false

It doesn’t know what dnsPolicy is.

If I change my apiVersion from extensions/v1beta1 to v1, it doesn't recognize the kind "Deployment":

jbrunk@c1m1:~$ kubectl create -f pihole.yaml
error: unable to recognize "pihole.yaml": no matches for kind "Deployment" in version "v1"

Any thoughts or suggestions?

Are you using just v1 for the Deployment apiVersion? If so, try apps/v1.
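
For reference, Deployment lives in the apps API group, so the top of the manifest would be:

apiVersion: apps/v1
kind: Deployment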

apps/v1 did the same :frowning:

Should I open this now as a different topic?

The ConfigMap appears to be a read-only volume. That was the volume throwing the error, not NFS.


It’s a silly typo, I think.

The DNS policy is not inside the containers field. It’s at the podSpec. See the reference API: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/#podspec-v1-core

See the other fields that are there; they should all have the same indentation. So the DNS policy thing goes at

Deployment.spec.template.spec.dnsPolicy
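
For example, if you want the pod to use itself plus the backup resolver from the image docs, it might look something like this (a sketch, assuming your cluster supports dnsConfig; dnsPolicy "None" tells the kubelet to build resolv.conf only from dnsConfig):

spec:
  template:
    spec:                  # pod spec: same level as containers and volumes
      dnsPolicy: "None"    # resolv.conf comes only from dnsConfig below
      dnsConfig:
        nameservers:
        - 127.0.0.1        # pihole itself
        - 1.1.1.1          # backup resolver, per the image docs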

I feel I’m not explaining myself correctly. Let me know and I can try to clarify :slight_smile:

Hope this helps!


Nope, you were right. I had that field in the wrong part. I was eventually able to get it to take, and now it works great!!! :slight_smile:

Awesome! :slight_smile:

volume becomes unmounted and gluster path changes?

? Not sure what you are referring to.