How to ensure NFS PV is mounted with correct uid/gid permissions?

Cluster information:

Kubernetes version: Server Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.4", GitCommit:"fa3d7990104d7c1f16943a67f11b154b71f6a132", GitTreeState:"clean", BuildDate:"2023-07-19T12:14:49Z", GoVersion:"go1.20.6", Compiler:"gc", Platform:"linux/amd64"}
Cloud being used: Bare Metal (talos.dev)
Installation method: talos.dev
Host OS: Talos
Tag: v1.4.7
SHA: a1ee7612
Built:
Go version: go1.20.6
OS/Arch: linux/amd64
Enabled: RBAC

I’m planning to expose an NFS share from my Synology NAS to my Kubernetes cluster, but I’m running into permission issues. My expectation was that setting the pod’s fsGroup would at least set the mount point’s group to that value, but instead the volume is mounted as root:root, so a process running as non-root (which is a requirement) cannot access it:

bash-5.2$ ls -ltah | grep vol
d---------    1 root     root          20 Aug 22 11:37 vol
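To rule out the securityContext itself, I ran these checks from inside the pod (standard kubectl plus busybox utilities; the deployment name matches my manifest below). The process uid/gid come back as expected, so the problem appears to be only the mount ownership:

```shell
# Verify the container really runs with the uid/gid from the securityContext
kubectl exec -n nfs-test deploy/nfs-test -- id

# Inspect the mount flags the kubelet used for the NFS volume
kubectl exec -n nfs-test deploy/nfs-test -- sh -c 'mount | grep /vol'
```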

Here is my test deploy/pvc/pv:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-test
  namespace: nfs-test
spec:
  selector:
    matchLabels:
      app: nfs-test
  replicas: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nfs-test
    spec:
      volumes:
        - name: nfs-test
          persistentVolumeClaim:
            claimName: nfs-test
      containers:
        - name: bash
          image: bash
          command: ["tail", "-f", "/dev/null"]
          volumeMounts:
            - name: nfs-test
              mountPath: /vol
              subPath: ''
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ['ALL']
      securityContext:
        fsGroup: 1001
        supplementalGroups: [1001]
        runAsUser: 1001
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-test
  labels:
    app: nfs-test
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /volume1/kube-vols
    server: <server_host>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test
spec:
  selector:
    matchLabels:
      app: nfs-test
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
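For completeness, the only workaround I’ve seen suggested is a root initContainer that chowns the mount before the main container starts, which conflicts with the non-root requirement above. A sketch of that approach (illustrative only, not something I’ve deployed; the busybox image is an assumption, and the uid/gid and /vol path mirror my deployment):

```yaml
# Illustrative fragment: would go under the pod template spec of the
# deployment above. Runs as root, so it contradicts the runAsNonRoot
# requirement stated earlier and overrides the pod-level securityContext.
initContainers:
  - name: fix-perms
    image: busybox   # assumed image; any shell-capable image works
    command: ["sh", "-c", "chown -R 1001:1001 /vol && chmod -R g+rwX /vol"]
    volumeMounts:
      - name: nfs-test
        mountPath: /vol
    securityContext:
      runAsUser: 0
      runAsNonRoot: false
```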

How can I ensure that the volume’s ownership and permissions match my securityContext? Relatedly, what is the community’s advice on NFS squash settings?