Got VolumeFailedDelete with ReclaimPolicy: Delete on vanilla MicroK8s

I mostly just followed the MicroK8s documentation, and ran into something weird when dealing with NFS.
I'm trying this on a 2-node cluster. The NFS setup is exactly as in MicroK8s - Use NFS for Persistent Volumes: when I create a PVC as specified in that doc, it works fine, and deleting works too (the PV gets deleted, and the corresponding folder under /srv/nfs is removed as well). But when I deploy a StatefulSet, namely the bitnami/mysql Helm chart, and then delete its PVC, the PV status goes to Released with an error.
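
For reference, this is roughly how I reproduce it (a sketch; the release and namespace names match the PV output below, chart flags from memory):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release bitnami/mysql -n development --set architecture=replication
kubectl delete pvc data-my-release-mysql-secondary-0 -n development
kubectl get pv    # the PV goes to Released instead of being deleted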

Here is the full describe output of the PV:

Name:            pvc-1ba70aef-e523-4f7f-ba90-f2ecbdc0fcc7
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: nfs.csi.k8s.io
                 volume.kubernetes.io/provisioner-deletion-secret-name: 
                 volume.kubernetes.io/provisioner-deletion-secret-namespace: 
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    nfs-csi
Status:          Released
Claim:           development/data-my-release-mysql-secondary-0
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        8Gi
Node Affinity:   <none>
Message:         
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            nfs.csi.k8s.io
    FSType:            
    VolumeHandle:      10.9.247.13#srv/nfs#pvc-1ba70aef-e523-4f7f-ba90-f2ecbdc0fcc7#
    ReadOnly:          false
    VolumeAttributes:      csi.storage.k8s.io/pv/name=pvc-1ba70aef-e523-4f7f-ba90-f2ecbdc0fcc7
                           csi.storage.k8s.io/pvc/name=data-my-release-mysql-secondary-0
                           csi.storage.k8s.io/pvc/namespace=development
                           server=10.9.247.13
                           share=/srv/nfs
                           storage.kubernetes.io/csiProvisionerIdentity=1682747870755-8081-nfs.csi.k8s.io
                           subdir=pvc-1ba70aef-e523-4f7f-ba90-f2ecbdc0fcc7
Events:
  Type     Reason              Age                 From                                                                  Message
  ----     ------              ----                ----                                                                  -------
  Warning  VolumeFailedDelete  39s (x7 over 103s)  nfs.csi.k8s.io_unpad-k8s-node-0_f45ebdf3-b4e7-44d3-ab97-c05ed320423d  rpc error: code = Internal desc = failed to delete subdirectory: unlinkat /tmp/pvc-1ba70aef-e523-4f7f-ba90-f2ecbdc0fcc7/pvc-1ba70aef-e523-4f7f-ba90-f2ecbdc0fcc7/data/undo_002: permission denied

I suspect it's a permissions issue?
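
One quick check on the NFS server (a sketch; the path assumes the export layout above, and the UID is just the usual Bitnami default):

ls -ln /srv/nfs/pvc-1ba70aef-e523-4f7f-ba90-f2ecbdc0fcc7/data
# the files here are owned by the container's UID (typically 1001 for Bitnami
# images), not by root, so whatever user the server maps the deleting process
# to must be allowed to unlink them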

Anyway, that's the problem.
Here are some resources:

OS

root@unpad-k8s-node-0:/home/iqbal# lsb_release -a
No LSB modules are available.
Distributor ID:	Debian
Description:	Debian GNU/Linux 11 (bullseye)
Release:	11
Codename:	bullseye

NFS CSI storage class

Name:            nfs-csi
IsDefaultClass:  No
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"nfs-csi"},"mountOptions":["hard","nfsvers=4.1"],"parameters":{"server":"10.9.247.13","share":"/srv/nfs"},"provisioner":"nfs.csi.k8s.io","reclaimPolicy":"Delete","volumeBindingMode":"Immediate"}

Provisioner:           nfs.csi.k8s.io
Parameters:            server=10.9.247.13,share=/srv/nfs
AllowVolumeExpansion:  <unset>
MountOptions:
  hard
  nfsvers=4.1
ReclaimPolicy:      Delete
VolumeBindingMode:  Immediate
Events:             <none>
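
For readability, here is the same StorageClass as a manifest (reconstructed from the last-applied-configuration annotation above):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.9.247.13
  share: /srv/nfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - hard
  - nfsvers=4.1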

microk8s status

microk8s is running
high-availability: no
  datastore master nodes: 10.9.247.13:19001
  datastore standby nodes: none
addons:
  enabled:
    dns                  # (core) CoreDNS
    ha-cluster           # (core) Configure high availability on the current node
    helm                 # (core) Helm - the package manager for Kubernetes
    helm3                # (core) Helm 3 - the package manager for Kubernetes
    ingress              # (core) Ingress controller for external access
    metrics-server       # (core) K8s Metrics Server for API access to service metrics
  disabled:
    cert-manager         # (core) Cloud native certificate management
    community            # (core) The community addons repository
    dashboard            # (core) The Kubernetes dashboard
    gpu                  # (core) Automatic enablement of Nvidia CUDA
    host-access          # (core) Allow Pods connecting to Host services smoothly
    hostpath-storage     # (core) Storage class; allocates storage from host directory
    kube-ovn             # (core) An advanced network fabric for Kubernetes
    mayastor             # (core) OpenEBS MayaStor
    metallb              # (core) Loadbalancer for your Kubernetes cluster
    minio                # (core) MinIO object storage
    observability        # (core) A lightweight observability stack for logs, traces and metrics
    prometheus           # (core) Prometheus operator for monitoring and logging
    rbac                 # (core) Role-Based Access Control for authorisation
    registry             # (core) Private image registry exposed on localhost:32000
    storage              # (core) Alias to hostpath-storage add-on, deprecated

/etc/exports

[iqbal@unpad-k8s-node-0:~]$ cat /etc/exports
/srv/nfs 10.9.247.0/24(rw,sync,no_subtree_check)
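
To see the options the kernel actually applies (a sketch, run on the NFS server; note that root_squash is the default and is in effect even though it isn't listed in /etc/exports):

sudo exportfs -v
showmount -e 10.9.247.13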

NFS server

● nfs-server.service - NFS server and services
     Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor preset: enabled)
     Active: active (exited) since Sat 2023-04-29 12:57:29 WIB; 1h 48min ago
    Process: 124449 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
    Process: 124450 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
   Main PID: 124450 (code=exited, status=0/SUCCESS)
        CPU: 8ms

Apr 29 12:57:28 unpad-k8s-node-0 systemd[1]: Starting NFS server and services...
Apr 29 12:57:29 unpad-k8s-node-0 systemd[1]: Finished NFS server and services.

(I don't think I can attach the inspection file here, so let me know which files you need.)

OK, I'm not sure if this is the proper fix, but now the folder on the NFS server is deleted successfully. The change was in /etc/exports,

from

/srv/nfs 10.9.247.0/24(rw,no_subtree_check,sync)

to

/srv/nfs 10.9.247.0/24(rw,no_root_squash,no_subtree_check,sync)

i.e., adding no_root_squash.
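
My understanding of why this works: NFS applies root_squash by default, mapping root on the client to nobody on the server. The CSI controller deletes the PV subdirectory as root, but the files inside were created by the mysql container's UID, so the squashed nobody user gets permission denied on the unlink. With no_root_squash the controller stays root and can delete everything. To apply the change without restarting the service (a sketch):

sudo exportfs -ra    # re-read /etc/exports and re-export
sudo exportfs -v     # confirm no_root_squash is now in effect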