Update configmap from pod

Hey, I currently have a pod scheduled on my Kubernetes cluster that runs a Go script, and I am using the client-go API to interact with ConfigMaps from inside the pod.
I have created a Role and a RoleBinding for that purpose.

Role.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: configmap-updater
rules:
- apiGroups: ["*"]
  resources: ["configmaps"]
  resourceNames: ["target-configmap"]
  verbs: ["get", "create", "update", "delete"]

RoleBinding.yaml

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: demo-rolebinding
  namespace: default
  labels:
    app: tools-rbac1
subjects:
- kind: Group
  name: system:service
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: configmap-updater
  apiGroup: ""

Currently I am able to GET the ConfigMap, but I am unable to UPDATE it.
Is it even possible to update a ConfigMap from a pod? If yes, what part am I missing?
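
For reference, the update I am attempting looks roughly like this (a simplified sketch; the key and value are placeholders):

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    // In-cluster config authenticates with the pod's service account token.
    cfg, err := rest.InClusterConfig()
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    ctx := context.TODO()
    cms := clientset.CoreV1().ConfigMaps("default")

    // The GET succeeds with the Role above.
    cm, err := cms.Get(ctx, "target-configmap", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }

    // The UPDATE is the call that fails for me.
    if cm.Data == nil {
        cm.Data = map[string]string{}
    }
    cm.Data["some-key"] = "some-value"
    if _, err := cms.Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }
}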

Cluster information:

Kubernetes version: v1.21.1
Host OS: RHEL 8.1

Have you tried adding the patch verb?

  verbs: ["get", "create", "update", "patch", "delete"]
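
Which verb gets checked depends on which client-go call your code makes: Update is checked against the update verb, Patch against the patch verb. A rough sketch of the two (the helper names and the key/value are just placeholders):

package cmutil

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
)

// updateConfigMap replaces the whole object; RBAC checks the "update" verb.
func updateConfigMap(ctx context.Context, cs kubernetes.Interface, cm *corev1.ConfigMap) error {
    _, err := cs.CoreV1().ConfigMaps(cm.Namespace).Update(ctx, cm, metav1.UpdateOptions{})
    return err
}

// patchConfigMap sends only the changed fields; RBAC checks the "patch" verb.
func patchConfigMap(ctx context.Context, cs kubernetes.Interface, namespace, name string) error {
    patch := []byte(`{"data":{"some-key":"some-value"}}`)
    _, err := cs.CoreV1().ConfigMaps(namespace).Patch(ctx, name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
    return err
}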

After updating a Role, if the changes don't take effect immediately, delete the pod that the service account is attached to and recreate it. Since RBAC is a security feature, I'd expect permission changes to propagate right away, but while developing I've seen what looked like stale results. I'm unsure about this though, as I haven't had time to reproduce what I observed yet.

On another note, I would recommend removing the delete verb if you don't plan on actually deleting the ConfigMap via the application.

Hey @protosam, thank you for the help.
I tried what you suggested, but it did not work out.
However, I have solved the problem: I created a ClusterRoleBinding with kubectl.

kubectl create clusterrolebinding default-admin --clusterrole=admin --serviceaccount=default:default
kubectl run --rm -i demo --image=myimage

This allowed me to update the ConfigMap through the client-go API, even from inside a pod scheduled on my Kubernetes cluster.

Interesting, is the configmap you’re modifying in a different namespace than the pod that’s trying to modify it?

No, the pod and the ConfigMap are both in the same namespace in my case.