Best approach to restart pods when ConfigMap is updated via operator

Hi, I’m running a minikube cluster with a custom operator for a domain-specific application. The application keeps some of its important configuration in a .toml file. I’ve successfully loaded these toml configs through a .yaml field using a Go-based operator scaffolded with operator-sdk.

The application requires a restart whenever the .toml config changes. (I’ve gotten as far as having the updated config appear on the volume-mounted path.)
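For reference, the setup described above roughly corresponds to a ConfigMap holding the .toml file, mounted into the pod as a volume. This is just a sketch; the names `app-config`, `app.toml`, and `/etc/app` are placeholders, not taken from my actual manifests:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config          # placeholder name
data:
  app.toml: |
    [server]
    port = 8080
---
# In the Deployment's pod template (abbreviated):
#   volumes:
#   - name: config
#     configMap:
#       name: app-config
#   containers:
#   - volumeMounts:
#     - name: config
#       mountPath: /etc/app
```

With this layout, kubelet eventually syncs ConfigMap edits onto the mounted volume, but the application itself never re-reads the file, hence the need for a restart.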

My question is, since my application requires a restart everytime the configs are changed, what would be best approach? I have tried several.

  1. Write custom logic to remount the config file and restart the deployment manually. (The remounting part works, but restarting the deployment does not yet.)
  2. Use stakater/Reloader, a Kubernetes controller that watches ConfigMaps and Secrets for changes and performs rolling upgrades on the associated Deployments, StatefulSets, DaemonSets, and DeploymentConfigs. (Not successful — it didn’t work for me as advertised. I only added the annotation, so maybe I’m doing something wrong; I’m still learning.)
  3. Load the config via environment variables instead of a volume mount. (I don’t think this suits my application, because all the pods need shared access to the .toml config.)
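For approach 1, a common pattern (it’s also roughly what Reloader does under the hood) is to hash the ConfigMap data and write that hash into the Deployment’s pod-template annotations; any change to the hash makes the Deployment controller perform a rolling restart, so you never delete pods by hand. Below is a minimal sketch of the hash helper; the function name `configHash` and the annotation key are my own inventions, and the surrounding operator wiring (patching the Deployment with the controller-runtime client) is only described in the comments:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
)

// configHash returns a deterministic SHA-256 hex digest of a ConfigMap's
// Data map. In the operator's Reconcile loop, you would set this value as
// a pod-template annotation, e.g.:
//
//	deploy.Spec.Template.Annotations["myapp.example.com/config-hash"] = configHash(cm.Data)
//	// then r.Update(ctx, deploy) with the controller-runtime client
//
// Changing that annotation alters the pod template, which triggers a
// rolling restart of the Deployment.
func configHash(data map[string]string) string {
	// Map iteration order is randomized in Go, so sort the keys first
	// to make the digest deterministic across reconciles.
	keys := make([]string, 0, len(data))
	for k := range data {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	h := sha256.New()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte{0}) // separator so "ab"+"c" != "a"+"bc"
		h.Write([]byte(data[k]))
		h.Write([]byte{0})
	}
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	data := map[string]string{"app.toml": "[server]\nport = 8080\n"}
	fmt.Println(configHash(data))
}
```

This is the same mechanism behind `kubectl rollout restart deployment/<name>`, which simply stamps a restart annotation into the pod template.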

I highly appreciate any advice/suggestions. Thanks.

UPDATE:
I was able to get approach 2 working as needed. I still have to find out whether it adds meaningful overhead, though. Ideas are still welcome. Thanks.
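For anyone landing here with the same problem, the Reloader setup that ended up working for me boils down to a single annotation on the Deployment (the Deployment name here is a placeholder):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                          # placeholder name
  annotations:
    reloader.stakater.com/auto: "true"  # reload on changes to any referenced ConfigMap/Secret
```

Reloader also supports scoping to a specific ConfigMap via `configmap.reloader.stakater.com/reload: "<configmap-name>"` if you don’t want every referenced resource to trigger a rollout.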

Restart Deployment On Secret Change | Kyverno