Verifying a Helm-deployed ECK Elasticsearch for drift before upgrade

Description
I originally installed ECK Elasticsearch from a local Helm chart. Since then, someone with cluster privileges may have edited resources directly with kubectl edit, causing the live objects to drift from what Helm expects. Before running helm upgrade, I want to confirm that the live configuration still matches my chart (and, if not, reconcile the differences) to avoid upgrade failures or unintended overrides.
I'd really appreciate any tips on how to accurately compare and reconcile my Helm chart with what's actually running in the cluster. Thanks in advance!
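For the reconcile step, one option I'm considering (a sketch only, using the release and chart names from this post) is to preview what an upgrade would submit before actually running it:

```shell
# Render and simulate the upgrade without applying anything; --debug prints
# the manifests Helm would send to the API server. Note that Helm 3 computes
# a three-way merge (previous release manifest, new manifest, live object),
# so fields changed via kubectl edit that the chart itself does not set may
# survive the upgrade rather than being reverted.
helm upgrade eck-elasticsearch ./eck-elasticsearch \
  --namespace eck-elasticsearch \
  --dry-run --debug
```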


What I’ve tried so far

  • Ran helm template eck-elasticsearch ./eck-elasticsearch --namespace eck-elasticsearch --skip-tests to render the local manifests.
  • Fetched the live CR with kubectl get elasticsearch eck-elasticsearch -n eck-elasticsearch -o yaml and compared manually.
  • Used diff -u live-manifest.yaml new-manifest.yaml, but only saw trivial metadata or blank-line differences.
  • Considered helm diff upgrade …, but since manual edits made with kubectl edit aren't recorded in Helm's release history, helm diff won't surface them. Also looked at kubectl diff, but I need guidance on using it with ECK CRDs and operator-managed resources.
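To make the attempts above concrete, here is the kind of pipeline I have been experimenting with (a sketch, assuming the release and chart names used earlier; kubectl diff performs a server-side dry-run, so it should work for the Elasticsearch CR as well as plain resources):

```shell
# 1) Compare the manifests Helm recorded at install time against the live
#    objects. Differences here were made outside Helm (e.g. kubectl edit).
#    kubectl diff exits non-zero when differences exist, hence the `|| true`.
helm get manifest eck-elasticsearch -n eck-elasticsearch \
  | kubectl diff -f - || true

# 2) Compare the locally rendered chart against the live objects to preview
#    what an upgrade would change. Expect some noise from fields the ECK
#    operator itself manages (status, defaulted values, annotations).
helm template eck-elasticsearch ./eck-elasticsearch \
  --namespace eck-elasticsearch --skip-tests \
  | kubectl diff -n eck-elasticsearch -f - || true
```

Step 1 isolates out-of-band edits; step 2 shows the full delta between chart and cluster. I'm unsure how best to filter the operator-managed noise in step 2.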

Cluster information

  • Kubernetes version: 1.24.6
  • Cloud/on-premises: On-premises
  • Installation method: Kubespray
  • Host OS: Ubuntu 22.04
  • CNI plugin: Calico
  • CRI: containerd
  • Helm version: v3.9.4
  • ECK operator version: 2.6.1
  • Number of nodes / node types: 3 master nodes, 5 worker nodes
  • StorageClasses used by ECK: es-warm-data, es-hot-data