Hi everyone!
Situation: an obvious format error in the kube-apiserver audit policy file.
Result: the apiserver somehow kept generating audit events after a restart.
Questions:
What configuration did it use?
How can I check the currently active audit policy?
Are there any helpful logs that would show there was a format error in the audit policy file?
kube-apiserver pods are static pods. Deleting the mirror pod with kubectl delete pod does not restart the containers inside it, so that alone is not enough to pick up a new audit policy.
If you want to deploy a new audit policy, kill the kube-apiserver process on the master nodes (one by one, with a sufficient pause in between so you don't disrupt cluster functions); the kubelet will then recreate the container from the static pod manifest.
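A minimal sketch of that rolling restart, assuming a containerd runtime with crictl on the node and the default secure port 6443 (adjust both if your setup differs):

```sh
# Run on each master node, one at a time.
# Stopping the container kills the apiserver process; the kubelet then
# recreates it from the static pod manifest, re-reading
# --audit-policy-file on startup.
crictl stop "$(crictl ps --name kube-apiserver -q)"

# Wait until the apiserver answers again before moving to the next node.
until curl -ks https://localhost:6443/readyz >/dev/null; do
  sleep 5
done
```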
After the restart, you can find errors about the policy file format in the logs of the kube-apiserver containers.
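For example (a sketch; in kubeadm clusters the pod name suffix is the node name, and the exact wording of the error depends on your apiserver version):

```sh
# From a machine with kubectl access: grep the apiserver log for
# audit-related errors after the restart.
kubectl -n kube-system logs kube-apiserver-<master-node-name> | grep -i audit

# Or directly on the node, if the apiserver cannot serve kubectl at all:
crictl logs "$(crictl ps --name kube-apiserver -q)" 2>&1 | grep -i audit
```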
There is no API endpoint to check the currently active policy, at least for now.
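What you can do instead is read the policy file the running process was pointed at. A sketch, assuming pidof is available on the node and the flag is passed in --flag=value form, as it usually is in static pod manifests:

```sh
# On a master node: read --audit-policy-file from the *running* process,
# i.e. the value the apiserver actually started with.
POLICY_FILE=$(tr '\0' '\n' < "/proc/$(pidof kube-apiserver)/cmdline" \
  | grep -- '--audit-policy-file' | cut -d= -f2)
echo "active policy file: $POLICY_FILE"

# Show the file on disk. Caveat: the policy is parsed only at startup,
# so if the file changed since then, the running apiserver may differ.
cat "$POLICY_FILE"
```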