Kube-proxy iptables save/restore impact on large clusters

For large clusters (>= 500 nodes) where iptables is also periodically updated with custom rules, I would like to know what the impact of kube-proxy on iptables would be, especially since kube-proxy in iptables mode triggers a full table update (save/restore) on every node for every pod/endpoint change.

What would be the impact on iptables locks and on the custom rule updates, including cases where a kube-proxy save/restore occurs at the same time as a custom rule update?
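For context on the lock question: iptables serializes concurrent invocations through an advisory `flock` on `/run/xtables.lock`, and the `-w`/`--wait` flag controls whether a caller fails immediately or waits for the lock. The sketch below simulates that contention with `flock` on a temp file (so it runs without root); the 3-second hold standing in for a long kube-proxy restore is an illustrative assumption, not a measured figure.

```shell
#!/bin/sh
# Simulate xtables lock contention. On a real node the lock file is
# /run/xtables.lock; a temp file is used here so this runs unprivileged.
LOCK=$(mktemp)

# "kube-proxy": hold the lock exclusively for ~3s, as a large restore might.
( flock -x 9; sleep 3; ) 9>"$LOCK" &

sleep 1   # give the background holder time to take the lock

# Custom-rule update with no wait (like `iptables` without -w):
# fails immediately, analogous to "Another app is currently holding
# the xtables lock."
if flock -n "$LOCK" true; then NOWAIT=acquired; else NOWAIT=busy; fi

# With a bounded wait (like `iptables -w 5`): blocks until the holder exits.
if flock -w 5 "$LOCK" true; then WAITED=acquired; else WAITED=busy; fi

wait
rm -f "$LOCK"
echo "no-wait: $NOWAIT; wait: $WAITED"
```

The practical takeaway of the sketch: custom-rule tooling that calls `iptables` without `-w` can fail outright whenever it races a kube-proxy sync, while `-w <seconds>` turns the race into added latency.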

Also, what is the overall impact on node performance?
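On the performance question: with the legacy iptables backend, each `iptables-restore` commit replaces a whole table in the kernel, so sync cost grows with the total rule count, which in iptables mode scales roughly linearly with services and endpoints. A back-of-envelope sketch of that scaling (the per-service and per-endpoint constants are assumptions for illustration, not exact kube-proxy figures, since real counts vary by service type and kube-proxy version):

```shell
#!/bin/sh
# Rough estimate of KUBE-* rule count in iptables mode.
# ASSUMED constants: ~3 rules per service, ~3 per endpoint.
services=2000
endpoints=20000
per_service=3
per_endpoint=3
rules=$(( per_service * services + per_endpoint * endpoints ))
echo "approx KUBE-* rules: $rules"
# On a real node, compare against the actual count with:
#   iptables-save -t nat | grep -c '^-A KUBE'
```

At tens of thousands of rules, both the restore latency (lock hold time) and the CPU spent regenerating rules on every endpoint churn event become measurable, which is part of why IPVS and nftables proxy modes exist.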