Upgrade multi-master cluster with zero downtime

Hi there,

Currently I am working on a topology with 3 masters and 2 workers (I could add more workers, but this is just for testing).
I installed my packages at a specific version, and I would like to install the new version on one master and have all my machines (the other 2 masters and the 2 workers) upgrade as well, without any pod downtime: one worker upgrades, its pods are rescheduled onto the other worker, and then the second worker upgrades in turn.

The problem is that in the config.yaml I use to define my masters, I set `controlPlaneEndpoint` to my master1.
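For context, that part of the config looks roughly like this (a sketch with a placeholder hostname, using field names from the kubeadm v1beta1 API, not my exact file):

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.12.0
# controlPlaneEndpoint points at a single master here, which ties
# the whole control plane to that one machine. A load-balanced
# address (VIP or DNS name) in front of all three masters would
# avoid that single point of failure during upgrades.
controlPlaneEndpoint: "master1.example.com:6443"
```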

So, following the docs, I run the upgrade commands, but once on master 2 it tells me that everything is already up to date, even though kubelet, kubectl and kubeadm are at version 1.12.0 and I asked for version 1.13.0…
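For reference, the sequence I am following is roughly this (a sketch based on the 1.12 → 1.13 upgrade docs; package versions assume a Debian/Ubuntu apt install, and node names are placeholders):

```shell
# On the first master: bump kubeadm, check the plan, then apply.
apt-get update && apt-get install -y kubeadm=1.13.0-00
kubeadm upgrade plan
kubeadm upgrade apply v1.13.0

# Then upgrade kubelet and kubectl on that same node.
apt-get install -y kubelet=1.13.0-00 kubectl=1.13.0-00
systemctl restart kubelet

# On each worker, one at a time: drain so the pods move to the
# other worker, upgrade, then allow pods back.
kubectl drain worker1 --ignore-daemonsets
apt-get install -y kubeadm=1.13.0-00 kubelet=1.13.0-00
kubeadm upgrade node config --kubelet-version v1.13.0
systemctl restart kubelet
kubectl uncordon worker1
```

It is on the other masters (master 2 and 3) that this falls apart: `kubeadm upgrade` there reports everything as up to date.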

Here is my `config.yaml`:
[image: screenshot of config.yaml]