Where to define the arch?


#1

Hi,
I’ve installed k8s on ARM before and have now reinstalled it with k3s.

But the coredns and helm pods don’t deploy because they are in turn waiting for the pause image to come up. The root cause is that the architecture is not defined in the manifest.

My uname -m reports aarch64, but the registry names the architecture arm64, so the naming is not uniform. I was able to find coredns.yml and edit the image to point to coredns-arm64, but I haven’t found the pause and helm yml yet. Also, I think this should be a global variable; hard-coding isn’t a real solution imo.
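For what it’s worth, that kind of manual image swap can also be done without editing the manifest file. A sketch, assuming coredns runs as a deployment named coredns in kube-system and that a coredns-arm64 image with the same version tag exists (both assumptions; check your cluster first):

```shell
# Inspect the deployment to confirm the container name and current image:
kubectl -n kube-system get deployment coredns -o yaml

# Point the container at the arch-specific image.
# "coredns/coredns-arm64:1.2.6" is a placeholder tag -- use whatever
# version your manifest currently references.
kubectl -n kube-system set image deployment/coredns \
  coredns=coredns/coredns-arm64:1.2.6
```

This only patches the live object, so k3s may revert it if it re-applies its bundled manifests.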

Can I define the architecture of my master and nodes somewhere?

Cheers,
Merijn


#2

Stumbled upon this only now. Let me try it.

https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
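For anyone else reading along, a nodeSelector is set per pod spec against labels on the node. A minimal sketch using the architecture label the kubelet sets automatically (older clusters use beta.kubernetes.io/arch instead; check with kubectl get nodes --show-labels):

```yaml
# Pin a pod to arm64 nodes via the kubelet-set architecture label.
apiVersion: v1
kind: Pod
metadata:
  name: pause-test
spec:
  nodeSelector:
    kubernetes.io/arch: arm64
  containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1   # assumed tag; use your arch-specific image
```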


#3

It’s already set to arm64.

Here is my output and the error. If someone can point me in the right direction, please do.

https://paste.ubuntu.com/p/49FvHNsGKG/


#4

It could be a bug in k3s; it is pretty new. Have you tried opening an issue over at the GitHub page? They are pretty responsive and eager to fix things.


#5

I have not used other architectures, but finding the right image seems like the best approach. Until you find them (or something is fixed so they are used), maybe you can do the following:

The pause image is quite simple and you can build it yourself: https://github.com/kubernetes/kubernetes/tree/master/build/pause

If you can change the repository it points to, then the problem is solved: point it to your own repository and build the image yourself.

If you can’t, you might be able to do this trick (not sure, but I really think it may work):

  • Build the image and push it to a repo

  • Pull the image from ALL the nodes

  • Run docker tag to tag that image with exactly the same repo and version kubernetes is trying to pull

This way, since the image is already present locally (as long as the pull policy is not Always), the right one will be used. I THINK.
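The steps above could look something like this. All names here are placeholders: myregistry.local/pause:3.1 stands for your own build, and k8s.gcr.io/pause:3.1 stands for whatever image the kubelet is actually trying to pull (check the failing pod’s events for the real name and tag):

```shell
# 1. Build the pause image and push it to a repo you control
#    (run from build/pause in the kubernetes repo; see its Makefile
#    for the proper cross-arch build targets).
docker build -t myregistry.local/pause:3.1 .
docker push myregistry.local/pause:3.1

# 2. On EVERY node, pull your image...
docker pull myregistry.local/pause:3.1

# 3. ...and tag it with exactly the repo and version kubernetes
#    expects, so the kubelet finds it locally and skips the pull.
docker tag myregistry.local/pause:3.1 k8s.gcr.io/pause:3.1
```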

But this is just a workaround. You may want to look for the definitive fix (not sure if it’s a bug in k3s, a configuration issue, etc.).