Kubeadm - Everything you wanted to know but were afraid to ask


#1

Slack is a terrible venue for finding solutions to the common issues folks run into, so I would like to give this a whirl. Fire away with some questions and we’ll see if we can answer them.


#2

I’ll ask the first question: when is multi-master support slated to become available? I have gone through the previous PRs and checklists, where plans later changed, so it’s not particularly clear (to me at least) what the current status is.


#3

We’ve factored the actionable details out into a set of work items that we are executing on. https://github.com/kubernetes/community/blob/master/keps/sig-cluster-lifecycle/draft-20180130-kubeadm-join-master.md covers the simple workflow for master join, but it will depend on config changes. For an overview, please see the KubeCon talk:


#4

I have a question too: Some platforms (for example CoreOS) do not come with some tools Kubernetes requires: CoreOS doesn’t ship with a GlusterFS client and it doesn’t seem feasible to install one (as kubelet requires a mount helper for GlusterFS and /sbin is read-only). Therefore, running kubelet in a container with all the necessary tools seems to be a valid approach. This is how CoreOS’ Tectonic handles it as well.

Does kubeadm allow some kind of containerized kubelet setup, similar to what CoreOS’ built-in kubelet-wrapper does? I don’t think it’s a problem to deploy kubelet in a container (paired with a different systemd service), but I am curious if this is considered supported in any way by kubeadm and was unable to find information on it.
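Not an official answer, but for context on what a kubelet-wrapper-style setup looks like: on Container Linux it is ordinarily just a systemd unit that invokes the wrapper script. A minimal sketch follows; the image tag, mount paths, and flags are assumptions for illustration (written to /tmp here, it would normally live at /etc/systemd/system/kubelet.service):

```shell
# Sketch of a kubelet-wrapper style systemd unit (Container Linux).
# KUBELET_IMAGE_TAG, the RKT_RUN_ARGS mounts, and the kubelet flags are
# illustrative assumptions; adjust them for your node.
cat <<'EOF' > /tmp/kubelet.service
[Unit]
Description=kubelet (containerized via CoreOS kubelet-wrapper)

[Service]
# Which kubelet image tag the wrapper pulls (assumed version)
Environment=KUBELET_IMAGE_TAG=v1.10.2
# Extra host paths the containerized kubelet needs, e.g. for mount helpers
Environment="RKT_RUN_ARGS=--volume modprobe,kind=host,source=/usr/sbin/modprobe --mount volume=modprobe,target=/usr/sbin/modprobe"
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --kubeconfig=/etc/kubernetes/kubelet.conf \
  --pod-manifest-path=/etc/kubernetes/manifests
Restart=always

[Install]
WantedBy=multi-user.target
EOF
# Then copy into /etc/systemd/system/ and run `systemctl daemon-reload`.
```

As far as I know, kubeadm mainly needs a running kubelet that watches /etc/kubernetes/manifests for static pods, so a containerized kubelet can work, but I don’t believe it’s a combination kubeadm explicitly tests.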


#5

I have an issue I just recently came across, this might be a better forum than Slack.

I’m working on spinning up an ARM64-based cluster, but the 1.10.x version of kubeadm seems to have issues bringing up the etcd cluster.

The symptoms I am seeing are described in this issue. Basically, the etcd container comes up but crash-loops after a while. The current “fix” is to use kubeadm 1.9.7 to bring up a 1.9.7 cluster. There seem to be some issues around the liveness probe failing, which apparently changed in the 1.10 release?

Is this a known issue? How can I work around this? Would love to get this project up and running on 1.10.
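For what it’s worth, two things worth checking (hedged suggestions, not an official fix): upstream etcd only officially supports amd64 and exits at startup on arm64 unless `ETCD_UNSUPPORTED_ARCH=arm64` is set in its environment, and the 1.10-era manifests changed the liveness probe on the etcd static pod. An inspection sketch, assuming the default kubeadm paths on the master:

```shell
# /etc/kubernetes/manifests/etcd.yaml is the default location of the
# etcd static pod manifest kubeadm generates.

# See what liveness probe kubeadm configured (this is the probe that
# would be killing the container if it fails):
grep -A8 livenessProbe /etc/kubernetes/manifests/etcd.yaml

# etcd refuses to run on arm64 without this env var; if the grep comes
# back empty, add ETCD_UNSUPPORTED_ARCH=arm64 under the container's
# `env:` section in the manifest:
grep ETCD_UNSUPPORTED_ARCH /etc/kubernetes/manifests/etcd.yaml
```

The kubelet picks up edits to the static pod manifest automatically, so changing the file restarts etcd with the new settings.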


#6

Question: How can I get kubeadm to pull images from a specific repo?

It currently pulls images from k8s.gcr.io. Is there a way to tell it to pull images from my Nexus Docker repo which mirrors k8s.gcr.io?
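One route, sketched under the assumption that you’re on the 1.10-era `v1alpha1` config (later releases renamed the kind and apiVersion), is kubeadm’s config file, which exposes an `imageRepository` field that replaces `k8s.gcr.io` as the image prefix. `nexus.example.com` below is a placeholder for your mirror:

```shell
# Write a kubeadm config that points image pulls at a mirror.
# apiVersion/kind match kubeadm 1.10; the hostname is a placeholder.
cat <<'EOF' > /tmp/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
imageRepository: nexus.example.com/k8s.gcr.io
EOF
# Then initialize with it:
#   kubeadm init --config /tmp/kubeadm-config.yaml
```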


#7

It does, but it’s not an upstream artifact and I’m not certain who maintains it now.


#8

Question: What is the correct way of handling the rotation of certificates in a HA Cluster?

The context is that I am setting up an HA Cluster using kubeadm. I am following these tutorials: Set up a Highly Available etcd Cluster With kubeadm - Kubernetes and Creating Highly Available Clusters with kubeadm - Kubernetes.
I noticed that the generated certificates are only valid for 1 year and ideally these would be automatically rotated when they are about to expire. I found that kubeadm init has support for rotating its certificates and also has an option for using an external CA (kubeadm init - Kubernetes).
However, during the setup of the etcd cluster there is no such option. I could add --rotate-certificates to the kubelet, but will this actually rotate the certificates for etcd (since they aren’t being used by kubelet)? Any other suggestions for handling the certificate rotation are of course welcome.
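Not an answer to the rotation mechanism itself, but for knowing when rotation is due, the expiry can be read straight off the certificate files with openssl. A sketch that generates a throwaway self-signed cert purely to demonstrate the command; on a real master you would point it at e.g. /etc/kubernetes/pki/apiserver.crt or /etc/kubernetes/pki/etcd/server.crt instead:

```shell
# Create a throwaway self-signed cert just for demonstration purposes.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=demo" -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

# Print the expiry date (on a real cluster, substitute a cert from
# /etc/kubernetes/pki/); the output line starts with "notAfter=".
openssl x509 -noout -enddate -in /tmp/demo.crt
```

Looping that second command over everything under /etc/kubernetes/pki/ makes a cheap expiry check you can wire into monitoring.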

As an additional question which is not as important, is it possible to get the kubelet or etcd containers to generate the required certificates based on an external CA? If possible this would be pretty convenient.

Thanks in advance!


#9

I agree 100%

Slack is the most useless support tool I’ve ever seen. My entire team is in full agreement.

Our biggest frustration with trying to adopt Kubernetes is the terrible documentation and the lack of good support channels.

Slack makes me miss pulling down usenet feeds over a 300 bps modem in the 1990s.