Alright, here are the raw show notes!
Office Hours March 2021
EU Edition
Panelists
- David McKay, Equinix Metal
- Pierre Humberdroz, Spectrm
- Chris Carty, Google Cloud
- Chauncey Thorn, Phase2
- Dave Strebel, Microsoft
- Jorge Castro, Unusual.VC
- Puja Abbassi, Giant Swarm
- Mario Loria, Carta
- Ayrat Khayretdinov, Google Cloud
- Rachel Leekin, VMware
Questions
meauses:
My question(s): how does everyone stay up to date on Kubernetes or other work topics? How much time do you spend on it a week, during office hours? (pun intended) And how do you combine it with your family?
https://popcast-d9f7b6dc.simplecast.com/
Haroune Mohammedi:
Hello, we are having a hard time running k8s jobs that use a 30 GB image. The image takes up to 50 minutes to be pulled on a newly created node. Reducing the image size is not an option, and the pull time is bound by decompression, not download, so local registries won’t help. We use autoscaling to allocate resources on demand, but this is irritating because running a job that takes 5 minutes will take almost an hour if the node is newly created. Also, keeping a node up with a prepulled image waiting for jobs is very expensive for us because the nodes have GPUs. Any way to deal with that? Thanks in advance
- GitHub - senthilrch/kube-fledged: A kubernetes add-on for creating and managing a cache of container images directly on the cluster worker nodes, so application pods start almost instantly
- GitHub - dragonflyoss/Dragonfly: Dragonfly is an intelligent P2P based image and file distribution system. - used by Alibaba, to name one
- The Single-use Daemonset Pattern and Pre-pulling Images in Kubernetes - Codefresh
- Persistent Volumes | Kubernetes
General consensus seems to be to put the data in a volume and mount it when the nodes come up, instead of copying it over to every new node.
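The single-use DaemonSet / pre-pull pattern from the Codefresh link above can be sketched roughly like this; the image name and the GPU node label are placeholders for illustration, not the asker's actual setup:

```yaml
# Sketch of the pre-pull pattern: an init container pulls the large image
# onto every matching node, then a tiny pause container idles so the
# DaemonSet stays "running" without consuming resources.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: prepull-big-image
spec:
  selector:
    matchLabels:
      app: prepull-big-image
  template:
    metadata:
      labels:
        app: prepull-big-image
    spec:
      nodeSelector:
        gpu: "true"          # placeholder label for the GPU node pool
      initContainers:
      - name: prepull
        image: registry.example.com/big-job-image:latest  # hypothetical 30 GB image
        command: ["sh", "-c", "true"]   # exits immediately; the pull is the point
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.2     # near-zero-footprint placeholder
```

Note this only helps once the autoscaled node exists; it does not remove the initial decompression cost the asker describes, which is why the volume-mount consensus above came up.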
Person: igstan
Question: My question relates to something I noticed in the setup of our clusters, namely that each application has its own namespace, named after the application itself. As a result:
- There’s no way to get all pods across several namespaces in a single command
- For each app I have in mind I need to type out its particular namespace
- I can’t use kubectl config set-context --current --namespace to alleviate some of the typing
Answer: kubectl get pods --all-namespaces (or the short form -A)
Links:
:13 in the video
GitHub - derailed/k9s: 🐶 Kubernetes CLI To Manage Your Clusters In Style! (for a more interactive workflow)
Rachel: I’ve seen customers namespace by team, this helps them organize.
Walid: OpenShift uses projects, kubectx is a must
Mario: I start with the default namespace and then grow from there; on small, single-purpose clusters this might (or might not, it depends) remove complexity for the end user consuming the process. It’s not wrong to keep it simple.
David: Yeah, it depends on who is consuming your cluster.
Archy: I like using GitOps for this, it helps keep everything separated.
Puja: As Kelsey said, kubectl is the new SSH; we’re trying to move towards automated processes
Chauncey: I recommend Octant if you want your developers to see a picture of what the cluster looks like: https://octant.dev
Chris: Argo for me! https://argoproj.github.io/
Puja: We like GitHub - derailed/k9s: 🐶 Kubernetes CLI To Manage Your Clusters In Style! and Mirantis has lens: Lens | Mirantis and also check out: GitHub - sbstp/kubie: A more powerful alternative to kubectx and kubens
Yogi: VSCode + K8s
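Tying the answer above back to the asker's three pain points, the relevant commands can be sketched like this (the label name is made up for illustration; these assume a live cluster):

```shell
# List pods across every namespace in a single command
kubectl get pods --all-namespaces    # or the short form: kubectl get pods -A

# Narrow the -A output with a label selector instead of typing each
# app's namespace ("team=payments" is a hypothetical label)
kubectl get pods -A -l team=payments

# Set a default namespace on the current context to cut down on typing;
# kubectx/kubens (mentioned above) wrap this same operation
kubectl config set-context --current --namespace=my-app
```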
Person: iG
Q: Does the Kubernetes upgrade process from 1.13 to 1.20 have to go through all the intermediate versions? Any good documentation for that?
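A likely answer, based on the Kubernetes version skew policy (not captured live): yes, control planes can only be upgraded one minor version at a time, so 1.13 → 1.20 means stepping through 1.14, 1.15, …, 1.19 in order. For kubeadm clusters the official "Upgrading kubeadm clusters" docs cover each step; a rough sketch of one iteration (the version number is illustrative):

```shell
# Repeat this loop once per minor version until you reach 1.20
kubeadm upgrade plan               # shows which versions you can move to next
kubeadm upgrade apply v1.14.10     # example: latest patch of the next minor
# ...then upgrade the kubelet/kubectl packages on each node, drain/uncordon
# nodes one at a time, and start the loop again for the following minor.
```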
P: andeep
I’m at Google with a big client and I’m constantly getting the question “how will GKE scale at their limits?” What’s the bottleneck: etcd? The masters? The scheduler?
Using Dataplane V2 | Kubernetes Engine Documentation | Google Cloud
Person: JinhuaLuo
Question: I create a pod and then shut down the node which runs that pod, but “kubectl get pods” still shows its status as “Running” — why? Shouldn’t it be dead? And sometimes it changes status to “Terminating” and stays there.
Answer: ? How many nodes does the cluster have?
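A hedged explanation of the default behavior (not stated by the panel): when a node is shut down, its kubelet stops posting status, the node controller marks the node NotReady after the node-monitor-grace-period (40 s by default), and pods are only evicted once the not-ready toleration expires (300 s by default). Pods on a dead node can then sit in Terminating indefinitely, because the kubelet that would confirm the deletion is gone. Commands to inspect this (pod name is a placeholder):

```shell
kubectl get nodes                        # the shut-down node should show NotReady
kubectl describe pod <pod-name>          # check the not-ready tolerations and deletionTimestamp
kubectl delete pod <pod-name> --force --grace-period=0   # last resort: force-delete a stuck pod
```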
Links:
:35 (I’ll link each of these to the video)
- Person: Ravikanth
- Question: My Kubernetes cluster is scheduling a pod onto a node which is not in my cluster. Warning: FailedScheduling default-scheduler node “ip-10-0-.ec2.internal” not found in cache. The node is not in the cluster, but the scheduler still picks it for the deployment.
- Answer: ?
- Links:
:38
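No answer was captured live; a hedged guess is a stale Node object (or stale scheduler/cloud-provider state) left behind after an autoscaled EC2 instance was removed. Commands to check, assuming that guess (the node name below is a placeholder, since the one in the question is truncated):

```shell
kubectl get nodes                  # see whether the phantom node is still registered
kubectl describe node <node-name>  # look for a NotReady/unreachable node backed by a gone instance
kubectl delete node <node-name>    # remove the stale Node object if the instance no longer exists
```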