Single Node Kubernetes in Production


#1

Hi,

We have a requirement to run a subset of our services in offline mode (e.g. when the network is interrupted), so we are planning to run a bunch of services on a Linux box.

I’m planning to run them as Docker containers managed by K8s.
Is it wise to install K8s on a single node, or is it overkill?

I would like decent container orchestration - health checks, deployments, blue-green, etc.
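For context, by health checks and deployments I mean things like liveness/readiness probes and rolling updates, which are plain Kubernetes objects even on a single node. A minimal sketch of what I have in mind (service name, image, and probe paths are all hypothetical):

```yaml
# Hypothetical single-node service; image name and probe paths are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: offline-service
spec:
  replicas: 2
  strategy:
    type: RollingUpdate        # zero-downtime rollout of new versions
  selector:
    matchLabels:
      app: offline-service
  template:
    metadata:
      labels:
        app: offline-service
    spec:
      containers:
      - name: offline-service
        image: registry.local/offline-service:1.0   # hypothetical image
        ports:
        - containerPort: 8080
        livenessProbe:          # restart the container if it stops responding
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 15
        readinessProbe:         # only route traffic when the app is ready
          httpGet:
            path: /ready
            port: 8080
          periodSeconds: 5
```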
If K8s is overkill, can you please suggest some other orchestration tool that would come in handy in such scenarios?

PS: I’m using K8s to deploy my cloud services, so I would prefer to keep some level of consistency.


#2

Hi,

We are currently running a NON-PROD environment on Kubernetes 1.11 with only 2 bare-metal servers (80 cores + 512 GB RAM), and I must admit it’s been a constant pain so far.

The first challenge was overcoming the default limit of 110 pods per node, which we increased to 500 (via a kubelet flag); after that we started running into all sorts of issues, mainly performance-related.
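For anyone hitting the same wall, the change itself is tiny. It can be done either with the `--max-pods=500` kubelet flag or, on newer kubelets started with `--config`, through the configuration file (a fragment, with our value):

```yaml
# KubeletConfiguration fragment (pass the file via kubelet --config).
# Raises the default 110-pods-per-node limit; 500 is the value we chose,
# not a recommendation.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 500
```

Note that raising this also increases pressure on the CNI plugin (pod IP allocation) and on kube-proxy, which is where we saw most of the fallout.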

Through a lot of reading, trying, and failing, we have tweaked many different flags in the control plane components, such as the in-flight request limits for the API server, and also in the Linux OS (Oracle Linux 7.5 in our case), such as ulimits and related settings.
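Concretely, these are the kinds of knobs involved. The flag names are real; the values are just what we landed on for our workload, not recommendations:

```shell
# kube-apiserver flags controlling concurrent request limits
# (defaults are 400 and 200 respectively; our values below are examples)
--max-requests-inflight=800
--max-mutating-requests-inflight=400

# OS-level limits we raised on Oracle Linux 7.5 (example values)
sysctl -w fs.file-max=2097152
sysctl -w fs.inotify.max_user_watches=1048576
ulimit -n 1048576   # open-file limit for the container runtime / kubelet
```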

Still, we feel like we are flying blind, as there is not much support from the community (I tried several times in Slack without any feedback), maybe because it’s just not a recommended use case, although I’m pretty sure this can happen in real enterprises, as in my case.

I would be more than happy and grateful to get expert advice from the community.

Regards.


#3

Nice to see you’ve progressed to using/looking at K8s since I was working in Alicante :slight_smile:.

Based on my experience, I would suggest scaling horizontally before reaching 500 pods per node, as even the takeover by other nodes in case of a failure would be quite painful.

Personally, I found the most common bottleneck to be in the networking components.

Switching kube-proxy to IPVS mode (see “IPVS-Based In-Cluster Load Balancing Deep Dive” on the Kubernetes blog) helped me.
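For reference, enabling IPVS is essentially a one-line change in the kube-proxy configuration, provided the `ip_vs` kernel modules are loaded on the nodes. A fragment (the scheduler choice is my own example, not a requirement):

```yaml
# KubeProxyConfiguration fragment; mode defaults to "iptables" when unset.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round-robin; other IPVS schedulers (lc, sh, ...) exist
```

IPVS uses hash tables instead of sequential iptables rules, so service lookup cost stays roughly constant as the number of services and pods grows, which is exactly where iptables mode starts to hurt at high pod counts.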

For the overlay, I haven’t evaluated many options yet (only flannel and Calico), but if you run or plan to run on AWS, I would keep an eye on amazon-vpc-cni-k8s, which seems to remove some of the roadblocks and integrates with AWS networking.