I have a working production environment on an Ubuntu 18.04 machine with an application running under docker-compose (8 GB of dedicated RAM and an Intel i7-4790). As an intermediate step toward moving to the cloud, I am migrating it to Kubernetes with Kompose. For the development environment, I am using minikube, which is not intended for production.
I would like to take a step beyond minikube toward a production-grade setup, with a later cloud deployment in mind, but I only have this machine to start with. What would you recommend in my case?
I started trying your suggestion… I read the documentation, but I am stuck on the decision of which CNI to use, since Minikube ships a default one and kubeadm doesn't. I saw that Flannel, Calico and Weave Net are popular choices, and my networking knowledge is fairly limited.
What would you use in my on-premises situation? Will it remain compatible with cloud providers (AWS, GCP, Azure) later, when I move to the cloud?
Very nice @macintoshprime, I've read a lot about Calico and decided it was a good choice. Since then, though, I've had a hard time with it.
With kubeadm 1.13.4 (the latest in the Ubuntu repositories) I tried Calico 3.5: after the node becomes Ready, I use kubectl to start my deployments and all the pods get stuck in Pending status. Then I updated to Calico 3.6 (the latest) and I couldn't even get the node to Ready.
If you take a look at the Pods (kubectl describe pods <podname>) that are stuck in the Pending state, what are the events showing? If nothing interesting is there, you may want to check at the Deployment level.
All of the Pending ones have this warning message:
```
Type     Reason            Age                   From               Message
----     ------            ----                  ----               -------
Warning  FailedScheduling  36s (x14 over 9m35s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
```
And the one with status Init:0/2 has the following describe output (kubectl describe pods calico-node-rb8ht -n kube-system):
```
Normal  Scheduled  9m11s  default-scheduler  Successfully assigned kube-system/calico-node-rb8ht to cherokee
Normal  Pulled     9m8s   kubelet, cherokee  Container image "calico/cni:v3.6.1" already present on machine
Normal  Created    9m7s   kubelet, cherokee  Created container
Normal  Started    9m6s   kubelet, cherokee  Started container
```
I am new to k8s and couldn’t find any useful info from those outputs.
Ahh, it's a single-node cluster. Try removing the master node taint: kubectl taint nodes --all node-role.kubernetes.io/master-. That should allow pods to be scheduled on it and bring Calico up.
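Alternatively, if you'd rather keep the taint, you can let specific workloads onto the master by giving them a matching toleration; a minimal sketch (the pod name and image are just placeholders):

```
# Sketch only: a pod that tolerates the kubeadm master taint instead of
# the taint being removed cluster-wide. Name and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: toleration-test
spec:
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
  containers:
    - name: test
      image: nginx
```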
Thanks a lot @macintoshprime… that definitely got me one step further, but now I get another strange warning in my mongodb pod:
```
Name: mongo-5d89cc6f7f-t7cph
Namespace: default
Priority: 0
PriorityClassName:
Node:
Labels: name=mongo
pod-template-hash=5d89cc6f7f
Annotations:
Status: Pending
IP:
Controlled By: ReplicaSet/mongo-5d89cc6f7f
Containers:
mongo:
Image: mongo
Port: 27017/TCP
Host Port: 0/TCP
Environment:
Mounts:
/data/db from mongo-claim0 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-8bngw (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
mongo-claim0:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mongo-claim0
ReadOnly: false
default-token-8bngw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-8bngw
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
             node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type     Reason            Age                    From               Message
----     ------            ----                   ----               -------
Warning  FailedScheduling  3m59s (x2 over 3m59s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
Warning  FailedScheduling  29s (x7 over 3m38s)    default-scheduler  pod has unbound immediate PersistentVolumeClaims
```
I will now try to see if I can find something wrong in the manifests generated by kompose.
So mongo is trying to provision some storage via the PVC, and my guess is that there is no StorageClass to handle the provisioning. Are you using the Helm chart or did you build it on your own?
Then it just created a Deployment, a Service and a PVC from it. The documentation didn't tell me I needed to create a PersistentVolume, as in Configure a Pod to Use a PersistentVolume for Storage. There is also Claims As Volumes; I am trying to understand whether it is a replacement for a PV, but I think this is what kompose did.
What is even MORE strange is that I used kompose convert on an almost identical Elasticsearch service and it is Running on my single node without problems.
Trying to figure out the magic in these PVC-PV bindings…
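If I follow that docs page, it looks like I would have to hand-create a PV along these lines for the claim to bind (the size, path and storageClassName below are just my guesses based on the example, not something kompose generated):

```
# Sketch of a hand-made hostPath PV, following the "Configure a Pod to Use
# a PersistentVolume for Storage" task. Size, path and storageClassName
# are guesses; the PVC must use the same storageClassName and request no
# more than 1Gi for the binding to happen.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: /mnt/data/mongo
```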
Ya, so I looked at what you provided from the docker-compose.yaml, and my guess would be that kompose tried to handle that volume line by creating a PVC and mounting it at /data/db.
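The PVC kompose emits for that kind of volume usually looks something like this (a sketch; the 100Mi request is kompose's usual default and the labels may not match exactly what it generated for you):

```
# Sketch of a typical kompose-generated PVC for the mongo /data/db volume.
# The 100Mi request is kompose's usual default and is an assumption here;
# check the actual mongo-claim0 manifest that kompose produced.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-claim0
  labels:
    io.kompose.service: mongo-claim0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
```

Without a StorageClass or a matching PV in the cluster, a claim like that just sits in Pending.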
Very clarifying video and docs, @macintoshprime, thanks again. Now I see there are no StorageClasses or PersistentVolumes on my single node, so the PVCs are stuck in the Pending state.
I saw that hostPath is not recommended if I am going to a cloud provider later (AWS, Google) because it is not "workload portability friendly". Should I use a local PersistentVolume, or is there a recommended out-of-tree storage plugin for this case?
@macintoshprime there is no way I can get my StorageClasses to work on single-node kubeadm. I tried using the AWS EBS StorageClass, but it doesn't work locally without an actual account. I also tried 3 different approaches for the StorageClass:
After creating the StorageClass I create 2 PVCs, but they do not get bound and no PV is created. I also tried the class used in Minikube (which works in my development environment):
Again, no PVs, no binding. I thought it could be some taint-related issue, but I've already removed all the taints from the master using kubectl taint nodes --all node-role.kubernetes.io/master-. Any idea?
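For reference, the Minikube class I mean is essentially minikube's default StorageClass, roughly like this (a reconstruction, not my exact manifest):

```
# Reconstruction of minikube's default StorageClass, for context. The
# k8s.io/minikube-hostpath provisioner only exists inside minikube, which
# is why a class like this cannot provision anything on a kubeadm cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s.io/minikube-hostpath
```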
Just as a general FYI - the storage classes you tried:
provisioner: docker.io/hostpath - Does not exist in vanilla deployments. It is only found in Docker-managed instances.
provisioner: k8s.io/minikube-hostpath - Only available with minikube.
provisioner: kubernetes.io/host-path - IIRC it has been superseded by local volumes.
If you need something that is just tied to a single host, the local volume route might not be a bad option. It will prevent migration etc., but should be okay for a single-node instance.
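A minimal sketch of that local volume route on your single node (the names, 5Gi size and /mnt/disks/vol1 path are assumptions; local PVs are not dynamically provisioned, so the PV is created by hand):

```
# Sketch of the local-volume approach: a no-provisioner StorageClass plus
# a pre-created local PV pinned to the node. Names, size and path are
# assumptions; "cherokee" is the node name from the describe output above.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - cherokee
```

PVCs then just point at storageClassName: local-storage; they bind once a pod that uses them gets scheduled.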
It seems to be a problem with all these provisioners, as @mrbobbytables said. Is this a Kubernetes component I need to deploy? Is there a way to get minikube's provisioner, for example? I also found out that the kubernetes.io/host-path provisioner needs to be enabled, as it is disabled by default. I did not find any announcement or documentation about it being superseded by local volumes…
I got this:
```
Name: esdata
Namespace: default
StorageClass: standard
Status: Pending
Volume:
Labels: name=esdata
Annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/host-path
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 36s (x6 over 2m5s) persistentvolume-controller Failed to create provisioner: Provisioning in volume plugin "kubernetes.io/host-path" is disabled
Mounted By: elasticsearch-deployment-56fbc8698-6p4jg
```
The local volumes option is a good one if you're just testing things out, but if you need auto-provisioning then using something else would probably be best.
Again, StorageOS and Portworx are good options, as they have Helm charts you can use to quickly spin things up. OpenEBS was just accepted into the CNCF as a sandbox project; it's worth looking at as well: https://openebs.io/
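Once one of those provisioners is installed, the kompose-generated claims mostly just need to point at a StorageClass it provides, along these lines (a sketch; openebs-hostpath is an assumed class name, substitute whatever kubectl get storageclass reports after the install):

```
# Sketch: an existing claim pointed at a dynamically provisioning class.
# The class name openebs-hostpath is an assumption; use whichever
# StorageClass the installed provisioner (OpenEBS, StorageOS, Portworx)
# actually creates.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-claim0
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```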