I also tried local storage (kubernetes.io/no-provisioner), but since it does not support dynamic provisioning, my PVCs are not being bound either =/
```
kubectl get storageclass
NAME            PROVISIONER                    AGE
local-storage   kubernetes.io/no-provisioner   5m24s

kubectl describe pvc esdata
Name:          esdata
Namespace:     default
StorageClass:
Status:        Pending
Volume:
Labels:        name=esdata
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Events:
  Type    Reason         Age               From                         Message
  ----    ------         ----              ----                         -------
  Normal  FailedBinding  5s (x7 over 78s)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
Mounted By:    elasticsearch-deployment-56fbc8698-4dblq
```
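In hindsight, with kubernetes.io/no-provisioner the PVs have to be created by hand before a claim can bind, and the claim has to name the class explicitly (note the empty StorageClass on my PVC above). A sketch of such a PV; the path and size here are made-up values for illustration:

```
# Pre-created local PV for the no-provisioner StorageClass. The PVC must
# also set storageClassName: local-storage, or it will never match.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: esdata-local
spec:
  capacity:
    storage: 10Gi                  # assumption: match it to the claim's request
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage  # must match the PVC's storageClassName
  local:
    path: /mnt/disks/esdata        # assumption: any pre-created directory on the node
  nodeAffinity:                    # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - cherokee
```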
I gave up on this no-provisioner and decided to give StorageOS a try using Helm. I installed Helm 2.14.0-linux-amd64 and followed the documentation. I got an error when running helm init && helm repo add storageos https://charts.storageos.com && helm repo update && helm install storageos/storageos --namespace storageos --set cluster.join=singlenode --set csi.enable=true:
```
Error: no available release name found
```
After more reading I realized that kubeadm enables RBAC by default, so I had to create a ServiceAccount for Tiller. After that the StorageOS chart seemed to have a validation problem:
```
Error: validation failed: error validating "": error validating data: [ValidationError(CustomResourceDefinition.status): missing required field "conditions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.CustomResourceDefinitionStatus, ValidationError(CustomResourceDefinition.status): missing required field "storedVersions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.CustomResourceDefinitionStatus]
```
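(For reference, the ServiceAccount setup I mean is the usual Tiller one; the cluster-admin binding is the standard shortcut from the Helm docs, not something specific to my cluster:)

```
# Create the Tiller service account, give it cluster-admin,
# then re-initialize Helm to use it:
kubectl create serviceaccount tiller -n kube-system
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade
```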
Now I am trying to see whether I am doing something dumb or whether I should go for another solution like Portworx.
Sidenote: I read a lot about StorageClasses and dynamic provisioning; that's why I went for a CSI driver (the CSI spec is now at API version 1.1.0). In the official drivers list, StorageOS has a 1.0.0 implementation while Portworx has an older (and less featured) 0.3.
I don't have access to my dev cluster until tomorrow. I did try deploying some things in Kind and I got different errors (assuming that's due to some missing feature flags, though).
If you're having trouble, it might be worth checking out one of the other solutions to see if you get the same error or not. If you do, it might be that there is a configuration error in the cluster.
I've created an issue in the storageos/chart project and they recommended using another chart (which is also a StorageOS CSI driver), storageos-operator. After installing it, the StorageClass is set and the pods are running. BUT I get the same problem as with the no-provisioner: no dynamic provisioning of PVs… The PVCs show the same event:
```
Normal  FailedBinding  5s (x7 over 78s)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
```
I'm heading into three months of trying to run this not-that-complex application on kubeadm. Despite learning a lot, I couldn't yet see the light at the end of the tunnel.
Yeah, storage is definitely a tricky one. I've run into lots of different issues with a different provider. I've seen that error before when auto-provisioning, though; normally it's the result of not setting a default StorageClass, which results in things just hanging, more or less.
Normally I confirm that by taking a look at the PVC to see which StorageClass it is trying to use.
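Something like this, for example:

```
# Prints the StorageClass the PVC requests; empty output means it is
# relying on whatever default StorageClass the cluster has (if any):
kubectl get pvc esdata -o jsonpath='{.spec.storageClassName}'
```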
If it's the default StorageClass issue, here's the link to setting it.
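In short it is just an annotation patch (substitute the class name shown by your kubectl get storageclass output):

```
# Mark an existing StorageClass as the cluster-wide default:
kubectl patch storageclass <your-class> -p \
  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```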
Sorry to hear you're bumping into so many issues.
Making the StorageClass the default made provisioning possible! Now it seems I have permission problems with the PV on my Elasticsearch 6.2.4 pod. I found an almost identical situation in an unsolved issue with the same version.
Here is the pod's describe output:
```
kubectl describe pods elasticsearch-deployment-56fbc8698-qfm5g
Name:               elasticsearch-deployment-56fbc8698-qfm5g
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               cherokee/150.164.7.70
Start Time:         Tue, 21 May 2019 20:54:22 -0300
Labels:             name=elasticsearch
                    pod-template-hash=56fbc8698
Annotations:        cni.projectcalico.org/podIP: 192.168.0.220/32
Status:             Running
IP:                 192.168.0.220
Controlled By:      ReplicaSet/elasticsearch-deployment-56fbc8698
Init Containers:
  set-vm-max-map-count:
    Container ID:  docker://9f992a22bb5efcb2c4b0d5c1a3dd49530f384f5db4658aab72b00f772cf2edcb
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:4b6ad3a68d34da29bf7c8ccb5d355ba8b4babcad1f99798204e7abb43e54ee3d
    Port:          <none>
    Host Port:     <none>
    Command:
      sysctl
      -w
      vm.max_map_count=262144
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 21 May 2019 20:54:38 -0300
      Finished:     Tue, 21 May 2019 20:54:38 -0300
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-6k2dq (ro)
Containers:
  elasticsearch:
    Container ID:   docker://82a03a4ae70ab8ef4c4089ec1fa0faf865eca9a7a2878c7fd914e8676387a6b2
    Image:          docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4
    Image ID:       docker-pullable://docker.elastic.co/elasticsearch/elasticsearch-oss@sha256:2d9c774c536bd1f64abc4993ebc96a2344404d780cbeb81a8b3b4c3807550e57
    Port:           9200/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 21 May 2019 21:00:53 -0300
      Finished:     Tue, 21 May 2019 21:00:55 -0300
    Ready:          False
    Restart Count:  6
    Environment:
      discovery.type:  single-node
      ES_JAVA_OPTS:    -Xms512m -Xmx512m
      cluster.name:    log
    Mounts:
      /usr/share/elasticsearch/data from esdata (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-6k2dq (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  esdata:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  esdata
    ReadOnly:   false
  default-token-6k2dq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-6k2dq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                     From                     Message
  ----     ------                  ----                    ----                     -------
  Warning  FailedScheduling        8m43s (x7 over 9m15s)   default-scheduler        pod has unbound immediate PersistentVolumeClaims
  Normal   Scheduled               8m41s                   default-scheduler        Successfully assigned default/elasticsearch-deployment-56fbc8698-qfm5g to cherokee
  Normal   SuccessfulAttachVolume  8m41s                   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-ac536232-7c23-11e9-8fae-305a3a9efaf9"
  Normal   Pulled                  8m27s                   kubelet, cherokee        Container image "busybox" already present on machine
  Normal   Created                 8m25s                   kubelet, cherokee        Created container set-vm-max-map-count
  Normal   Started                 8m25s                   kubelet, cherokee        Started container set-vm-max-map-count
  Normal   Pulled                  6m42s (x5 over 8m24s)   kubelet, cherokee        Container image "docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4" already present on machine
  Normal   Created                 6m40s (x5 over 8m22s)   kubelet, cherokee        Created container elasticsearch
  Normal   Started                 6m40s (x5 over 8m22s)   kubelet, cherokee        Started container elasticsearch
  Warning  BackOff                 3m17s (x23 over 8m14s)  kubelet, cherokee        Back-off restarting failed container
```
When I get the logs I see the same exception:
```
kubectl logs elasticsearch-deployment-56fbc8698-qfm5g
[2019-05-22T00:16:19,310][INFO ][o.e.n.Node ] [] initializing ...
[2019-05-22T00:16:19,322][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: Failed to create node environment
    at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:125) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-6.2.4.jar:6.2.4]
    at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.2.4.jar:6.2.4]
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) ~[elasticsearch-6.2.4.jar:6.2.4]
Caused by: java.lang.IllegalStateException: Failed to create node environment
    at org.elasticsearch.node.Node.<init>(Node.java:267) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.node.Node.<init>(Node.java:246) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:213) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:213) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-6.2.4.jar:6.2.4]
    ... 6 more
Caused by: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/nodes
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84) ~[?:?]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:?]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:?]
    at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384) ~[?:?]
    at java.nio.file.Files.createDirectory(Files.java:674) ~[?:1.8.0_161]
    at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781) ~[?:1.8.0_161]
    at java.nio.file.Files.createDirectories(Files.java:767) ~[?:1.8.0_161]
    at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:204) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.node.Node.<init>(Node.java:264) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.node.Node.<init>(Node.java:246) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:213) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:213) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-6.2.4.jar:6.2.4]
    ... 6 more
```
The first idea that came to me was to try a newer version, so I tried 6.8.0, launched this week, and got the same problem. It seems to be a recurring problem; I see the issue all over the internet. I also found a response from an Elastic team member, but came to no conclusion about a suitable solution.
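One fix that keeps coming up in those reports (I haven't confirmed it solves my case yet) is making the mounted data directory writable by the elasticsearch user, which is uid/gid 1000 in the official images, either through fsGroup or a chown init container, roughly:

```
# Sketch of the pod spec changes; uid/gid 1000 is what the official
# elasticsearch images run as.
securityContext:
  fsGroup: 1000
# ...or alternatively an extra init container that fixes ownership:
initContainers:
  - name: fix-permissions
    image: busybox
    command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
    volumeMounts:
      - name: esdata
        mountPath: /usr/share/elasticsearch/data
```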
Yes, I am writing the manifest files by hand, starting from a previously working docker-compose converted with Kompose (as mentioned in this post months ago). On Elastic's blog they just released an all-in-one operator for the Elastic Stack.
Kompose converted my Elasticsearch into one Service, one PersistentVolumeClaim and one Deployment; it is also a single-node ES. If I am not mistaken, they use StatefulSets instead of Deployments in their Helm charts. Now I have to study StatefulSets to understand what I need to do =p
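From what I've read so far, the single-node setup as a StatefulSet would look roughly like this; the volumeClaimTemplates part replaces my hand-written PVC (the storage size is a made-up value, and the headless Service still has to be created separately):

```
# Sketch only: a StatefulSet creates one PVC per replica from the
# template below instead of using a separately defined PVC.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
spec:
  serviceName: elasticsearch   # assumption: a matching headless Service exists
  replicas: 1
  selector:
    matchLabels:
      name: elasticsearch
  template:
    metadata:
      labels:
        name: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4
          env:
            - name: discovery.type
              value: single-node
          volumeMounts:
            - name: esdata
              mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
    - metadata:
        name: esdata
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi   # assumption: size it to your data
```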
Haha, oh boy, I forgot about the Kompose conversion, my bad.
Going with the Operator is probably a good choice; I haven't looked into theirs yet, but the Operator pattern is great for this kind of thing. StatefulSets make sense for that as well.
I realized I was trying to make a complex application run in my cluster all at once. I will try to make the databases work with my running docker-compose and run the rest of the services as pods on the single node. After everything is working I'll mess with these StatefulSets XD
The problem is that my cluster and the host's localhost aren't on the same network: I can't curl localhost:9200 from inside a pod. I tried setting GatewayPorts yes in /etc/ssh/sshd_config, but it didn't work. Now I am trying to figure out a way for my pods to reach the host node's localhost. Maybe there is a Calico configuration I have to make.
Edit: my ifconfig shows a network adapter with IP 172.20.0.1. I can reach it via curl from the pods. I will explore this approach further.
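To test it from inside the cluster I can run a throwaway pod, something like:

```
# One-off busybox pod to check that the host bridge address is reachable
# from the pod network (busybox ships a small wget):
kubectl run curltest --rm -it --image=busybox --restart=Never -- \
  wget -qO- http://172.20.0.1:9200
```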