Minimal restricted Kubernetes Cluster on Ubuntu in production

I also tried using this local-storage approach, and as it does not support dynamic provisioning, my PVCs are also not being bound =/

kubectl get storageclass
NAME            PROVISIONER                    AGE
local-storage   kubernetes.io/no-provisioner   5m24s

kubectl describe pvc esdata
Name:          esdata
Namespace:     default
Status:        Pending
Labels:        name=esdata
Annotations:   <none>
Finalizers:    []
Access Modes:  
VolumeMode:    Filesystem
Mounted By:    elasticsearch-deployment-56fbc8698-4dblq
Events:
  Type       Reason         Age               From                         Message
  ----       ------         ----              ----                         -------
  Normal     FailedBinding  5s (x7 over 78s)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set

Happy to help. Hope one of those options works out for you. Let us know how it goes.
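If you want to stay on the local-storage route: a `no-provisioner` class never creates volumes for you, so the claim stays Pending until a matching PV exists. You'd have to create one by hand, roughly like this (the PV name, path, and capacity are placeholders to adjust; the path must already exist on the node):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: esdata-pv              # hypothetical name
spec:
  capacity:
    storage: 1Gi               # must cover the PVC's request
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/esdata    # assumed host path on the node
  nodeAffinity:                # local volumes must be pinned to a node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - cherokee           # your node's name
```

Your PVC would also need `storageClassName: local-storage` set so the binder matches them up.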

I gave up on the no-provisioner approach and decided to give StorageOS a try using Helm. I installed Helm 2.14.0-linux-amd64 and followed the documentation, but got an error when running helm init && helm repo add storageos && helm repo update && helm install storageos/storageos --namespace storageos --set cluster.join=singlenode --set csi.enable=true:

Error: no available release name found

After more reading I realized kubeadm enables RBAC by default, so I had to create a ServiceAccount for Tiller. After that, the StorageOS chart seemed to have a syntax problem:
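For reference, the Tiller setup I used was roughly the following (cluster-admin is the blunt option; a scoped Role would be tighter for production):

```shell
# Give Tiller its own ServiceAccount in kube-system
kubectl -n kube-system create serviceaccount tiller

# Bind it to cluster-admin so chart installs aren't blocked by RBAC
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller

# (Re)deploy Tiller using that ServiceAccount
helm init --upgrade --service-account tiller
```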

Error: validation failed: error validating "": error validating data: [ValidationError(CustomResourceDefinition.status): missing required field "conditions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.CustomResourceDefinitionStatus, ValidationError(CustomResourceDefinition.status): missing required field "storedVersions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.CustomResourceDefinitionStatus]

I am trying to figure out whether I am doing something dumb or whether I should go for another solution like Portworx.

Sidenote: I read a lot about StorageClasses and dynamic provisioning; that's why I went for a CSI driver, which is now at API version 1.1.0. In their official list, StorageOS has a 1.0.0 implementation while Portworx has an older (and less featured) 0.3.

I don’t have access to my dev cluster until tomorrow. I did try deploying some things in Kind and got different errors (assuming that’s due to some missing feature flags, though).

If you’re having trouble, it might be worth checking out one of the other solutions to see if you get the same error or not. If you do, there might be a config error in the cluster.

I’ve created an issue in the storageos/chart project and they recommended using another chart (which is also a StorageOS CSI driver), storageos-operator. After installing it, the StorageClass is set and the pods are running. BUT, I get the same problem as with the no-provisioner! No dynamic provisioning of PVs… the PVCs also show the event:

  Normal     FailedBinding  5s (x7 over 78s)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set

I'm heading into three months of trying to run this not-that-complex application on kubeadm. Despite learning a lot, I still can't see the light at the end of the tunnel.

Yeah, storage is definitely a tricky one. I’ve run into lots of different issues with a different provider. I have seen that error before, though; with auto-provisioning it’s normally the result of not setting a default StorageClass, which results in things just hanging, more or less.

Normally I confirm that by taking a look at the PVC to see what StorageClass it is trying to use.

If it’s the default storageclass issue, here’s the link to setting it.
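In short, it boils down to checking what class the PVC asked for and annotating one class as the default; roughly (the class name here is a placeholder):

```shell
# What class is the PVC asking for? (empty output means "rely on the default")
kubectl get pvc esdata -o jsonpath='{.spec.storageClassName}'

# Mark a class as the cluster-wide default ("fast" is a placeholder;
# use whatever name `kubectl get storageclass` shows for StorageOS)
kubectl patch storageclass fast -p \
  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```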

Sorry to hear you’re bumping into so many issues.

Making the StorageClass the default made provisioning possible! Now it seems I have permission problems with the PV on my Elasticsearch 6.2.4 pod. I found an almost identical situation in an unsolved issue with the same version.

My pod's describe output:

    kubectl describe pods elasticsearch-deployment-56fbc8698-qfm5g
    Name:               elasticsearch-deployment-56fbc8698-qfm5g
    Namespace:          default
    Priority:           0
    PriorityClassName:  <none>
    Node:               cherokee/
    Start Time:         Tue, 21 May 2019 20:54:22 -0300
    Labels:             name=elasticsearch
    Status:             Running
    Controlled By:      ReplicaSet/elasticsearch-deployment-56fbc8698
    Init Containers:
      set-vm-max-map-count:
        Container ID:  docker://9f992a22bb5efcb2c4b0d5c1a3dd49530f384f5db4658aab72b00f772cf2edcb
        Image:         busybox
        Image ID:      docker-pullable://busybox@sha256:4b6ad3a68d34da29bf7c8ccb5d355ba8b4babcad1f99798204e7abb43e54ee3d
        Port:          <none>
        Host Port:     <none>
        State:          Terminated
          Reason:       Completed
          Exit Code:    0
          Started:      Tue, 21 May 2019 20:54:38 -0300
          Finished:     Tue, 21 May 2019 20:54:38 -0300
        Ready:          True
        Restart Count:  0
        Environment:    <none>
        Mounts:
          /var/run/secrets/ from default-token-6k2dq (ro)
    Containers:
      elasticsearch:
        Container ID:   docker://82a03a4ae70ab8ef4c4089ec1fa0faf865eca9a7a2878c7fd914e8676387a6b2
        Image ID:       docker-pullable://
        Port:           9200/TCP
        Host Port:      0/TCP
        State:          Waiting
          Reason:       CrashLoopBackOff
        Last State:     Terminated
          Reason:       Error
          Exit Code:    1
          Started:      Tue, 21 May 2019 21:00:53 -0300
          Finished:     Tue, 21 May 2019 21:00:55 -0300
        Ready:          False
        Restart Count:  6
        Environment:
          discovery.type:  single-node
          ES_JAVA_OPTS:    -Xms512m -Xmx512m
        Mounts:
          /usr/share/elasticsearch/data from esdata (rw)
          /var/run/secrets/ from default-token-6k2dq (ro)
    Conditions:
      Type              Status
      Initialized       True
      Ready             False
      ContainersReady   False
      PodScheduled      True
    Volumes:
      esdata:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  esdata
        ReadOnly:   false
      default-token-6k2dq:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  default-token-6k2dq
        Optional:    false
    QoS Class:       BestEffort
    Node-Selectors:  <none>
    Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                     node.kubernetes.io/unreachable:NoExecute for 300s
    Events:
      Type     Reason                  Age                     From                     Message
      ----     ------                  ----                    ----                     -------
      Warning  FailedScheduling        8m43s (x7 over 9m15s)   default-scheduler        pod has unbound immediate PersistentVolumeClaims
      Normal   Scheduled               8m41s                   default-scheduler        Successfully assigned default/elasticsearch-deployment-56fbc8698-qfm5g to cherokee
      Normal   SuccessfulAttachVolume  8m41s                   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-ac536232-7c23-11e9-8fae-305a3a9efaf9"
      Normal   Pulled                  8m27s                   kubelet, cherokee        Container image "busybox" already present on machine
      Normal   Created                 8m25s                   kubelet, cherokee        Created container set-vm-max-map-count
      Normal   Started                 8m25s                   kubelet, cherokee        Started container set-vm-max-map-count
      Normal   Pulled                  6m42s (x5 over 8m24s)   kubelet, cherokee        Container image "" already present on machine
      Normal   Created                 6m40s (x5 over 8m22s)   kubelet, cherokee        Created container elasticsearch
      Normal   Started                 6m40s (x5 over 8m22s)   kubelet, cherokee        Started container elasticsearch
      Warning  BackOff                 3m17s (x23 over 8m14s)  kubelet, cherokee        Back-off restarting failed container

When I get the logs I see the same exception:

kubectl logs elasticsearch-deployment-56fbc8698-qfm5g
[2019-05-22T00:16:19,310][INFO ][o.e.n.Node               ] [] initializing ...
[2019-05-22T00:16:19,322][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: Failed to create node environment
	at org.elasticsearch.bootstrap.Elasticsearch.init( ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.bootstrap.Elasticsearch.execute( ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.cli.EnvironmentAwareCommand.execute( ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.cli.Command.mainWithoutErrorHandling( ~[elasticsearch-cli-6.2.4.jar:6.2.4]
	at org.elasticsearch.cli.Command.main( ~[elasticsearch-cli-6.2.4.jar:6.2.4]
	at org.elasticsearch.bootstrap.Elasticsearch.main( ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.bootstrap.Elasticsearch.main( ~[elasticsearch-6.2.4.jar:6.2.4]
Caused by: java.lang.IllegalStateException: Failed to create node environment
	at org.elasticsearch.node.Node.<init>( ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.node.Node.<init>( ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.bootstrap.Bootstrap$5.<init>( ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.bootstrap.Bootstrap.setup( ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.bootstrap.Bootstrap.init( ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.bootstrap.Elasticsearch.init( ~[elasticsearch-6.2.4.jar:6.2.4]
	... 6 more
Caused by: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/nodes
	at sun.nio.fs.UnixException.translateToIOException( ~[?:?]
	at sun.nio.fs.UnixException.rethrowAsIOException( ~[?:?]
	at sun.nio.fs.UnixException.rethrowAsIOException( ~[?:?]
	at sun.nio.fs.UnixFileSystemProvider.createDirectory( ~[?:?]
	at java.nio.file.Files.createDirectory( ~[?:1.8.0_161]
	at java.nio.file.Files.createAndCheckIsDirectory( ~[?:1.8.0_161]
	at java.nio.file.Files.createDirectories( ~[?:1.8.0_161]
	at org.elasticsearch.env.NodeEnvironment.<init>( ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.node.Node.<init>( ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.node.Node.<init>( ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.bootstrap.Bootstrap$5.<init>( ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.bootstrap.Bootstrap.setup( ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.bootstrap.Bootstrap.init( ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.bootstrap.Elasticsearch.init( ~[elasticsearch-6.2.4.jar:6.2.4]
	... 6 more

The first idea that came to me was to try a newer version, so I tried 6.8.0, launched this week, and hit the same problem. It seems to be a recurring problem; I see the issue all over the internet. I also found an Elastic member's response, but came to no conclusion about a suitable solution.
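From what I've read, the usual cause is a UID mismatch: the official image runs Elasticsearch as UID 1000 while a freshly provisioned volume is owned by root, so `createDirectory` on the data path fails. The workarounds people suggest look like this (untested on my side; two alternative sketches for the Deployment's pod spec):

```yaml
# Option 1: let the kubelet chgrp the mounted volume to the container's group
spec:
  securityContext:
    fsGroup: 1000

# Option 2: chown the data dir explicitly before the main container starts
initContainers:
- name: fix-data-permissions   # hypothetical name
  image: busybox
  command: ['sh', '-c', 'chown -R 1000:1000 /usr/share/elasticsearch/data']
  volumeMounts:
  - mountPath: /usr/share/elasticsearch/data
    name: esdata
```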

Glad the default setting got it working :slight_smile:

Bummer that Elastic is giving you the runaround.

Are you building the Elasticsearch service by hand or deploying via Helm? I’ve had success installing it with previous Helm charts.

Yes, I am writing the manifest files by hand, based on a previously working docker-compose converted with Kompose (as mentioned in this post months ago :rofl:). On Elastic’s blog they just released an all-in-one operator for the Elastic Stack.

Kompose converted my Elastic setup into one Service, one PersistentVolumeClaim, and one Deployment; it is also a single-node ES. If I am not mistaken, they are using StatefulSets instead of Deployments in their Helm charts. Now I have to study StatefulSets to understand what I need to do =p

Here are my ES manifests:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    name: esdata
  name: esdata
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: elasticsearch
  name: elasticsearch
spec:
  ports:
  - port: 9200
    targetPort: 9200
  selector:
    name: elasticsearch
status:
  loadBalancer: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch-deployment
  labels:
    name: elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      name: elasticsearch
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: elasticsearch
    spec:
      initContainers:
      - name: set-vm-max-map-count
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ['sysctl', '-w', 'vm.max_map_count=262144']
        securityContext:
          privileged: true
      containers:
      - env:
        - name: discovery.type
          value: single-node
        - name: ES_JAVA_OPTS
          value: -Xms512m -Xmx512m
        - name:
          value: log
        name: elasticsearch
        ports:
        - containerPort: 9200
        resources: {}
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: esdata
      restartPolicy: Always
      volumes:
      - name: esdata
        persistentVolumeClaim:
          claimName: esdata
status: {}
```

haha oh boy forgot about the Kompose conversion, my bad :slight_smile:

Going with the Operator is probably a good choice; I haven’t looked into theirs yet, but the Operator pattern is great for this kind of thing. StatefulSets make sense for that as well.
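If you do go the StatefulSet route later, the main difference from your Deployment is `volumeClaimTemplates`, which stamps out one PVC per replica instead of sharing a single claim. A rough single-node sketch (the image tag and sizes are placeholders, not a tested manifest):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
spec:
  serviceName: elasticsearch     # headless Service giving pods stable DNS names
  replicas: 1
  selector:
    matchLabels:
      name: elasticsearch
  template:
    metadata:
      labels:
        name: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0  # placeholder tag
        env:
        - name: discovery.type
          value: single-node
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: esdata
  volumeClaimTemplates:          # each replica gets its own PVC, e.g. esdata-elasticsearch-0
  - metadata:
      name: esdata
    spec:
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 100Mi
```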

I realized I was trying to make a complex application run in my cluster all at once. I will try to make the databases work with my running docker-compose and run the rest of the services as pods on the single node. After everything is working, I'll mess with StatefulSets XD

The problem is my cluster and localhost aren't on the same network; I can't curl localhost:9200 from inside a pod. I tried setting GatewayPorts to yes in /etc/ssh/sshd_config, but it didn't work. Now I am trying to figure out a way for my pods to access the host node's localhost. Maybe there is a configuration I have to make in Calico.

Edit: my ifconfig shows a network adapter with an IP I can access via curl from the pods. I will try this approach further.
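An alternative to hardcoding that interface IP is to inject the node's address into the pod via the Kubernetes downward API; a sketch for the container spec (the env var name is my own):

```yaml
env:
- name: HOST_IP                  # hypothetical name
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP   # the IP of the node the pod is scheduled on
```

Inside the container the service on the host would then be reachable as `http://$HOST_IP:9200`, provided it listens on that interface and nothing in the network policy blocks it.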