Kibana returns: "Kibana did not load properly. Check the server output for more information"

Thank you @balchua1, I followed your commands exactly but it still fails. I suppose it's a virtual-machine issue, but it's the same for every hypervisor I use. Do you have recommendations on network settings for the host that runs microk8s?
I've attached screenshots of what I did and the result. If you see a mistake on my side, let me know!
Thanks a lot

sysadmin@loire:~$ sudo snap install microk8s --channel 1.19/stable --classic
[sudo] password for sysadmin:
microk8s (1.19/stable) v1.19.2 from Canonical✓ installed
sysadmin@loire:~$ microk8s enable fluentd
Insufficient permissions to access MicroK8s.
You can either try again with sudo or add the user sysadmin to the 'microk8s' group:

    sudo usermod -a -G microk8s sysadmin
    sudo chown -f -R sysadmin ~/.kube

The new group will be available on the user's next login.
sysadmin@loire:~$ sudo microk8s enable fluentd
Enabling Fluentd-Elasticsearch
Labeling nodes
node/loire labeled
Enabling DNS
Applying manifest
serviceaccount/coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
clusterrole.rbac.authorization.k8s.io/coredns created
clusterrolebinding.rbac.authorization.k8s.io/coredns created
Restarting kubelet
DNS is enabled
service/elasticsearch-logging created
serviceaccount/elasticsearch-logging created
clusterrole.rbac.authorization.k8s.io/elasticsearch-logging created
clusterrolebinding.rbac.authorization.k8s.io/elasticsearch-logging created
statefulset.apps/elasticsearch-logging created
configmap/fluentd-es-config-v0.2.0 created
serviceaccount/fluentd-es created
clusterrole.rbac.authorization.k8s.io/fluentd-es created
clusterrolebinding.rbac.authorization.k8s.io/fluentd-es created
daemonset.apps/fluentd-es-v3.0.2 created
deployment.apps/kibana-logging created
service/kibana-logging created
Fluentd-Elasticsearch is enabled
sysadmin@loire:~$ sudo microk8s kubectl proxy
Starting to serve on 127.0.0.1:8001
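
For reference, with the proxy running, a cluster service is reachable through the standard apiserver proxy path; the general scheme, substituting the namespace and service name, is:

    http://127.0.0.1:8001/api/v1/namespaces/<namespace>/services/<service-name>/proxy/

This is where the documented Kibana URL used further down comes from.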

By the way, this command gives a surprising result:

sysadmin@loire:~$ microk8s status --wait-ready
microk8s is running
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none
addons:
  enabled:
    dns                  # CoreDNS
    fluentd              # Elasticsearch-Fluentd-Kibana logging and monitoring
    ha-cluster           # Configure high availability on the current node
  disabled:
    ambassador           # Ambassador API Gateway and Ingress
    cilium               # SDN, fast with full network policy
    dashboard            # The Kubernetes dashboard
    gpu                  # Automatic enablement of Nvidia CUDA
    helm                 # Helm 2 - the package manager for Kubernetes
    helm3                # Helm 3 - Kubernetes package manager
    host-access          # Allow Pods connecting to Host services smoothly
    ingress              # Ingress controller for external access
    istio                # Core Istio service mesh services
    jaeger               # Kubernetes Jaeger operator with its simple config
    knative              # The Knative framework on Kubernetes.
    kubeflow             # Kubeflow for easy ML deployments
    linkerd              # Linkerd is a service mesh for Kubernetes and other frameworks
    metallb              # Loadbalancer for your Kubernetes cluster
    metrics-server       # K8s Metrics Server for API access to service metrics
    multus               # Multus CNI enables attaching multiple network interfaces to pods
    prometheus           # Prometheus operator for monitoring and logging
    rbac                 # Role-Based Access Control for authorisation
    registry             # Private image registry exposed on localhost:32000
    storage              # Storage class; allocates storage from host directory

How can your install work without RBAC enabled?
I am confused

First, I would try making the user part of the microk8s group:

sudo usermod -a -G microk8s sysadmin
sudo chown -f -R sysadmin ~/.kube
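
Note that the new group is only applied at the next login; to pick it up in the current shell without logging out, something like this should work (newgrp starts a subshell with the group active):

    newgrp microk8s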

I don't really use the kubectl proxy command; instead I use kubectl -n kube-system port-forward svc/kibana localport:serviceport

Then access it at http://localhost:localport/

It doesn't require rbac to be enabled. The default authorization mode in microk8s is always-allow.
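
If you want to verify that, the apiserver arguments can be inspected on the node; a sketch, assuming the default snap layout (the args file path is an assumption and may vary across microk8s versions):

    # print the authorization mode the apiserver was started with
    grep -i authorization-mode /var/snap/microk8s/current/args/kube-apiserver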

Making sysadmin a member of the microk8s group doesn't solve the issue.
I did:

sysadmin@loire:~$ kubectl -n kube-system port-forward svc/kibana-logging 8001:5601
Forwarding from 127.0.0.1:8001 -> 5601
Forwarding from [::1]:8001 -> 5601

And http://localhost:8001/ (the local end of the port-forward) gives the result below.
What loads is not Kibana; it's a generic index page.

If one requests the link given in the microk8s documentation:

http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kibana-logging/proxy/app/kibana

the result is

{"statusCode":404,"error":"Not Found","message":"Not Found"}

Hi, did you try clicking "Explore on my own" right on the Kibana main page? Then set up the index pattern you want to see.

I did a fresh install on Hyper-V and it’s really a node taint/toleration issue.
Elasticsearch is not scheduled because the network is unreachable.
I suspect a Calico incompatibility with the Hyper-V network configuration.
Has anybody found a way to configure the network so that ES can reach it?
Can microk8s work in a VM?

  elasticsearch-logging-token-6nzp7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  elasticsearch-logging-token-6nzp7
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
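
To see which taints are actually on the node — and whether the unreachable taint matching these tolerations is present — a quick check (substitute your node name):

    microk8s kubectl describe node <node-name> | grep -i -A 3 taints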

I confirm that

  • 1.18 works perfectly on Hyper-V,
  • 1.19 doesn't work: IO unreachable

What does kubectl get no -o wide show?

Can you also paste the calico node logs?
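
For reference, the calico node logs can be pulled from the daemonset pods with a label selector; k8s-app=calico-node is the standard label in the Calico manifests and is assumed here:

    microk8s kubectl -n kube-system logs -l k8s-app=calico-node --tail=100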

kubectl get no -o wide gives:

NAME     STATUS   ROLES    AGE   VERSION                     INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
danube   Ready    <none>   11m   v1.19.2-34+1b3fa60b402c1c   192.168.42.214   <none>        Ubuntu 18.04.5 LTS   4.15.0-118-generic   containerd://1.3.7

Calico logs:

sysadmin@danube:~$ kubectl -n kube-system logs calico-kube-controllers-847c8c99d-76ggb
2020-10-06 15:28:58.610 [INFO][1] main.go 88: Loaded configuration from environment config=&config.Config{LogLevel:"info", ReconcilerPeriod:"5m", CompactionPeriod:"10m", EnabledControllers:"node", WorkloadEndpointWorkers:1, ProfileWorkers:1, PolicyWorkers:1, NodeWorkers:1, Kubeconfig:"", HealthEnabled:true, SyncNodeLabels:true, DatastoreType:"kubernetes"}
W1006 15:28:58.612053       1 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
2020-10-06 15:28:58.613 [INFO][1] main.go 109: Ensuring Calico datastore is initialized
2020-10-06 15:28:58.622 [INFO][1] main.go 183: Starting status report routine
2020-10-06 15:28:58.622 [INFO][1] main.go 368: Starting controller ControllerType="Node"
2020-10-06 15:28:58.622 [INFO][1] node_controller.go 133: Starting Node controller
2020-10-06 15:28:58.722 [INFO][1] node_controller.go 146: Node controller is now running
2020-10-06 15:28:58.722 [INFO][1] ipam.go 42: Synchronizing IPAM data
2020-10-06 15:28:58.734 [INFO][1] ipam.go 168: Node and IPAM data is in sync
E1006 15:34:11.045808       1 reflector.go:280] pkg/mod/k8s.io/client-go@v0.0.0-20191114101535-6c5935290e33/tools/cache/reflector.go:96: Failed to watch *v1.Node: Get https://10.152.183.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1381&timeoutSeconds=582&watch=true: dial tcp 10.152.183.1:443: connect: connection refused
E1006 15:34:12.046096       1 reflector.go:280] pkg/mod/k8s.io/client-go@v0.0.0-20191114101535-6c5935290e33/tools/cache/reflector.go:96: Failed to watch *v1.Node: Get https://10.152.183.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1381&timeoutSeconds=499&watch=true: dial tcp 10.152.183.1:443: connect: connection refused
E1006 15:34:25.569767       1 reflector.go:280] pkg/mod/k8s.io/client-go@v0.0.0-20191114101535-6c5935290e33/tools/cache/reflector.go:96: Failed to watch *v1.Node: Get https://10.152.183.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1601&timeoutSeconds=506&watch=true: dial tcp 10.152.183.1:443: connect: connection refused
E1006 15:34:26.570132       1 reflector.go:280] pkg/mod/k8s.io/client-go@v0.0.0-20191114101535-6c5935290e33/tools/cache/reflector.go:96: Failed to watch *v1.Node: Get https://10.152.183.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1601&timeoutSeconds=319&watch=true: dial tcp 10.152.183.1:443: connect: connection refused
2020-10-06 15:34:29.970 [ERROR][1] main.go 234: Failed to reach apiserver error=<nil>
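
The repeated connection-refused errors against 10.152.183.1:443 mean pods are losing connectivity to the apiserver's cluster IP rather than the apiserver itself being down. On Hyper-V, a known culprit is reverse-path filtering: Calico's Felix refuses to run when rp_filter is in loose mode unless explicitly told to ignore it. A quick diagnostic on the host:

    # 2 = loose mode, which Felix rejects by default
    sysctl net.ipv4.conf.all.rp_filter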

I found the resolution in this excellent article by Chris McKeown: Enabling the Calico CNI provider with Minikube on Hyper-V.
Setting the environment variable FELIX_IGNORELOOSERPF to true did the trick.
I had to restart microk8s.
Afterwards, expose the port (kubectl -n kube-system port-forward svc/kibana-logging 8001:5601) and open an SSH tunnel to reach localhost on 8001; a sketch of the whole sequence is below.
So it works.
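
For anyone following along, one way to set that variable is on the Calico daemonset via kubectl; the daemonset name calico-node in kube-system is an assumption and may differ between microk8s versions:

    # set the Felix option on the calico-node daemonset (name assumed)
    microk8s kubectl -n kube-system set env daemonset/calico-node FELIX_IGNORELOOSERPF=true
    # restart microk8s so the change takes effect
    microk8s stop && microk8s start
    # on the VM: forward a local port to the Kibana service
    microk8s kubectl -n kube-system port-forward svc/kibana-logging 8001:5601
    # from the workstation: tunnel local 8001 to the VM's 8001 (user/host as in this thread)
    ssh -L 8001:localhost:8001 sysadmin@loire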
Thanks to @balchua1 & Chris McKeown

Thanks @geekbot for digging into this. I was wondering whether this configuration makes sense as a default. I have no idea about the consequences of it. :blush:

Thanks for the instructions, including the idea of forwarding the local port.