Kibana returns: "Kibana did not load properly. Check the server output for more information"

Hi,
I have set up a fresh install: sudo snap install microk8s --classic --channel=1.19
fluentd is enabled (also enabled: dns, storage, ingress, rbac).
The proxy runs on port 8001.
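For reference, the setup looks roughly like this (a sketch; I reach the API through kubectl proxy, and the exact addon list is the one above):

$ microk8s enable dns storage ingress rbac fluentd
$ microk8s kubectl proxy --port=8001 &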

Connecting to http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kibana-logging/proxy/app/kibana returns a red banner with "Kibana did not load properly. Check the server output for more information."
I looked at various mailing lists, but I can't see where the issue comes from.

Looking at the Kibana pod logs, there is a 304 (Not Modified) response logged:
"res":{"statusCode":304,"responseTime":24,"contentLength":9},"message":"GET /bundles/app/kibana/bootstrap.js 304 24ms - 9.0B"}

Any idea is welcome.
GB

Hi,
The problem probably comes from a bad install of Kibana: files are missing or not accessible, as shown in this network log, so the Kibana docker image does not seem functional.
I suggest that Ubuntu retests this config and maybe changes the image reference:


Any advice is welcome

Thanks

The fluentd manifests are taken from upstream Kubernetes, here.

Can you try port forwarding?

@balchua1, do you mean an SSH port forward? Yes, that's already the case; I access Kibana with http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kibana-logging/proxy/app/kibana

I tried other Kibana images without success:

  • 7.9.2 crashes because "Liveness probe failed: HTTP probe failed with statuscode: 503"
  • 6.3.2 is not compatible with the new ES deployed in microk8s 1.19
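For the record, I swapped images with something along these lines (a sketch; it assumes the Deployment and its container are both named kibana-logging, as in the upstream manifests, and the image reference here is only an example):

$ microk8s kubectl -n kube-system set image deployment/kibana-logging \
    kibana-logging=docker.elastic.co/kibana/kibana-oss:7.9.2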

So the solution is either:

  • Find a Kibana image which works and which is compatible with the ES deployed by the microk8s 1.19 add-on, or
  • Go back to the microk8s 1.18 configuration.

Can you advise?
Thanks a lot
GB

I meant kubectl port-forward svc/kibana [your hostport]:[kibana-port] :blush:
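For this setup that would be something like the following (the service is kibana-logging in kube-system, listening on 5601):

$ kubectl -n kube-system port-forward svc/kibana-logging 5601:5601

and then browse to http://localhost:5601.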

I wonder why it crash loops. Hmm. Can you create a GitHub issue and upload the inspect tarball?

When I upgraded the fluentd addon, I did test it by accessing the Kibana dashboard. Strange.


I would be delighted to send you the inspect tarball, but we saw that it contains some confidential information (machine names, secret names…). I can't publish this document publicly.
What can I do to help? Would it be useful to send only the files that do not contain sensitive information?
Sorry for the inconvenience.
Best regards
GB

By the way, I re-enabled the add-on, but it is still the same on port 8001.
On port 5601, the home page is displayed correctly, but the platform logs are not accessible.
So it means that the "missing files" are present in the docker image. Maybe a misconfiguration of the proxy?
I attach a screen capture of the 5601 access.
Let me know if you need some more tests.
GB

I tried to connect Kibana to the ES server but it failed.
I checked whether ES exposes its service. It does: I managed to wget 10.152.183.36:9200 and ES answered with 200.
So Kibana complains that it cannot access ES, but ES is exposed correctly (through the service, I believe).
Rather strange…
I suspect that the ES log index has not been created during setup.
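A quick way to check that (a sketch, reusing the ClusterIP from the wget test above and ES's _cat API; a working pipeline should show a logstash-* index):

$ wget -qO- 'http://10.152.183.36:9200/_cat/indices?v'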

When one disables the fluentd add-on, an error message is displayed saying that a fluentd configmap is missing.
HTH

Thanks for providing more information.
A few questions.
What error do you get when disabling the addon?

I believe the index is created when fluentd starts communicating with Elasticsearch. Can you check the fluentd logs? Hopefully they reveal something.
Thanks again.

On a fresh install of 1.19 in a Hyper-V VM on Ubuntu 20.04, with updates installed,
fluentd doesn't even start:

And on disable it sends:

HTH

We shouldn't print those errors out; they are for those who previously enabled the fluentd addon and want to disable it.

Those errors that you see during disable are benign. What I'm more concerned about is that fluentd doesn't start, as shown in your screenshot.

Can you do kubectl -n kube-system describe pod [fluentd pod]? Let's see what is keeping it from progressing.
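Something like this, using the k8s-app=fluentd-es label from the upstream manifests to find the pod name:

$ kubectl -n kube-system get pods -l k8s-app=fluentd-es
$ kubectl -n kube-system describe pod <fluentd-pod-name>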

I am just noticing that it may be a lack of resources in the test VM. I attach the describe output. However, on a correctly sized machine, the problem described above still exists.
I will resize the VM and send new describe info.
Sorry for the inconvenience.
GB
Name:                 fluentd-es-v3.0.2-w6vx2
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 danube/192.168.42.136
Start Time:           Wed, 30 Sep 2020 16:04:49 +0200
Labels:               controller-revision-hash=94d946f7d
                      k8s-app=fluentd-es
                      pod-template-generation=1
                      version=v3.0.2
Annotations:          cni.projectcalico.org/podIP: 10.1.155.71/32
                      cni.projectcalico.org/podIPs: 10.1.155.71/32
                      seccomp.security.alpha.kubernetes.io/pod: docker/default
Status:               Running
IP:                   10.1.155.71
IPs:
  IP:           10.1.155.71
Controlled By:  DaemonSet/fluentd-es-v3.0.2
Containers:
  fluentd-es:
    Container ID:   containerd://c5e72fe3fe39f5bfca38f21e64ecf0d8a9499456f666dc8a166b6495c49446ba
    Image:          quay.io/fluentd_elasticsearch/fluentd:v3.0.2
    Image ID:       quay.io/fluentd_elasticsearch/fluentd@sha256:7773f9dcabaf1b48d27238c500107aa0498fe04134508548ee537c74598ddfff
    Port:           24231/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 30 Sep 2020 16:13:01 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Wed, 30 Sep 2020 16:10:36 +0200
      Finished:     Wed, 30 Sep 2020 16:11:36 +0200
    Ready:          False
    Restart Count:  6
    Limits:
      memory:  500Mi
    Requests:
      cpu:     100m
      memory:  200Mi
    Liveness:   tcp-socket :prometheus delay=5s timeout=10s period=10s #success=1 #failure=3
    Readiness:  tcp-socket :prometheus delay=5s timeout=10s period=10s #success=1 #failure=3
    Environment:
      FLUENTD_ARGS:  --no-supervisor -q
    Mounts:
      /etc/fluent/config.d from config-volume (rw)
      /var/lib/docker/containers from varlibdockercontainers (ro)
      /var/log from varlog (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from fluentd-es-token-65p74 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  varlog:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log
    HostPathType:
  varlibdockercontainers:
    Type:          HostPath (bare host directory volume)
    Path:          /var/snap/microk8s/common/var/lib/containerd
    HostPathType:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      fluentd-es-config-v0.2.0
    Optional:  false
  fluentd-es-token-65p74:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  fluentd-es-token-65p74
    Optional:    false
QoS Class:       Burstable
Node-Selectors:
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists
                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                 node.kubernetes.io/unreachable:NoExecute op=Exists
                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  8m30s                   default-scheduler  Successfully assigned kube-system/fluentd-es-v3.0.2-w6vx2 to danube
  Normal   Pulling    8m26s                   kubelet            Pulling image "quay.io/fluentd_elasticsearch/fluentd:v3.0.2"
  Normal   Pulled     7m42s                   kubelet            Successfully pulled image "quay.io/fluentd_elasticsearch/fluentd:v3.0.2" in 44.161307816s
  Normal   Pulled     6m43s                   kubelet            Container image "quay.io/fluentd_elasticsearch/fluentd:v3.0.2" already present on machine
  Normal   Created    6m43s (x2 over 7m42s)   kubelet            Created container fluentd-es
  Normal   Started    6m43s (x2 over 7m42s)   kubelet            Started container fluentd-es
  Normal   Killing    6m13s (x2 over 7m13s)   kubelet            Container fluentd-es failed liveness probe, will be restarted
  Warning  Unhealthy  5m52s (x10 over 7m32s)  kubelet            Readiness probe failed: dial tcp 10.1.155.71:24231: connect: connection refused
  Warning  Unhealthy  3m23s (x14 over 7m33s)  kubelet            Liveness probe failed: dial tcp 10.1.155.71:24231: connect: connection refused

Is it possible to get the fluentd logs? kubectl -n kube-system logs ...
It will be easier to read if you post the logs using markdown code snippets.
Thank you for your patience.

Here is the log

sysadmin@danuble:~$ kubectl -n kube-system logs fluentd-es-v3.0.2-2cptn
/usr/local/bundle/gems/kubeclient-4.6.0/lib/kubeclient.rb:27: warning: Using the last argument as keyword parameters is deprecated; maybe ** should be added to the call
/usr/local/bundle/gems/kubeclient-4.6.0/lib/kubeclient/common.rb:61: warning: The called method `initialize_client' is defined here
2020-09-30 19:09:21 +0000 [warn]: [elasticsearch] Could not communicate to Elasticsearch, resetting connection and trying again. connect_write timeout reached
2020-09-30 19:09:21 +0000 [warn]: [elasticsearch] Remaining retry: 14. Retry to communicate after 2 second(s).

And the ES log:

+ export NODE_NAME=elasticsearch-logging-0
+ NODE_NAME=elasticsearch-logging-0
+ export NODE_MASTER=true
+ NODE_MASTER=true
+ export NODE_DATA=true
+ NODE_DATA=true
+ export HTTP_PORT=9200
+ HTTP_PORT=9200
+ export TRANSPORT_PORT=9300
+ TRANSPORT_PORT=9300
+ export MINIMUM_MASTER_NODES=1
+ MINIMUM_MASTER_NODES=1
+ chown -R elasticsearch:elasticsearch /data
+ ./bin/elasticsearch_logging_discovery
I0930 19:10:54.168831       9 elasticsearch_logging_discovery.go:85] Kubernetes Elasticsearch logging discovery
I0930 19:10:54.179870       9 elasticsearch_logging_discovery.go:142] Found []
I0930 19:11:04.182707       9 elasticsearch_logging_discovery.go:142] Found ["10.1.136.135"]
I0930 19:11:04.182741       9 elasticsearch_logging_discovery.go:153] Endpoints = ["10.1.136.135"]
+ exec su elasticsearch -c /usr/local/bin/docker-entrypoint.sh
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2020-09-30T19:11:06,702][WARN ][o.e.c.l.LogConfigurator  ] [elasticsearch-logging-0] Some logging configurations have %marker but don't have %node_name. We will automatically add %node_name to the pattern to ease the migration for users who customize log4j2.properties but will stop this behavior in 7.0. You should manually replace `%node_name` with `[%node_name]%marker ` in these locations:
  /usr/share/elasticsearch/config/log4j2.properties
[2020-09-30T19:11:07,031][INFO ][o.e.e.NodeEnvironment    ] [elasticsearch-logging-0] using [1] data paths, mounts [[/data (/dev/sda2)]], net usable_space [107.5gb], net total_space [124gb], types [ext4]
[2020-09-30T19:11:07,067][INFO ][o.e.e.NodeEnvironment    ] [elasticsearch-logging-0] heap size [1015.6mb], compressed ordinary object pointers [true]
[2020-09-30T19:11:07,070][INFO ][o.e.n.Node               ] [elasticsearch-logging-0] node name [elasticsearch-logging-0], node ID [2jVwW_V7Sd2pN_KazpSQDg], cluster name [kubernetes-logging]
[2020-09-30T19:11:07,070][INFO ][o.e.n.Node               ] [elasticsearch-logging-0] version[7.4.2], pid[15], build[oss/docker/2f90bbf7b93631e52bafb59b3b049cb44ec25e96/2019-10-28T20:40:44.881551Z], OS[Linux/4.15.0-118-generic/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/13.0.1/13.0.1+9]
[2020-09-30T19:11:07,070][INFO ][o.e.n.Node               ] [elasticsearch-logging-0] JVM home [/usr/share/elasticsearch/jdk]
[2020-09-30T19:11:07,070][INFO ][o.e.n.Node               ] [elasticsearch-logging-0] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-1281634468571725669, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -Des.cgroups.hierarchy.override=/, -Dio.netty.allocator.type=unpooled, -XX:MaxDirectMemorySize=536870912, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=oss, -Des.distribution.type=docker, -Des.bundled_jdk=true]
[2020-09-30T19:11:08,282][INFO ][o.e.p.PluginsService     ] [elasticsearch-logging-0] loaded module [aggs-matrix-stats]
[2020-09-30T19:11:08,282][INFO ][o.e.p.PluginsService     ] [elasticsearch-logging-0] loaded module [analysis-common]
[2020-09-30T19:11:08,282][INFO ][o.e.p.PluginsService     ] [elasticsearch-logging-0] loaded module [ingest-common]
[2020-09-30T19:11:08,282][INFO ][o.e.p.PluginsService     ] [elasticsearch-logging-0] loaded module [ingest-geoip]
[2020-09-30T19:11:08,282][INFO ][o.e.p.PluginsService     ] [elasticsearch-logging-0] loaded module [ingest-user-agent]
[2020-09-30T19:11:08,282][INFO ][o.e.p.PluginsService     ] [elasticsearch-logging-0] loaded module [lang-expression]
[2020-09-30T19:11:08,282][INFO ][o.e.p.PluginsService     ] [elasticsearch-logging-0] loaded module [lang-mustache]
[2020-09-30T19:11:08,282][INFO ][o.e.p.PluginsService     ] [elasticsearch-logging-0] loaded module [lang-painless]
[2020-09-30T19:11:08,282][INFO ][o.e.p.PluginsService     ] [elasticsearch-logging-0] loaded module [mapper-extras]
[2020-09-30T19:11:08,282][INFO ][o.e.p.PluginsService     ] [elasticsearch-logging-0] loaded module [parent-join]
[2020-09-30T19:11:08,282][INFO ][o.e.p.PluginsService     ] [elasticsearch-logging-0] loaded module [percolator]
[2020-09-30T19:11:08,282][INFO ][o.e.p.PluginsService     ] [elasticsearch-logging-0] loaded module [rank-eval]
[2020-09-30T19:11:08,282][INFO ][o.e.p.PluginsService     ] [elasticsearch-logging-0] loaded module [reindex]
[2020-09-30T19:11:08,283][INFO ][o.e.p.PluginsService     ] [elasticsearch-logging-0] loaded module [repository-url]
[2020-09-30T19:11:08,283][INFO ][o.e.p.PluginsService     ] [elasticsearch-logging-0] loaded module [transport-netty4]
[2020-09-30T19:11:08,283][INFO ][o.e.p.PluginsService     ] [elasticsearch-logging-0] no plugins loaded
[2020-09-30T19:11:13,022][INFO ][o.e.d.DiscoveryModule    ] [elasticsearch-logging-0] using discovery type [zen] and seed hosts providers [settings]
[2020-09-30T19:11:13,827][INFO ][o.e.n.Node               ] [elasticsearch-logging-0] initialized
[2020-09-30T19:11:13,827][INFO ][o.e.n.Node               ] [elasticsearch-logging-0] starting ...
[2020-09-30T19:11:14,131][INFO ][o.e.t.TransportService   ] [elasticsearch-logging-0] publish_address {10.1.136.135:9300}, bound_addresses {10.1.136.135:9300}
[2020-09-30T19:11:14,155][INFO ][o.e.b.BootstrapChecks    ] [elasticsearch-logging-0] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2020-09-30T19:11:14,196][INFO ][o.e.c.c.Coordinator      ] [elasticsearch-logging-0] setting initial configuration to VotingConfiguration{2jVwW_V7Sd2pN_KazpSQDg}
[2020-09-30T19:11:14,393][INFO ][o.e.c.s.MasterService    ] [elasticsearch-logging-0] elected-as-master ([1] nodes joined)[{elasticsearch-logging-0}{2jVwW_V7Sd2pN_KazpSQDg}{QQmPAS2hQKmY2E3PBByFqw}{10.1.136.135}{10.1.136.135:9300}{dim} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 1, version: 1, reason: master node changed {previous [], current [{elasticsearch-logging-0}{2jVwW_V7Sd2pN_KazpSQDg}{QQmPAS2hQKmY2E3PBByFqw}{10.1.136.135}{10.1.136.135:9300}{dim}]}
[2020-09-30T19:11:14,452][INFO ][o.e.c.c.CoordinationState] [elasticsearch-logging-0] cluster UUID set to [xHjNlCnrRzG58N26P04Xlw]
[2020-09-30T19:11:14,479][INFO ][o.e.c.s.ClusterApplierService] [elasticsearch-logging-0] master node changed {previous [], current [{elasticsearch-logging-0}{2jVwW_V7Sd2pN_KazpSQDg}{QQmPAS2hQKmY2E3PBByFqw}{10.1.136.135}{10.1.136.135:9300}{dim}]}, term: 1, version: 1, reason: Publication{term=1, version=1}
[2020-09-30T19:11:14,556][INFO ][o.e.g.GatewayService     ] [elasticsearch-logging-0] recovered [0] indices into cluster_state
[2020-09-30T19:11:14,557][INFO ][o.e.h.AbstractHttpServerTransport] [elasticsearch-logging-0] publish_address {10.1.136.135:9200}, bound_addresses {10.1.136.135:9200}
[2020-09-30T19:11:14,557][INFO ][o.e.n.Node               ] [elasticsearch-logging-0] started
[2020-09-30T19:11:16,914][INFO ][o.e.c.m.MetaDataCreateIndexService] [elasticsearch-logging-0] [.kibana_1] creating index, cause [api], templates [], shards [1]/[1], mappings [_doc]
[2020-09-30T19:11:16,969][INFO ][o.e.c.r.a.AllocationService] [elasticsearch-logging-0] updating number_of_replicas to [0] for indices [.kibana_1]
[2020-09-30T19:11:17,333][INFO ][o.e.c.r.a.AllocationService] [elasticsearch-logging-0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_1][0]]]).
[2020-09-30T19:11:20,220][WARN ][o.e.d.a.b.BulkRequestParser] [elasticsearch-logging-0] [types removal] Specifying types in bulk requests is deprecated.
[2020-09-30T19:11:20,240][INFO ][o.e.c.m.MetaDataCreateIndexService] [elasticsearch-logging-0] [logstash-2020.09.30] creating index, cause [auto(bulk api)], templates [], shards [1]/[1], mappings []
[2020-09-30T19:11:20,538][INFO ][o.e.c.m.MetaDataMappingService] [elasticsearch-logging-0] [logstash-2020.09.30/8ibgjfuCQPme_HOmS7_M_A] create_mapping [_doc]
[2020-09-30T19:11:20,590][INFO ][o.e.c.m.MetaDataMappingService] [elasticsearch-logging-0] [logstash-2020.09.30/8ibgjfuCQPme_HOmS7_M_A] update_mapping [_doc]
[2020-09-30T19:11:21,237][INFO ][o.e.c.m.MetaDataMappingService] [elasticsearch-logging-0] [logstash-2020.09.30/8ibgjfuCQPme_HOmS7_M_A] update_mapping [_doc]
[2020-09-30T19:11:22,272][INFO ][o.e.c.m.MetaDataMappingService] [elasticsearch-logging-0] [logstash-2020.09.30/8ibgjfuCQPme_HOmS7_M_A] update_mapping [_doc]
[2020-09-30T19:11:23,252][INFO ][o.e.c.m.MetaDataMappingService] [elasticsearch-logging-0] [logstash-2020.09.30/8ibgjfuCQPme_HOmS7_M_A] update_mapping [_doc]
[2020-09-30T19:11:23,310][INFO ][o.e.c.m.MetaDataMappingService] [elasticsearch-logging-0] [logstash-2020.09.30/8ibgjfuCQPme_HOmS7_M_A] update_mapping [_doc]
[2020-09-30T19:11:23,403][INFO ][o.e.c.m.MetaDataMappingService] [elasticsearch-logging-0] [logstash-2020.09.30/8ibgjfuCQPme_HOmS7_M_A] update_mapping [_doc]

Services seem correctly exposed:

 sysadmin@danuble:~$ kubectl -n kube-system get svc
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
kube-dns                ClusterIP   10.152.183.10    <none>        53/UDP,53/TCP,9153/TCP   60m
elasticsearch-logging   ClusterIP   10.152.183.38    <none>        9200/TCP                 58m
kibana-logging          ClusterIP   10.152.183.142   <none>        5601/TCP                 58m

Elasticsearch on port 9200 responds to a GET:

Connecting to 10.152.183.38:9200... connected.
HTTP request sent, awaiting response... 200 OK
Length: 552 [application/json]
Saving to: ‘index.html’

index.html                     100%[====================================================>]     552  --.-KB/s    in 0s

2020-09-30 20:10:41 (188 MB/s) - ‘index.html’ saved [552/552]
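Since fluentd reports connect_write timeouts while ES answers fine from the host, it may also be worth checking reachability from inside the fluentd pod itself. A sketch (it relies on the Ruby interpreter that ships in the fluentd image and on the in-namespace service name):

$ POD=$(kubectl -n kube-system get pods -l k8s-app=fluentd-es -o name | head -n1)
$ kubectl -n kube-system exec "$POD" -- \
    ruby -e 'require "net/http"; puts Net::HTTP.get(URI("http://elasticsearch-logging:9200/"))'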

I looked at the fluentd configmap, https://github.com/ubuntu/microk8s/blob/master/microk8s-resources/actions/fluentd/fluentd-es-configmap.yaml, and it seems correct (port 9200, elasticsearch-logging).
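To compare with what is actually deployed, the live configmap can be dumped as well (the name fluentd-es-config-v0.2.0 comes from the describe output above):

$ kubectl -n kube-system get configmap fluentd-es-config-v0.2.0 -o yaml | grep -nE 'host|port'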

What can I do to help with debugging?
Best regards.
GB

I'm going to ask a stupid question: do you have the dns addon enabled?

Update: answering my own question. The dns addon is enabled by the fluentd addon when it is not already enabled. And it's in one of your responses. :blush:
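For anyone following along, a quick way to double-check which addons are enabled:

$ microk8s status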

I couldn't figure out why fluentd's health check isn't working.

@geekbot, do you mind editing the fluentd DaemonSet and removing the liveness and readiness probes?
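kubectl edit works, or a patch along these lines (a sketch; the DaemonSet name is the one from your describe output, and the probes sit on the first container of the pod template):

$ kubectl -n kube-system patch daemonset fluentd-es-v3.0.2 --type=json -p='[
    {"op": "remove", "path": "/spec/template/spec/containers/0/livenessProbe"},
    {"op": "remove", "path": "/spec/template/spec/containers/0/readinessProbe"}
  ]'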

Apologies for the inconvenience.

Am I the only person who faces this issue?
On my side the problem is very repeatable: in Azure, in VirtualBox, on Hyper-V. Also repeatable on Ubuntu 18.04 and 20.04.

I will try to remove the liveness and readiness probes.

I did remove the probes and redeployed the fluentd DaemonSet.

Here is the fluentd log

sysadmin@danuble:~$ kubectl -n kube-system logs fluentd-es-v3.0.2-fsh82
/usr/local/bundle/gems/kubeclient-4.6.0/lib/kubeclient.rb:27: warning: Using the last argument as keyword parameters is deprecated; maybe ** should be added to the call
/usr/local/bundle/gems/kubeclient-4.6.0/lib/kubeclient/common.rb:61: warning: The called method `initialize_client' is defined here
/usr/local/bundle/gems/fluentd-1.9.3/lib/fluent/plugin_helper/http_server/compat/server.rb:84: warning: Using the last argument as keyword parameters is deprecated; maybe ** should be added to the call
/usr/local/bundle/gems/fluentd-1.9.3/lib/fluent/plugin_helper/http_server/compat/webrick_handler.rb:26: warning: The called method `build' is defined here
2020-10-01 15:31:32 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush: kernel:default" location=nil tag="kernel" time=2020-10-01 15:31:32.677812031 +0000 record={"priority"=>"6", "boot_id"=>"95f70d663f7f445f9711553a2f0eb173", "machine_id"=>"0a5d85bf7d954cccbb8b721080e05441", "hostname"=>"danuble", "source_monotonic_timestamp"=>"18242426039", "transport"=>"kernel", "syslog_facility"=>"0", "syslog_identifier"=>"kernel", "message"=>"IPv6: ADDRCONF(NETDEV_UP): eth0: link is not readyIPv6: ADDRCONF(NETDEV_UP): caliaa363f64fa5: link is not readyIPv6: ADDRCONF(NETDEV_CHANGE): caliaa363f64fa5: link becomes readyIPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready"}
2020-10-01 15:31:32.677894132 +0000 fluent.warn: {"error":"#<Fluent::Plugin::ConcatFilter::TimeoutError: Timeout flush: kernel:default>","location":null,"tag":"kernel","time":1601566292,"record":{"priority":"6","boot_id":"95f70d663f7f445f9711553a2f0eb173","machine_id":"0a5d85bf7d954cccbb8b721080e05441","hostname":"danuble","source_monotonic_timestamp":"18242426039","transport":"kernel","syslog_facility":"0","syslog_identifier":"kernel","message":"IPv6: ADDRCONF(NETDEV_UP): eth0: link is not readyIPv6: ADDRCONF(NETDEV_UP): caliaa363f64fa5: link is not readyIPv6: ADDRCONF(NETDEV_CHANGE): caliaa363f64fa5: link becomes readyIPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready"},"message":"dump an error event: error_class=Fluent::Plugin::ConcatFilter::TimeoutError error=\"Timeout flush: kernel:default\" location=nil tag=\"kernel\" time=2020-10-01 15:31:32.677812031 +0000 record={\"priority\"=>\"6\", \"boot_id\"=>\"95f70d663f7f445f9711553a2f0eb173\", \"machine_id\"=>\"0a5d85bf7d954cccbb8b721080e05441\", \"hostname\"=>\"danuble\", \"source_monotonic_timestamp\"=>\"18242426039\", \"transport\"=>\"kernel\", \"syslog_facility\"=>\"0\", \"syslog_identifier\"=>\"kernel\", \"message\"=>\"IPv6: ADDRCONF(NETDEV_UP): eth0: link is not readyIPv6: ADDRCONF(NETDEV_UP): caliaa363f64fa5: link is not readyIPv6: ADDRCONF(NETDEV_CHANGE): caliaa363f64fa5: link becomes readyIPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready\"}"}

It is a network problem. I checked the NIC, and eth0 is working correctly. Here is the ifconfig output.

sysadmin@danuble:~$ ifconfig
cali70cea27912b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440
        inet6 fe80::ecee:eeff:feee:eeee  prefixlen 64  scopeid 0x20<link>
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 1707  bytes 114454 (114.4 KB)
        RX errors 0  dropped 2  overruns 0  frame 0
        TX packets 1570  bytes 103336 (103.3 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

cali982e8c853df: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440
        inet6 fe80::ecee:eeff:feee:eeee  prefixlen 64  scopeid 0x20<link>
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 38441  bytes 3645786 (3.6 MB)
        RX errors 0  dropped 2  overruns 0  frame 0
        TX packets 42921  bytes 14679373 (14.6 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

calia6ecaf1e34b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440
        inet6 fe80::ecee:eeff:feee:eeee  prefixlen 64  scopeid 0x20<link>
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 22224  bytes 4369040 (4.3 MB)
        RX errors 0  dropped 2  overruns 0  frame 0
        TX packets 33273  bytes 9177453 (9.1 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

caliaa363f64fa5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440
        inet6 fe80::ecee:eeff:feee:eeee  prefixlen 64  scopeid 0x20<link>
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 364  bytes 138886 (138.8 KB)
        RX errors 0  dropped 2  overruns 0  frame 0
        TX packets 286  bytes 166290 (166.2 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

calic3d36d58a37: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440
        inet6 fe80::ecee:eeff:feee:eeee  prefixlen 64  scopeid 0x20<link>
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 43905  bytes 8529106 (8.5 MB)
        RX errors 0  dropped 2  overruns 0  frame 0
        TX packets 40055  bytes 4693541 (4.6 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

calic50351e6f31: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440
        inet6 fe80::ecee:eeff:feee:eeee  prefixlen 64  scopeid 0x20<link>
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 9509  bytes 864883 (864.8 KB)
        RX errors 0  dropped 2  overruns 0  frame 0
        TX packets 10100  bytes 3310696 (3.3 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.42.149  netmask 255.255.255.0  broadcast 192.168.42.255
        inet6 fe80::215:5dff:fe2a:a01  prefixlen 64  scopeid 0x20<link>
        ether 00:15:5d:2a:0a:01  txqueuelen 1000  (Ethernet)
        RX packets 216958  bytes 309494752 (309.4 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 90078  bytes 6635719 (6.6 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 3236489  bytes 1514036181 (1.5 GB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3236489  bytes 1514036181 (1.5 GB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vxlan.calico: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1410
        inet 10.1.136.128  netmask 255.255.255.255  broadcast 10.1.136.128
        inet6 fe80::6476:61ff:fe2c:2088  prefixlen 64  scopeid 0x20<link>
        ether 66:76:61:2c:20:88  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 7 overruns 0  carrier 0  collisions 0

I couldn't reproduce this. :( But I'm running on a Linux box directly.

$ kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-847c8c99d-xg9rr   1/1     Running   2          122m
kube-system   calico-node-zpcgb                         1/1     Running   3          122m
kube-system   coredns-86f78bb79c-wvj6g                  1/1     Running   2          122m
kube-system   elasticsearch-logging-0                   1/1     Running   1          121m
kube-system   kibana-logging-7cf6dc4687-gzztc           1/1     Running   4          121m
kube-system   fluentd-es-v3.0.2-prh7v                   1/1     Running   5          121m

I just did:

$ sudo snap install microk8s --channel 1.19/stable --classic
$ #wait for microk8s to be up.
$ microk8s enable fluentd