The proper way to access kibana is actually to do this:
$ microk8s kubectl cluster-info
Kubernetes control plane is running at https://192.168.1.1:16443
...
Kibana is running at https://192.168.116.50:16443/api/v1/namespaces/kube-system/services/kibana-logging/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Open the Kibana link shown in the cluster-info output.
For this to work, you first need to modify Kibana’s deployment, as the microk8s add-on is misconfigured:
$ microk8s kubectl set env deployments.apps kibana-logging \
SERVER_BASEPATH=/api/v1/namespaces/kube-system/services/kibana-logging/proxy
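To confirm the change took effect, something like this should work (just a sketch; I’m assuming the kibana-logging deployment lives in kube-system, as the proxy URL above suggests):
$ microk8s kubectl -n kube-system rollout status deployment/kibana-logging
# once the pods have restarted, check that the env var is present:
$ microk8s kubectl -n kube-system get deployment kibana-logging \
    -o jsonpath='{.spec.template.spec.containers[0].env}'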
I am not sure if I should modify the wiki or if someone else is responsible for that. Please advise.
@gildas it is published as a wiki so anyone can contribute if you want to. I do check over the additions.
However, if there is a bug in the deployment we should probably fix that rather than document it @kjackal?
@evilnick, sure, but I don’t want to overstep and break things that were done differently before (as @balchua1 mentions).
That said, I installed a brand new microk8s and enabled fluentd. The kubectl proxy mentioned in the wiki didn’t work: I could see in the browser’s web console that the pages were trying to fetch js/css resources using the wrong path.
That’s why I decided to explore the kubectl cluster-info method and realized the kibana deployment was just an environment variable away from working.
IMHO, I kind of like the cluster-info approach as I don’t have to remember to run kubectl proxy or kubectl port-forward in another shell. Plus, as it is deployed today, the add-on lists Kibana in the cluster-info output, so it’s best to use it.
The drawback of cluster-info lies in the self-signed certificate that modern browsers really do not like.
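For reference, the proxy / port-forward alternatives mentioned above look roughly like this (a sketch; the kibana-logging service name is taken from the cluster-info output above, and port 5601 is an assumption):
$ microk8s kubectl proxy
# then browse http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kibana-logging/proxy
$ microk8s kubectl -n kube-system port-forward service/kibana-logging 5601:5601
# then browse http://localhost:5601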
If you’re not going to index too many logs (a few MBs), the default settings configured for Elasticsearch seem OK to me.
Here’s what I notice about Elasticsearch: it uses a good amount of memory and IO if you’re indexing lots of logs. Fluentd isn’t that much of a resource hog.
In many cases I’ve seen, Elasticsearch is allocated its own dedicated node, but those are heavily used Elasticsearch deployments.
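If Elasticsearch does get tight on resources, one option is to raise its requests/limits (just a sketch; the elasticsearch-logging name and the object kind are assumptions, so check what the add-on actually deployed first):
$ microk8s kubectl -n kube-system get statefulsets,deployments | grep -i elasticsearch
# assuming it shows up as a StatefulSet named elasticsearch-logging:
$ microk8s kubectl -n kube-system set resources statefulset elasticsearch-logging \
    --requests=memory=1Gi --limits=memory=2Gi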
Another option that I tend to use lately is Loki, the logging stack from Grafana. It needs a bit less storage, but it also has fewer features than a full-text search engine like Elasticsearch.
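In case it helps, a typical way to try Loki on MicroK8s is via Helm (a sketch; the chart names come from Grafana’s public Helm repo and may change over time):
$ microk8s enable helm3
$ microk8s helm3 repo add grafana https://grafana.github.io/helm-charts
$ microk8s helm3 repo update
$ microk8s helm3 install loki grafana/loki-stack --namespace logging --create-namespace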
Thanks for the report @miah0x41 - it seems the upstream docs have had a clear-out and no longer document Kibana, so I removed the link. The official docs are still good, and there are loads of tutorials for Kibana if you do a quick search.