Thank you @balchua1, I followed your commands exactly, but it still fails. I suppose it's a virtual-machine issue, but it's the same on every hypervisor I use. Do you have any recommendations on network settings for the host that runs MicroK8s?
I've attached screenshots of what I did and the result. If you see a mistake on my side, let me know!
Thanks a lot
sysadmin@loire:~$ sudo snap install microk8s --channel 1.19/stable --classic
[sudo] password for sysadmin:
microk8s (1.19/stable) v1.19.2 from Canonical✓ installed
sysadmin@loire:~$ microk8s enable fluentd
Insufficient permissions to access MicroK8s.
You can either try again with sudo or add the user sysadmin to the 'microk8s' group:
sudo usermod -a -G microk8s sysadmin
sudo chown -f -R sysadmin ~/.kube
The new group will be available on the user's next login.
sysadmin@loire:~$ sudo microk8s enable fluentd
Enabling Fluentd-Elasticsearch
Labeling nodes
node/loire labeled
Enabling DNS
Applying manifest
serviceaccount/coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
clusterrole.rbac.authorization.k8s.io/coredns created
clusterrolebinding.rbac.authorization.k8s.io/coredns created
Restarting kubelet
DNS is enabled
service/elasticsearch-logging created
serviceaccount/elasticsearch-logging created
clusterrole.rbac.authorization.k8s.io/elasticsearch-logging created
clusterrolebinding.rbac.authorization.k8s.io/elasticsearch-logging created
statefulset.apps/elasticsearch-logging created
configmap/fluentd-es-config-v0.2.0 created
serviceaccount/fluentd-es created
clusterrole.rbac.authorization.k8s.io/fluentd-es created
clusterrolebinding.rbac.authorization.k8s.io/fluentd-es created
daemonset.apps/fluentd-es-v3.0.2 created
deployment.apps/kibana-logging created
service/kibana-logging created
Fluentd-Elasticsearch is enabled
sysadmin@loire:~$ sudo microk8s kubectl proxy
Starting to serve on 127.0.0.1:8001
By the way, this command gives an astonishing result:
sysadmin@loire:~$ microk8s status --wait-ready
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
enabled:
dns # CoreDNS
fluentd # Elasticsearch-Fluentd-Kibana logging and monitoring
ha-cluster # Configure high availability on the current node
disabled:
ambassador # Ambassador API Gateway and Ingress
cilium # SDN, fast with full network policy
dashboard # The Kubernetes dashboard
gpu # Automatic enablement of Nvidia CUDA
helm # Helm 2 - the package manager for Kubernetes
helm3 # Helm 3 - Kubernetes package manager
host-access # Allow Pods connecting to Host services smoothly
ingress # Ingress controller for external access
istio # Core Istio service mesh services
jaeger # Kubernetes Jaeger operator with its simple config
knative # The Knative framework on Kubernetes.
kubeflow # Kubeflow for easy ML deployments
linkerd # Linkerd is a service mesh for Kubernetes and other frameworks
metallb # Loadbalancer for your Kubernetes cluster
metrics-server # K8s Metrics Server for API access to service metrics
multus # Multus CNI enables attaching multiple network interfaces to pods
prometheus # Prometheus operator for monitoring and logging
rbac # Role-Based Access Control for authorisation
registry # Private image registry exposed on localhost:32000
storage # Storage class; allocates storage from host directory
How can your install work with RBAC disabled?
I am confused.
I did a fresh install on Hyper-V, and it really is a node taint/toleration issue.
Elasticsearch is not scheduled because the network is unreachable.
I suspect a Calico incompatibility with the Hyper-V network configuration.
Has anyone found a way to configure the network so that Elasticsearch can reach it?
Can MicroK8s work in a VM?
elasticsearch-logging-token-6nzp7:
Type: Secret (a volume populated by a Secret)
SecretName: elasticsearch-logging-token-6nzp7
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
I found the solution in this excellent article by Chris McKeown: Enabling the Calico CNI provider with Minikube on Hyper-V.
Setting the environment variable FELIX_IGNORELOOSERPF to true did the trick.
I had to restart MicroK8s.
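For reference, here is one way to apply that setting. This is only a sketch: it assumes the Calico daemonset is named calico-node in the kube-system namespace (the default in MicroK8s 1.19 with the ha-cluster addon); check `microk8s kubectl -n kube-system get daemonsets` if yours differs.

```shell
# Set the Felix env var on the Calico daemonset so it tolerates
# the loose reverse-path-filtering setup seen on Hyper-V networks
microk8s kubectl -n kube-system set env daemonset/calico-node FELIX_IGNORELOOSERPF=true

# Wait for the updated calico-node pods to roll out
microk8s kubectl -n kube-system rollout status daemonset/calico-node

# Restart MicroK8s so all components pick up the change
microk8s stop && microk8s start
```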
Then expose the port: kubectl -n kube-system port-forward svc/kibana-logging 8001:5601
And open an SSH tunnel to reach localhost on 8001.
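The two access steps can be sketched as follows; the hostname "loire" and user "sysadmin" are taken from the session output above and will differ in other setups:

```shell
# On the VM: forward the Kibana service port to local port 8001
microk8s kubectl -n kube-system port-forward svc/kibana-logging 8001:5601

# On the workstation: tunnel local port 8001 to port 8001 on the VM,
# then browse to http://localhost:8001 to reach Kibana
ssh -L 8001:localhost:8001 sysadmin@loire
```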
So it works.
Thanks to @balchua1 & Chris McKeown