I really love using microk8s, but I am a little worried about the current state of the project. What baffles me is that the main product has not been installable on the flagship distribution since January, which is two upstream k8s releases ago!
The most basic smoke test of microk8s would cover such a case, I suppose.
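For instance, something as small as this would already catch it (a hypothetical CI-style check of my own, the channel name is just an example):

```bash
#!/usr/bin/env bash
# Hypothetical smoke test on a clean machine running the latest Ubuntu:
# fail if the snap cannot be installed or the node never becomes Ready.
set -euo pipefail

sudo snap install microk8s --classic --channel=1.30/stable  # example channel
sudo microk8s status --wait-ready --timeout 300
sudo microk8s kubectl get nodes
```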
Additionally, there are numerous open issues hanging around that don't let me sleep well either, for example:
**Issue opened 08 Jun 2022, 08:20 UTC:**
#### Summary
I've set up a 4-node microk8s cluster on bare metal machines. Every now and then the `/snap/microk8s/3204/bin/k8s-dqlite` process will spike one of the cores on one of my nodes to 100% usage, sending my fans into overdrive.
All the other cores are running at <6% usage, and RAM is hardly used:
![htop](https://user-images.githubusercontent.com/88759430/172596283-0c5a7978-2ed8-4a22-aed2-5f62a26ef494.png)
The specs of the machines are as follows:
- Node 1:
  - CPU: AMD Threadripper 1950X
  - RAM: 64 GB
- Node 2:
  - CPU: i7-7820X
  - RAM: 64 GB
- Node 3:
  - CPU: i7-9700
  - RAM: 32 GB
- Node 4:
  - CPU: i7-9700K
  - RAM: 64 GB
The cluster has the metallb, dns, rbac, and storage addons enabled.
I've also deployed Rook-Ceph on the cluster.
#### What Should Happen Instead?
The `k8s-dqlite` process shouldn't be pinning a core at 100% usage.
#### Reproduction Steps
1. Create a microk8s cluster
2. Deploy Rook-Ceph
3. Wait a bit.
4. I'm not sure how to properly reproduce this issue...
#### Introspection Report
[inspection-report-20220608_143601.tar.gz](https://github.com/canonical/microk8s/files/8859648/inspection-report-20220608_143601.tar.gz)
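When that happens, at least confirming that it really is `k8s-dqlite` pinning the core is straightforward, e.g. with a generic check like this (sysstat's `pidstat`; not part of the original report):

```bash
# Sample CPU usage of the k8s-dqlite process: 12 samples, 5 seconds apart
pid=$(pgrep -f k8s-dqlite)
pidstat -u -p "$pid" 5 12
```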
**Issue opened 18 Nov 2023, 20:11 UTC:**
#### Summary
The backup consists of a 213-215 byte tar.gz file containing nothing. This started happening after upgrading to 1.27.
Running `/snap/microk8s/current/bin/migrator --endpoint unix://${SNAP_DATA}/var/kubernetes/backend/kine.sock:12379 --mode backup-dqlite --db-dir ./` hangs for more than a day and never completes.
I created another issue, but responses stopped 3 weeks ago: https://github.com/canonical/microk8s/issues/4259
#### What Should Happen Instead?
The backup should work and, going by previous backups, be around 90 MB.
#### Reproduction Steps
- Create `script.sh` with the following content:
```bash
#!/bin/bash
# Restore a known-broken dqlite backend into a fresh microk8s 1.28 install.
if [ "$1" = "" ] ; then
    echo "Provide the server's IP address as the first parameter"
    exit 1
fi
cd
# Fetch the sample broken backend published for this issue
wget https://github.com/franco-martin/test-franco/releases/download/microk8s-4308/small-broken-backup.tar.gz
snap install microk8s --classic --channel=1.28/stable
microk8s stop
# Swap the freshly created backend for the broken one
cp small-broken-backup.tar.gz /var/snap/microk8s/current/var/kubernetes/
cd /var/snap/microk8s/current/var/kubernetes/
mv backend backend_2
tar -xzvf small-broken-backup.tar.gz
# Replace the original node IP with this machine's IP
IP=$1
sed -i s/192.168.1.79/$IP/g backend/localnode.yaml
sed -i s/192.168.1.79/$IP/g backend/info.yaml
sed -i s/192.168.1.79/$IP/g backend/cluster.yaml
# Reconfigure dqlite so the restored cluster data points at this node
/snap/microk8s/current/bin/dqlite -s 127.0.0.1:19001 -c /var/snap/microk8s/current/var/kubernetes/backend/cluster.crt -k /var/snap/microk8s/current/var/kubernetes/backend/cluster.key k8s ".reconfigure /var/snap/microk8s/current/var/kubernetes/backend/ /var/snap/microk8s/current/var/kubernetes/backend/cluster.yaml"
cd
microk8s start
```
- Get the IP of your server and run `bash ./script.sh YOUR_IP`
- Verify the backup was restored by running `microk8s kubectl get nodes`. You should have two nodes.
- Run a backup
- Verify its size (see the sketch below)
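A minimal sketch of those last two steps, assuming the backup is taken with `microk8s dbctl backup`:

```bash
# Take a dqlite backup and check whether it is one of the ~213-byte empty archives
sudo microk8s dbctl backup backup.tar.gz
ls -lh backup.tar.gz
tar -tzvf backup.tar.gz | head
```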
#### Introspection Report
[inspection-report-20231019_220512.tar.gz](https://github.com/canonical/microk8s/files/13048110/inspection-report-20231019_220512.tar.gz)
#### Can you suggest a fix?
#### Are you interested in contributing with a fix?
I'll help with whatever I can.
**Issue opened 09 Oct 2023, 17:42 UTC:**
I have created my cluster several times with microk8s, but after 2 or 3 weeks the master node stops working properly: the service simply does not start and neither do the nodes. It works fine for a while. I have generated the inspection tarball and looked through it, but I cannot find anything specific:
1. `microk8s status`:
   microk8s is not running. Use microk8s inspect for a deeper inspection.
2. error: Get "https://127.0.0.1:16443/api/v1/namespaces/kube-system/services?labelSelector=kubernetes.io%2Fcluster-service%3Dtrue": dial tcp 127.0.0.1:16443: connect: connection refused - error from a previous attempt: unexpected EOF
3. `microk8s kubectl logs -n kube-system -l component=kube-apiserver`:
   error: Get "https://127.0.0.1:16443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver": dial tcp 127.0.0.1:16443: connect: connection refused - error from a previous attempt: unexpected EOF
4. `microk8s.inspect` (everything reports as running):
Inspecting system
Inspecting Certificates
Inspecting services
Service snap.microk8s.daemon-cluster-agent is running
Service snap.microk8s.daemon-containerd is running
Service snap.microk8s.daemon-kubelite is running
Service snap.microk8s.daemon-k8s-dqlite is running
Service snap.microk8s.daemon-apiserver-kicker is running
Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
Copy processes list to the final report tarball
Copy disk usage information to the final report tarball
Copy memory usage information to the final report tarball
Copy server uptime to the final report tarball
Copy openSSL information to the final report tarball
Copy snap list to the final report tarball
Copy VM name (or none) to the final report tarball
Copy current linux distribution to the final report tarball
Copy asnycio usage and limits to the final report tarball
Copy inotify max_user_instances and max_user_watches to the final report tarball
Copy network configuration to the final report tarball
Inspecting kubernetes cluster
Inspect kubernetes cluster
Inspecting dqlite
Inspect dqlite
[inspection-report.zip](https://github.com/canonical/microk8s/files/12848939/inspection-report.zip)
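Regarding the connection-refused errors above, the quickest sanity checks are usually whether anything is listening on the API server port at all, and what kubelite logged right before it stopped, e.g. (generic commands, not taken from the report):

```bash
# Is anything listening on the MicroK8s API server port?
sudo ss -lntp | grep 16443 || echo "nothing listening on 16443"

# Recent logs from the kubelite daemon (API server, scheduler, kubelet, ...)
sudo journalctl -u snap.microk8s.daemon-kubelite -n 200 --no-pager
```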
So my main question is: can we really rely on microk8s for business-critical use cases? Or is it intended for makers building AI-powered cat doors on Raspberry Pis?