If it fails even with root credentials, the downloaded binary file is probably the issue. Check the link below to reinstall kubectl:
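In case the link is unavailable, the upstream install steps for a Linux amd64 machine look roughly like this (adjust the OS and architecture to match your node):

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client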
Using the right user fixed the issue for me. I set up the cluster using the kops method on AWS.
I logged into the cluster as the "ubuntu" user with the command ssh -i ~/.ssh/id_rsa ubuntu@api.demo.k8s.xyz.net
When I switched to the root account, kubectl get nodes stopped working. It works only for the ubuntu user.
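If you do want kubectl to also work as root, a minimal sketch (assuming kops wrote the kubeconfig to the ubuntu user's default ~/.kube/config) is to copy that file over:

sudo mkdir -p /root/.kube
sudo cp /home/ubuntu/.kube/config /root/.kube/config
sudo kubectl get nodes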
Yes, this is the exact solution to this issue. If you set up the environment for a particular user, you need to run kubectl while logged in as that same user; if you run it as another user or as root, it will throw this error.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run these commands and it will resolve the issue.
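To confirm the copied config is picked up, run a quick check as the same non-root user:

kubectl cluster-info
kubectl get nodes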
It worked for me, too. All I had to do was use the same (non-root) user that issued the command: "minikube start --vm-driver=docker"
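To double-check that the context minikube created belongs to the user you are running kubectl as, these standard commands help:

minikube status
kubectl config current-context   # usually prints "minikube"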
Resolved:
I was able to replicate the same error and fix it using Hawaiideveloper's solution.
I was also suffering from the "kubectl the connection to the server was refused 8080" issue.
The solution for this issue is to run kubectl as the appropriate user. Don't use sudo; using sudo will give this error. Only the user that set up the master node is able to use kubectl.
It doesn't have to be on the control plane node. They just need their kubeconfig configured.
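For example, to run kubectl from a workstation instead of the control-plane node, you can copy the kubeconfig off the control plane (the hostname below is a placeholder):

mkdir -p ~/.kube
scp root@<control-plane-host>:/etc/kubernetes/admin.conf ~/.kube/config
kubectl get nodes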
@mrbobbytables can you please suggest what needs to be done to set up contexts? I am getting output similar to the one you said indicates no context is configured.
First you have to check whether you are running the commands on the master or the workers,
then you can try the above suggestions.
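A quick way to tell which kind of node you are on, assuming a kubeadm-style setup where admin.conf lives in its default location:

ls /etc/kubernetes/admin.conf    # present on control-plane (master) nodes
ps -ef | grep kube-apiserver     # the API server process runs on control-plane nodes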
Hey, @Sankalp_Suryawanshi
First, check the available contexts with:
kubectl config get-contexts
If your context is there, you can use it.
Go through this link; commands and examples are available there.
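A minimal sketch of listing and switching contexts (the context name kubernetes-admin@kubernetes is just the kubeadm default; yours may differ):

kubectl config get-contexts
kubectl config use-context kubernetes-admin@kubernetes
kubectl config current-context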
Use kubectl without sudo.
@Sankalp_Suryawanshi Please run it without root or sudo … it will work.
Hi man, thanks for the tips. It helped take the frustration away.
This worked for me. Thanks for the help.
This one worked for me.
It helped me a lot.
This exact error was thrown from the node where I was supposed to create a static pod.
Based on the lead from "elp", I scp'ed the whole .kube directory from the control node to the worker. Basically, there was no .kube present on node01.
scp -r .kube node01:/root did the trick for me.
Thanks all.
The actual port can be different from 8080.
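To see which host and port kubectl is actually trying to reach, inspect the server field of the active kubeconfig:

kubectl config view --minify | grep server
kubectl cluster-info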
You most probably followed these 4 lines:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ export KUBECONFIG=/etc/kubernetes/admin.conf
-------- root cause:
$ export KUBECONFIG=/etc/kubernetes/admin.conf
the owner of /etc/kubernetes/admin.conf is root, so kubectl run as a regular user cannot read it and falls back to the default localhost:8080.
-------- workaround:
$ export KUBECONFIG=$HOME/.kube/config
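To make the workaround persist across shells, one common approach (assuming a bash login shell) is to append the export to your profile:

$ echo 'export KUBECONFIG=$HOME/.kube/config' >> ~/.bashrc
$ source ~/.bashrc
$ kubectl get nodes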