cp: cannot stat '/etc/kubernetes/admin.conf': No such file or directory
and I have checked below; I could see it is still running.
kubectl cluster-info
Kubernetes master is running at http://localhost:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
but when I use curl localhost:8080 it throws the error below.
curl localhost:8080
curl: (7) Failed connect to localhost:8080; Connection refused
user@sqa02:/etc/kubernetes$ mkdir -p $HOME/.kube
user@sqa02:/etc/kubernetes$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: cannot stat '/etc/kubernetes/admin.conf': No such file or directory
root@sqa03:~# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
root@sqa03:~# curl localhost:8080
curl: (7) Failed to connect to localhost port 8080: Connection refused
To reproduce, follow the instructions provided in this TechRepublic article.
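A quick way to see why kubectl falls back to localhost:8080 is to check which kubeconfig it is actually using and whether an API server is listening at all. A rough sketch of such checks, assuming a kubeadm-based install on the control-plane node:

# Show which cluster/server kubectl is configured to talk to; localhost:8080 is the
# fallback when no kubeconfig is found, which usually means ~/.kube/config was never created
kubectl config view

# Check whether the kubelet is running and whether anything is listening on the API ports
sudo systemctl status kubelet
sudo ss -tlnp | grep -E '6443|8080'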
Getting kubectl to run really depends on how you installed it. Basically, if you install it and have a proper config file, it should always work. So either an old file from a previous installation is still there, or something silly like that (although that is usually difficult to spot).
Also, make sure the commands don't fail (some pastes in this post show that the step to copy the kubectl config failed). That copy is how you authenticate to the cluster, so it will never work if that step doesn't succeed.
If I were you, I'd remove everything from a previous run, start from scratch, and make sure nothing fails. If something does fail, try to fix that instead of continuing with the next steps. And if you can't fix it, please report back with the steps you ran, why it failed (the error), and what you tried that didn't work.
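For reference, a rough sketch of what "removing everything from a previous run" could look like on a kubeadm-based setup (treat this as an assumption about the install method, not a recipe):

# Tear down anything left by a previous kubeadm attempt
sudo kubeadm reset

# Remove a stale kubeconfig from an earlier installation
rm -rf $HOME/.kube

# Re-initialize the control plane and, only if it succeeds, copy the fresh admin.conf
sudo kubeadm init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config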
I'm beginning my studies in Kubernetes, following the tutorial (Install and Set Up kubectl - Kubernetes), and when I type 'kubectl cluster-info' I receive the message "To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?"
When I run kubectl cluster-info dump I receive the message "The connection to the server localhost:8080 was refused - did you specify the right host or port?"
The documentation I found says to look for the file admin.conf in the folder /etc/kubernetes, but when I do, I can't find the folder.
What can I do?
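/etc/kubernetes/admin.conf only exists on a control-plane node after kubeadm init has completed successfully; the "Install and Set Up kubectl" page only installs the client and does not create a cluster. A minimal check, assuming you intend to run a kubeadm-based control plane on this machine:

# If this directory is missing or empty, no control plane has been initialized here yet
ls -l /etc/kubernetes/

# Initialize the control plane; admin.conf is written as part of this step
sudo kubeadm init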
I have recently started working on this as well and I am running into the same brick wall.
I am running two nodes, one as the master and the other as the host.
Both are VMs running CentOS in Oracle Virtual Manager.
I got Docker installed on the host and kubectl installed on the master.
I can SSH from the master to the host and vice versa, and I receive ping replies as well, but I cannot telnet into either of the machines from my physical Windows machine.
When it comes to 'kubectl get nodes' I receive the error: The connection to the server x.x.x.x:6443 was refused - did you specify the right host or port?
~]$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://x.x.x.x:6443
  name: local
contexts:
- context:
    cluster: local
    user: kube-admin-local
  name: local
current-context: local
kind: Config
preferences: {}
users:
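The config above points kubectl at https://x.x.x.x:6443, so "connection refused" usually means nothing is listening on that port on the master. A rough sketch of checks to run on the master, assuming CentOS with firewalld as described above (adjust to your setup):

# Is the kubelet running, and is the API server listening on 6443?
sudo systemctl status kubelet
sudo ss -tlnp | grep 6443

# With firewalld active, the API server and kubelet ports must be reachable from the other node
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --reload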
I've got this output:
(base) jiribenes:~$ sudo kubeadm init
[sudo] password for jiribenes:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local rurhelena1920] and IPs [10.96.0.1 172.30.147.238]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost rurhelena1920] and IPs [172.30.147.238 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost rurhelena1920] and IPs [172.30.147.238 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.503678 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node rurhelena1920 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node rurhelena1920 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: r1cctv.zgkk2aore4luh7wo
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: Installing Addons | Kubernetes
Then you can join any number of worker nodes by running the following on each as root:
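For completeness, the post-init steps that output refers to are typically the following; this is a sketch assuming the defaults shown above, and the actual kubeadm join line with its token and hash is printed at the end of your own init output:

# As a regular user, copy the admin kubeconfig so kubectl stops falling back to localhost:8080
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Deploy a pod network add-on of your choice (replace the placeholder with a real manifest)
kubectl apply -f <podnetwork>.yaml

# Verify the control plane is reachable
kubectl cluster-info
kubectl get nodes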