Problem:
When I run e.g. kubectl get nodes I get this error:
error: error loading config file "/home/lab/.kube/config": yaml: line 16: did not find expected key
When I remove line 16, current-context: "kubernetes-admin@some1", I get another error:
error: error loading config file "/home/gitlab-runner/.kube/config": error converting *[]NamedAuthInfo into *map[string]*api.AuthInfo: duplicate name "kubernetes-admin" in list: [{kubernetes-admin { [45 45
... very long output of numbers... 45 45 10] [] map[] <nil> <nil> []}}]
I guess this is because both clusters use user: kubernetes-admin, which is something I cannot change on the cluster side.
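Since the user name in a kubeconfig is only a local label, one workaround is to keep each cluster in its own file, rename the user entry in one of them, and let kubectl merge the files through the KUBECONFIG variable. A minimal sketch, where the file names and the renamed user kubernetes-admin-some1 are placeholders, not the real values:

    # in config-some1, rename the user entry and update the context that references it,
    # e.g. kubernetes-admin -> kubernetes-admin-some1 (placeholder name)
    export KUBECONFIG=$HOME/.kube/config-some1:$HOME/.kube/config-some2
    kubectl config get-contexts                           # both contexts should now be listed
    kubectl config view --flatten > $HOME/.kube/config    # optionally write one merged file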
$ kubectl get node
E0521 12:53:29.547657 321565 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0521 12:53:29.548296 321565 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0521 12:53:29.549911 321565 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0521 12:53:29.550502 321565 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0521 12:53:29.552173 321565 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
It seems that .kube/config requires exactly one current-context key (although it was there in my opening post, somehow it didn't work).
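For reference, the active context can be checked and set from the command line instead of editing the file by hand; the context name below is the one from my config:

    kubectl config current-context                        # print the currently selected context
    kubectl config use-context kubernetes-admin@some2     # write current-context into ~/.kube/config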
Here is the final version:
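The real file contains the actual server addresses, certificate data and user names, so the following is only a sketch of the structure; everything marked as a placeholder is not the real value, and the renamed users kubernetes-admin-some1/kubernetes-admin-some2 are assumptions to show how the duplicate name can be avoided. The key points are a single current-context and no duplicate names in the users list:

    apiVersion: v1
    kind: Config
    current-context: kubernetes-admin@some2
    clusters:
    - name: some1
      cluster:
        server: https://cluster1.example:6443        # placeholder address
        certificate-authority-data: <base64 CA>      # placeholder
    - name: some2
      cluster:
        server: https://cluster2.example:6443        # placeholder address
        certificate-authority-data: <base64 CA>      # placeholder
    contexts:
    - name: kubernetes-admin@some1
      context:
        cluster: some1
        user: kubernetes-admin-some1                 # renamed locally to avoid the duplicate
    - name: kubernetes-admin@some2
      context:
        cluster: some2
        user: kubernetes-admin-some2                 # renamed locally to avoid the duplicate
    users:
    - name: kubernetes-admin-some1
      user:
        client-certificate-data: <base64 cert>       # placeholder
        client-key-data: <base64 key>                # placeholder
    - name: kubernetes-admin-some2
      user:
        client-certificate-data: <base64 cert>       # placeholder
        client-key-data: <base64 key>                # placeholder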
By default I'm getting the resources of cluster 2 (kubernetes-admin@some2).
kubectl config use-context kubernetes-admin@some1 gives access to the resources of cluster 1 (kubernetes-admin@some1), and kubectl config use-context kubernetes-admin@some2 switches back to the resources of cluster 2 (kubernetes-admin@some2).
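Switching the context back and forth works, but for one-off commands the --context flag queries a cluster without changing the default:

    kubectl config get-contexts                            # list both contexts; * marks the current one
    kubectl --context kubernetes-admin@some1 get nodes     # cluster 1, without switching
    kubectl --context kubernetes-admin@some2 get nodes     # cluster 2, without switching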