Merge two ~/.kube/config files into one

Cluster information:

Kubernetes version:
Host OS: SUSE Server 15 SP4
CNI and version: calico
CRI and version: containerd://1.7.3

Description

I'm trying to merge ~/.kube/config_1 and ~/.kube/config_2 into a single ~/.kube/config, and this is how it looks now:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ###
    server: https://k8s.some1.net:443
  name: some1
- cluster:
    certificate-authority-data: ###
    server: https://k8s.some2.net:443
  name: some2
contexts:
- context:
    cluster: some1
    user: kubernetes-admin
  name: kubernetes-admin@some1
current-context: "kubernetes-admin@some1"
- context:
    cluster: some2
    user: kubernetes-admin
  name: kubernetes-admin@some2
current-context: "kubernetes-admin@some2"
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: ###
    client-key-data: ###
- name: kubernetes-admin
  user:
    client-certificate-data: ###
    client-key-data: ###

Problem:
When I run e.g. kubectl get nodes I get this error:
error: error loading config file "/home/lab/.kube/config": yaml: line 16: did not find expected key

When I remove line 16, current-context: "kubernetes-admin@some1", I get another error:

error: error loading config file "/home/gitlab-runner/.kube/config": error converting *[]NamedAuthInfo into *map[string]*api.AuthInfo: duplicate name "kubernetes-admin" in list: [{kubernetes-admin { [45 45 
... very long output of numbers... 45 45 10]     [] map[]   <nil> <nil> []}}]

My guess is that this happens because both clusters use user: kubernetes-admin, which is nothing I can change on the cluster side.
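
If I'm reading the merge error correctly, the clash can be confirmed by listing the user names defined in each source file, e.g. with a jsonpath query (a quick check, assuming the file paths from above):

$ kubectl config view --kubeconfig ~/.kube/config_1 -o jsonpath='{.users[*].name}'
$ kubectl config view --kubeconfig ~/.kube/config_2 -o jsonpath='{.users[*].name}'
# each of these prints: kubernetes-admin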

Question:
How can I overcome this problem?
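
For reference, one way around the hand-merge (a sketch I have not verified on this setup) is to let kubectl do the merging: KUBECONFIG can list several files, and kubectl config view --flatten writes them out as a single self-contained config. The caveat is the same duplicate-name problem as above: when both files define user kubernetes-admin, the file listed first in KUBECONFIG wins that entry, so the users still need distinct names beforehand.

# Sketch: merge by letting kubectl flatten both files into one.
$ export KUBECONFIG=~/.kube/config_1:~/.kube/config_2
$ kubectl config view --flatten > ~/.kube/config_merged
$ mv ~/.kube/config_merged ~/.kube/config
$ unset KUBECONFIG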

UPDATE

It turns out that the "user" name is just a local label and can be anything, as long as the context refers to it. So I modified the config like so:


apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ###
    server: https://k8s.some1.net:443
  name: some1
- cluster:
    certificate-authority-data: ###
    server: https://k8s.some2.net:443
  name: some2
contexts:
- context:
    cluster: some1
    user: kubernetes-admin-1
  name: kubernetes-admin@some1
- context:
    cluster: some2
    user: kubernetes-admin-2
  name: kubernetes-admin@some2
kind: Config
preferences: {}
users:
- name: kubernetes-admin-1
  user:
    client-certificate-data: ###
    client-key-data: ###
- name: kubernetes-admin-2
  user:
    client-certificate-data: ###
    client-key-data: ###

But now I have a new error:

$ kubectl get node
E0521 12:53:29.547657  321565 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0521 12:53:29.548296  321565 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0521 12:53:29.549911  321565 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0521 12:53:29.550502  321565 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0521 12:53:29.552173  321565 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?

It seems that .kube/config requires exactly one top-level current-context key (it was there in my opening post, but it was placed inside the contexts list and duplicated, which is why it didn't work).
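
A quick way to see that kubectl has no active context (and therefore falls back to localhost:8080) seems to be:

$ kubectl config current-context
# with no current-context in the file this fails with something like:
# error: current-context is not set
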
Here is final version:

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: ###
    server: https://k8s.some1.net:443
  name: some1
- cluster:
    certificate-authority-data: ###
    server: https://k8s.some2.net:443
  name: some2
contexts:
- context:
    cluster: some1
    user: kubernetes-admin-1
  name: kubernetes-admin@some1
- context:
    cluster: some2
    user: kubernetes-admin-2
  name: kubernetes-admin@some2
current-context: "kubernetes-admin@some2" <--- HERE
users:
- name: kubernetes-admin-1
  user:
    client-certificate-data: ###
    client-key-data: ###
- name: kubernetes-admin-2
  user:
    client-certificate-data: ###
    client-key-data: ###
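
To sanity-check the merged file, kubectl config get-contexts should list both contexts, with the active one starred (output roughly like this):

$ kubectl config get-contexts
CURRENT   NAME                     CLUSTER   AUTHINFO             NAMESPACE
          kubernetes-admin@some1   some1     kubernetes-admin-1
*         kubernetes-admin@some2   some2     kubernetes-admin-2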

By default I'm getting resources of cluster 2 (kubernetes-admin@some2).

kubectl config use-context kubernetes-admin@some1 gives access to resources of cluster 1 (kubernetes-admin@some1).
kubectl config use-context kubernetes-admin@some2 gives access back to resources of cluster 2 (kubernetes-admin@some2).
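
Worth noting: use-context simply rewrites the current-context key inside ~/.kube/config, so the hand-edit above is only needed once. For one-off commands against the other cluster, the --context flag also works without touching the file:

$ kubectl --context kubernetes-admin@some1 get nodes
$ kubectl --context kubernetes-admin@some2 get nodes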