Simultaneous kubectl config commands

Cluster information:

Kubernetes version: 1.14.10
Cloud being used: bare-metal
Installation method:
Host OS: darwin
CNI and version:
CRI and version:

I have some automation that works with several different kube namespaces at once. My scripts use non-default kubeconfig files to keep their work separate from each other. This pair of commands demonstrates the issue I'm seeing (one command succeeds, but the other fails because of the lockfile):

```
> ~/bin/kubectl_1.14.10 --kubeconfig /Users/gr200/ktest/env1 config set-cluster default --server='https://myserver.mydomain.com' --insecure-skip-tls-verify=true &; ~/bin/kubectl_1.14.10 --kubeconfig /Users/gr200/ktest/env2 config set-cluster default --server='https://myserver.mydomain.com' --insecure-skip-tls-verify=true
[1] 6482
error: open /Users/gr200/.kube/config.lock: file exists
Cluster "default" set.
[1]  + 6482 done       ~/bin/kubectl_1.14.10 --kubeconfig /Users/gr200/ktest/env1 config  default
```

I'm running two different kubectl config commands at the same time (note the '&' joining the chained commands). I figured that using the --kubeconfig argument would make them completely independent, but that does not seem to be the case. If I remove the '&', both commands succeed. Is there any way to get the two kubectl commands to not share the lockfile location in my home directory, so that I can run them (and any number of them) in parallel? The sketch below shows the kind of fan-out I'm ultimately after.
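
For context, here is a minimal sketch of what I'd like to be able to do (the env names and paths are just examples matching the test above); today the backgrounded invocations race on ~/.kube/config.lock:

```
# Each iteration writes to its own kubeconfig; ideally none of them
# would touch a shared lock under ~/.kube at all.
for env in env1 env2 env3; do
  ~/bin/kubectl_1.14.10 --kubeconfig "/Users/gr200/ktest/${env}" \
    config set-cluster default \
    --server='https://myserver.mydomain.com' \
    --insecure-skip-tls-verify=true &
done
wait
```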