Management of 5,000 edge worker nodes

I would like to ask about building a large scale clustering.
We use Kubespray to build it.

Kubernetes has a way to manage pods using namespaces and labels.
How can I manage a large number of worker nodes?

Is it better to use namespaces?

Each node is on the same network, joined over a VPN from different regions.

Control Plane: cloud on-premise
Network: VPN
Number of Edge: almost 5,000

Kubernetes nodes are cluster-scoped objects, not namespace-scoped: they are not contained in any namespace, so namespaces cannot be used to group them.

You could instead use labels at the node's {.metadata.labels} field. This field holds a map of string keys to string values; both key and value must be strings.

Then use these labels to filter a subset of nodes from the many nodes.
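As a sketch, a hypothetical labeling scheme might key each edge node by region and hardware class and then filter on those labels (all node names, label keys, and values below are made-up examples, and the commands assume a running cluster):

```shell
# Tag a node with its region and hardware class (hypothetical values):
kubectl label nodes edge-node-0001 region=eu-west hw=arm64

# Select only the subset of nodes in one region:
kubectl get nodes -l region=eu-west

# Combine labels to narrow the subset further:
kubectl get nodes -l region=eu-west,hw=arm64
```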

kubectl explain node.metadata.labels

KIND:     Node
FIELD:    labels <map[string]string>

     Map of string keys and values that can be used to organize and categorize
     (scope and select) objects. May match selectors of replication controllers
     and services. More info:

See the “Labels and Selectors” page of the Kubernetes documentation (“Labels and Selectors | Kubernetes”) for how labels can be used to select a subset of nodes.

Assigning labels to nodes can be done imperatively, which is helpful because the commands are easy to run from a script.

The command below can be used to imperatively assign a label to a node.

kubectl label nodes <node-name> <label_01_name>=<label_01_value>
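Because the command is a one-liner, labeling thousands of nodes is easy to script. A minimal sketch, assuming hypothetical node names and a hypothetical region label: the loop below prints the kubectl commands so they can be reviewed first, and the output can be piped to sh to actually apply them.

```shell
#!/bin/sh
# Generate "kubectl label" commands for a batch of edge nodes.
# Node names and the region=eu-west label are hypothetical examples.
# Review the printed commands, then pipe this script's output to "sh" to run them.
for node in edge-node-0001 edge-node-0002 edge-node-0003; do
  echo "kubectl label nodes $node region=eu-west --overwrite"
done
```

In a real fleet the node list would come from an inventory file or from `kubectl get nodes`, rather than being hard-coded.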

The command below can be used to imperatively remove a label from a node (note the trailing dash after the label name).

kubectl label nodes <node-name> <label-name>-

To view the labels assigned to your nodes, you can use the command below.

kubectl get nodes --show-labels | grep some-label-name
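Once nodes are labeled, a label selector is usually more robust than grepping the full listing; you can also surface a label's value as its own column. The region label below is a hypothetical example, and the commands assume a running cluster:

```shell
# List only nodes carrying a given label value:
kubectl get nodes -l region=eu-west

# Show the value of a label as an extra column for every node:
kubectl get nodes -L region
```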

But 5,000 is a very large number of worker nodes for a single Kubernetes cluster.
This quantity sits exactly at the upper limit of no more than 5,000 nodes per cluster recommended by the official Kubernetes documentation; see “Considerations for large clusters” (“Considerations for large clusters | Kubernetes”).

It may be better to segment the nodes by some attribute (for example, by region) and build several smaller clusters, one cluster per group of nodes. No node should be shared between clusters.
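If you do split the fleet into several clusters, a single kubeconfig with one context per cluster keeps day-to-day switching manageable. The context names below are hypothetical, and the commands assume the contexts are already defined in your kubeconfig:

```shell
# List the contexts (clusters) available in your kubeconfig:
kubectl config get-contexts

# Work against one regional cluster, then switch to another:
kubectl config use-context edge-eu-west
kubectl get nodes
kubectl config use-context edge-us-east
```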