A question about managing worker nodes on a different network from a Kubernetes master in the cloud

Suppose the Kubernetes master runs in the cloud (Azure), either as a self-managed instance or as a managed service (AKS).

Assume that there are worker nodes on another network (e.g. an office).

Under that assumption, I would like to ask about the most appropriate way to manage the thousands of worker nodes in the office.

I have several thousand worker nodes (each on a different machine) and would like to be able to restart or terminate the pods they run at will (pod deletion).

I was thinking of using message queues or similar, but I think failures would be hard to handle.

I came up with the following ideas.

Control via a REST API (each worker node runs a pod that exposes an API, and when it receives a request, it deletes the specified pod).

→ Simple way.

Per-node deletion (for example, deleting a specific Deployment on each node).

→ However, this seems difficult when I want to operate on specific pods.

I would like to hear if there is a more convenient way to manage this with kubectl.

Thank you in advance.

kokorononakaniitumo | July 15


> Suppose the Kubernetes master runs in the cloud (Azure), either as a self-managed instance or as a managed service (AKS).
>
> Assume that there are worker nodes on another network (e.g. an office).

Quick note: it is generally discouraged to have nodes "far away" from the control plane. I can't tell if that is what's happening here.

> Under that assumption, I would like to ask about the most appropriate way to manage the thousands of worker nodes in the office.

Are they literally Kubernetes Nodes, or are you using the term more generally?

> I have several thousand worker nodes (each on a different machine) and would like to be able to restart or terminate the pods they run at will (pod deletion).

> I was thinking of using message queues or similar, but I think failures would be hard to handle.

If they are Kubernetes Nodes, it sounds like you are describing `kubectl delete pod xyz`?
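For example, something along these lines (the node, pod, and namespace names are illustrative):

```
# List the pods scheduled on one specific office node
kubectl get pods --all-namespaces --field-selector spec.nodeName=office-node-0042

# Delete one specific pod; if a controller (Deployment, DaemonSet, ...)
# owns it, the pod is recreated, which amounts to a restart
kubectl delete pod xyz -n my-namespace
```

All of this goes through the central API server, so it works the same no matter which network the node sits on, as long as the node can reach the control plane.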

> I came up with the following ideas.

> Control via a REST API (each worker node runs a pod that exposes an API, and when it receives a request, it deletes the specified pod).

That is almost literally the kubelet. It watches the central API server instead of exposing its own endpoint, but otherwise it is pretty close.
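As a rough sketch of that pull-based pattern with plain kubectl (node name again illustrative), an agent could watch the central API server for the pods on its own node instead of accepting pushed commands:

```
# Watch pod changes on one node via the central API server,
# rather than pushing commands to a per-node REST endpoint
kubectl get pods --all-namespaces \
  --field-selector spec.nodeName=office-node-0042 --watch
```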

> → Simple way.

> Per-node deletion (for example, deleting a specific Deployment on each node).
>
> → However, this seems difficult when I want to operate on specific pods.

If I understand you correctly, you really want a DaemonSet and node labels.
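A minimal sketch of that combination, with illustrative node, label, and image names:

```
# Label the office nodes so workloads can target them
kubectl label nodes office-node-0042 location=office

# A DaemonSet that runs one agent pod on every node carrying that label
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: office-agent
spec:
  selector:
    matchLabels:
      app: office-agent
  template:
    metadata:
      labels:
        app: office-agent
    spec:
      nodeSelector:
        location: office
      containers:
      - name: agent
        image: registry.example.com/office-agent:latest  # illustrative
EOF
```

You can then check its health across all labelled nodes with `kubectl rollout status daemonset/office-agent` and `kubectl get pods -l app=office-agent -o wide`, which also gives you a way to spot pods that have failed on particular nodes.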

Thanks for your reply.

The figure below shows three parts: "azure", "office", and "office (a person who has a PC and controls Azure in the cloud)".

> If I understand you correctly, you really want a DaemonSet and node labels.

Is it possible to control all nodes from the office master by using a DaemonSet?

This is a pattern with the master placed in the office.
But with a DaemonSet, is it difficult to detect and deal with errors?

I don’t quite understand what you are trying to do. Why are there 2 masters?

Are you trying to remotely control all of your nodes? From a control plane that you can’t reach by network?

Looking at this, I would guess you want some sort of GitOps model, where the on-prem cluster pulls its config and applies it.
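A hand-rolled sketch of that pull model, with an illustrative repo URL and directory layout (in practice a tool like Flux or Argo CD does this for you):

```
#!/bin/sh
# Runs inside the office cluster, e.g. on a schedule:
# fetch the desired state from git and apply it locally.
set -e

REPO=https://example.com/cluster-config.git   # illustrative
DIR=/var/lib/cluster-config

if [ -d "$DIR/.git" ]; then
  git -C "$DIR" pull --ff-only
else
  git clone --depth 1 "$REPO" "$DIR"
fi

# Apply whatever manifests the repo declares
kubectl apply -f "$DIR/manifests/"
```

The useful property is that the office side only makes outbound connections, so nothing in the cloud ever has to reach into the office network.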

I want to do something like this.

For example, the workers might be SoCs or other IoT devices.
Actually, a Kubernetes master in the office is not necessary.

Basically, if I have nodes on different networks, it is better to use a VPN.

I think using a DaemonSet is good.

IMO, you may want to take a look at options specifically designed for edge computing, like KubeEdge or SuperEdge. There are a lot of other gotchas and considerations when designing a geographically distributed cluster.

Thank you for your reply.
I guess I need to understand the concept of a control plane first.