Wait for WebRTC connections to terminate during deployment rollout

Cluster information:

Kubernetes version: 1.7
Cloud being used: bare-metal
Installation method: kubectl
Host OS: Ubuntu

Hello,

My team runs a WebRTC application that we are looking at migrating to Kubernetes.

Our application consists of websocket servers for signaling, a WebRTC gateway for reading and forwarding WebRTC RTP traffic, and recorders that record the RTP traffic for later viewing.
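
Roughly, we picture each of these as its own Deployment. Just to give responders something concrete, here is a placeholder sketch for the recorder (the names, image, and counts are made up; the signaling servers and gateway would look similar):

```yaml
# Placeholder sketch only - name, image, and replica count are made up.
apiVersion: apps/v1              # extensions/v1beta1 on our 1.7 cluster
kind: Deployment
metadata:
  name: recorder
spec:
  replicas: 10                   # our current pool count
  selector:
    matchLabels:
      app: recorder
  template:
    metadata:
      labels:
        app: recorder
    spec:
      containers:
      - name: recorder
        image: example/recorder:v1   # records RTP; jobs can run for hours
```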

Currently we manage the deployment/assignment of these resources through our own source code, so we can control the situation where we need to deploy a new version but don’t want to shut down currently running connections.

Our current flow is as follows:

  1. Push new versions of our Docker images and deploy them up to our current pool count (see the sketch after this list).
  2. Route all new traffic to the new images and stop any new traffic from going to the old ones.
  3. Wait for the connections/jobs on the old images to complete and then terminate them. This can take hours in the case of the recorder, since we can’t make it stateless and move the connections to a new machine.
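
To be clear about what I mean in steps 1 and 2: as far as I can tell, the closest built-in behaviour is a rolling update that surges the whole new pool up before removing anything old, roughly like the fragment below (placeholder values), but that still doesn't cover the draining part:

```yaml
# Placeholder values. This only covers bringing the new pool up alongside the
# old one (step 1); a plain Service would still spread traffic across both
# versions, and the rollout terminates the old pods as soon as the new ones
# are Ready - which is exactly the part (steps 2 and 3) we don't know how to
# express.
spec:
  replicas: 10                  # current pool count
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 100%            # bring up a full new pool first
      maxUnavailable: 0         # don't take old pods down to make room
```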

While what we have works, we would like to be able to do this in a purely Kubernetes way. However, the only thing I have been able to find for this is a readinessProbe, which doesn’t seem to allow us to wait indefinitely and shut a pod down only once its connections have terminated.
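
For reference, this is the kind of thing I mean (placeholder probe). As I understand it, failing readiness only stops new traffic from reaching the pod through its Service; it doesn't let the pod stick around until its existing connections are finished:

```yaml
readinessProbe:                 # placeholder probe, just to illustrate
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10
```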

Does anyone know of a way that we could accomplish this using Kubernetes Deployments?

Thanks in advance.

Not that I know of with bare Kubernetes, but that sort of functionality can be done programmatically with some of the ingress controller + service mesh solutions (Istio, Linkerd) - at least on the traffic routing side.
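
For example, with Istio the routing part could look something like this - just a sketch with placeholder host/subset names - shifting all new requests to the new version while the old pods keep running:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: recorder
spec:
  host: recorder                # the Kubernetes Service name (placeholder)
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recorder
spec:
  hosts:
  - recorder
  http:
  - route:
    - destination:
        host: recorder
        subset: v2              # send 100% of new requests to the new version
      weight: 100
```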

Thanks for the response!

So that would take care of step 2 by sending traffic just to the new container version?

If that is the case, is there a way to de-register a running pod (or pods) from a ReplicaSet so we can update the matchLabels to a new version ID but keep the old pods running? Maybe then we could just send a signal programmatically to the old ones to shut down once all of their connections have terminated.
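
Something like the following is what I have in mind (just a sketch with placeholder labels; I'm assuming the ReplicaSet would then treat the relabelled pod as no longer its own and spin up a replacement):

```yaml
# Rough idea: change a running pod's labels so the ReplicaSet selector no
# longer matches it, e.g. with something like
#   kubectl label pod <old-pod-name> version=v1-draining --overwrite
# and then signal that pod to exit once its connections are done.
# Fragment of the pod's metadata after the change:
metadata:
  labels:
    app: recorder               # placeholder
    version: v1-draining        # changed so the selector no longer matches
```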

Again, thanks for the input.