We’re thinking about moving our Docker-container-based solution to a Kubernetes cluster. However, we’re unsure how to carry our version-to-version update mechanism, including its database migration, over to Kubernetes. We’re not ready (nor willing) to switch to the generally preferred Deployment with the RollingUpdate strategy.
Our current deployment-with-migration procedure works as follows (it keeps the downtime short enough that the load balancer can simply stall requests and users never notice an outage):
- Start from: ApplicationV1 -> DatabaseV1 (contains Events & ViewModel tables)
- Create a second Database “DatabaseV2”
- Transfer most of the Events from DatabaseV1 to DatabaseV2 and build up new ViewModels (whose schema is incompatible with the previous version) from those events
- Up to this point ApplicationV1 is still running and keeps writing new events to DatabaseV1
- Now comes the downtime: Stop ApplicationV1
- Transfer the remaining (recently added) Events
- Start ApplicationV2
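To make the steps above more concrete: here is roughly how I picture the bulk-copy step as a Kubernetes Job, sketched with the official Python client. The image name, job names, and environment variables are just placeholders for our real migrator, not something that exists yet.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run inside the cluster


def make_migration_job(name: str, mode: str) -> client.V1Job:
    """Build a one-off Job that runs our (hypothetical) event migrator image.

    mode="bulk"    copies the historical events from DatabaseV1 to DatabaseV2
                   and rebuilds the ViewModels while ApplicationV1 keeps running;
    mode="catchup" copies only the events written since the bulk run.
    """
    container = client.V1Container(
        name="event-migrator",
        image="registry.example.com/event-migrator:v2",  # placeholder image
        args=["--mode", mode],
        env=[
            client.V1EnvVar(name="SOURCE_DB", value="databasev1"),
            client.V1EnvVar(name="TARGET_DB", value="databasev2"),
        ],
    )
    template = client.V1PodTemplateSpec(
        spec=client.V1PodSpec(restart_policy="Never", containers=[container])
    )
    return client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1JobSpec(template=template, backoff_limit=2),
    )


# Kick off the bulk copy while ApplicationV1 is still serving traffic.
batch = client.BatchV1Api()
batch.create_namespaced_job(namespace="default", body=make_migration_job("migrate-bulk", "bulk"))
```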
What would be a good way to turn this into a Kubernetes Deployment? We obviously cannot use the RollingUpdate strategy, as we don’t have a backward-compatible database (and we also don’t want to pay that maintainability cost for an application that doesn’t need it).
- Is there a way to support our deployment with out-of-the-box Kubernetes mechanisms?
- Is there a way to write your own deployment strategy (analogous to the built-in “RollingUpdate” or “Recreate”)?
- Or is it better not to use Deployments at all and to perform the steps more or less manually? If so, which Kubernetes concepts should we use? A Job, then taking ApplicationV1 offline, then another Job, then starting up ApplicationV2?
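To illustrate that last option: the kind of “manual” orchestration I have in mind would be a small script along these lines, again with the Python client and placeholder resource names, purely to show the order of operations rather than a finished implementation.

```python
import time
from kubernetes import client, config

# make_migration_job() is the helper from the previous sketch,
# assumed to live in a small module of our own (hypothetical name).
from migration_jobs import make_migration_job

config.load_kube_config()
apps = client.AppsV1Api()
batch = client.BatchV1Api()
NS = "default"  # placeholder namespace


def wait_for_job(name: str) -> None:
    """Block until the named Job reports a successful completion.
    (Failure handling and timeouts omitted for brevity.)"""
    while not batch.read_namespaced_job(name=name, namespace=NS).status.succeeded:
        time.sleep(5)


# 1. Bulk copy while ApplicationV1 is still serving traffic.
batch.create_namespaced_job(namespace=NS, body=make_migration_job("migrate-bulk", "bulk"))
wait_for_job("migrate-bulk")

# 2. Downtime starts: take ApplicationV1 offline by scaling it to zero.
apps.patch_namespaced_deployment_scale(
    name="application-v1", namespace=NS, body={"spec": {"replicas": 0}}
)

# 3. Copy the events that were written to DatabaseV1 during the bulk run.
batch.create_namespaced_job(namespace=NS, body=make_migration_job("migrate-catchup", "catchup"))
wait_for_job("migrate-catchup")

# 4. Bring up ApplicationV2 (assumes its Deployment already exists with 0 replicas).
apps.patch_namespaced_deployment_scale(
    name="application-v2", namespace=NS, body={"spec": {"replicas": 3}}
)
```

The sketch assumes the ApplicationV2 Deployment is already applied with zero replicas, so “starting V2” is just a scale-up; whether that is the right shape for this in Kubernetes is exactly what I’m asking.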
We would gladly hear your professional opinion!
Best regards,
D.R.