Advice needed for migrating an app from on-premise to Kubernetes

Hi all,
I’m a Kubernetes newbie. I started learning about K8S and have read a lot of the AKS and GKE docs recently. I have a lot of questions and look forward to hearing about your experience.
Let’s say my application consists of :

  1. Web service: nginx
  • Some custom configuration in /etc/nginx/nginx.conf
  • Virtual host configs at /etc/nginx/conf.d: websiteA.conf, websiteB.conf
  • Source code at /var/www/html/websiteA, /var/www/html/websiteB
    How can I migrate it to K8S?
    1.1 Package nginx + /var/www/html/websiteA, /var/www/html/websiteB into images?
    Deploy the images to K8S, with CI/CD rebuilding the image on every new release?
    1.2 Deploy an nginx Deployment, mount a PVC at /var/www/html, and sync the websiteA and websiteB source code to the volume somehow?
    How can I customize the nginx config for the deployment?
    1.2.1 Create a ConfigMap from my on-premise /etc/nginx/* and apply it to the nginx Deployment?
    1.2.2 Mount a PVC at /etc/nginx and sync the config to the volume somehow?
  2. php-fpm service: I use 7.2 and 7.4. Do I need to mount the /var/www volume into the php-fpm deployments?

  3. MySQL: I know I should use a managed cloud database, but has anyone deployed MySQL containers in production? Let me know your experience:

  • Mount a PVC at /var/lib/mysql, sync data from on-premise to the volume somehow
  • Customize the MySQL config with a ConfigMap
  • How to back up/restore data? I think I need to include mysql-client and crond in the deployment image.
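For the backup question above, one common pattern is to skip baking mysql-client and crond into the app image and run mysqldump from a Kubernetes CronJob instead. A rough sketch, where the Secret name, service host, and PVC name are all assumptions:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mysql-backup
spec:
  schedule: "0 3 * * *"             # daily at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: mysqldump
            image: mysql:8.0        # the official image ships with the mysql client tools
            command: ["/bin/sh", "-c"]
            args:
            - mysqldump -h mysql -u root -p"$MYSQL_ROOT_PASSWORD" --all-databases
              > /backup/dump-$(date +%F).sql
            env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret       # assumed Secret holding the root password
                  key: password
            volumeMounts:
            - name: backup
              mountPath: /backup
          volumes:
          - name: backup
            persistentVolumeClaim:
              claimName: mysql-backup-pvc  # or push the dump to object storage instead
```

Restores would then be a one-off Job (or kubectl exec) feeding the dump back through the mysql client.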

For the ConfigMap, you can mount it as a volume.
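As a sketch of that for the nginx case (names and domains are illustrative), a ConfigMap built from the on-prem config files can be mounted over /etc/nginx/conf.d:

```yaml
# Could be created from existing files with:
#   kubectl create configmap nginx-vhosts --from-file=conf.d/
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-vhosts
data:
  websiteA.conf: |
    server {
      listen 80;
      server_name websitea.example.com;   # placeholder domain
      root /var/www/html/websiteA;
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels: { app: nginx }
  template:
    metadata:
      labels: { app: nginx }
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        volumeMounts:
        - name: vhosts
          mountPath: /etc/nginx/conf.d    # replaces the directory contents with the ConfigMap keys
      volumes:
      - name: vhosts
        configMap:
          name: nginx-vhosts
```

Note that nginx won’t re-read a changed config on its own; the ConfigMap files update in the pod, but you still need a reload or a rolling restart to pick them up.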

Regarding storage… in a perfect world, you would have a StorageClass that could just do ReadWriteMany right out of the gate. I have been underwhelmed in this particular area though. Knowledge-wise, here is where I’m at. These things are worth knowing about and checking out, despite anything negative I’m about to say about them.

  • Using NFS just leaves you relying on a single point of failure.
  • rook-ceph would be wonderful if it didn’t have the overhead of you managing it.
  • has proven to me time and time again that it’s not reliable when nodes die in testing. Also, its ReadWriteMany solution is just NFS.
  • Perhaps I’m wrong, but the glusterfs operator with heketi seems abandoned.

To work around these storage woes, I have banned the use of PVs and PVCs in my labs. Writing files to object storage (like S3 buckets) is the way to go for anything that gets uploaded. My containers include my application code in them.
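Baking the code in (option 1.1 above) looks roughly like this; the base image tag and directory layout are assumptions:

```dockerfile
FROM nginx:1.25
# custom main config and virtual hosts
COPY nginx.conf /etc/nginx/nginx.conf
COPY conf.d/ /etc/nginx/conf.d/
# application code baked into the image; CI/CD rebuilds this on every release
COPY websiteA/ /var/www/html/websiteA/
COPY websiteB/ /var/www/html/websiteB/
```

Each release then produces a new immutable image tag, and the Deployment just rolls to it, with no volume syncing involved.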

For CI/CD I like Argo. It has multiple components; I was actually confused at first about why people were recommending a CD tool without CI, until I happened to look at all the Argo repositories on GitHub.

There’s also Tekton, but it’s just CI. There’s Flux and Jenkins-X, but they are highly opinionated tools that don’t work everywhere (this makes them lack portability, which is bad imo).
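For a sense of the Argo CD side, pointing it at a Git repo is a single Application resource; the repo URL, path, and namespace here are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.example.com/me/my-app.git  # placeholder repo
    targetRevision: main
    path: k8s/                 # directory of manifests inside the repo
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:                 # keep the cluster in sync with Git automatically
      prune: true
      selfHeal: true
```

This fits the GitLab-based workflow mentioned later in the thread: CI builds and pushes the image, then a manifest change in Git triggers the deploy.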

As for MySQL… maybe check out the TiDB operator or Vitess.


Thank you very much protosam,
So for the ConfigMap, I can create one from /etc/nginx/* and mount it into the nginx Deployment as a volume, and mounted ConfigMaps are updated automatically. That’s great.
I’m using on-premise GitLab for source version control and CI/CD; currently I rsync code from the repository, so I care about customizing the nginx virtual hosts with a ConfigMap.
Including the code in the container image is convenient to deploy because you don’t need to care about syncing it alongside the nginx config, and it’s more in line with microservices, right?

Deploying K8s on-premise/bare metal is a real challenge regarding storage. I heard about the glusterfs operator with heketi during my research, thanks for your update.
I’m looking to try an AKS trial to get used to K8s.

About MySQL, what do you think between:

  • Setting up a MySQL cluster using solutions like the TiDB operator or Vitess: horizontal scaling, increasing the replica count of the pods
  • Setting up standalone MySQL: vertical scaling, modifying the resources (like CPU or RAM) of the pod
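For the standalone/vertical-scaling option above, scaling comes down to editing the container’s resource requests and limits and letting the pod restart; a fragment with illustrative values:

```yaml
# Fragment of a standalone MySQL StatefulSet pod spec (values illustrative)
containers:
- name: mysql
  image: mysql:8.0
  resources:
    requests:           # what the scheduler reserves for the pod
      cpu: "1"
      memory: 2Gi
    limits:             # hard ceiling; raise these to scale vertically
      cpu: "2"
      memory: 4Gi
```

The trade-off is that vertical scaling is capped by the largest node you have, while the cluster option scales out but adds operational complexity.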

I don’t use MySQL, so I’m not really sure about Vitess; the amount of work to use it requires more time than I wanted to commit.

However, I did test the TiDB operator because it was really simple. TiDB used about 500 MB of RAM per replica without doing anything, which was unacceptable to me because I like to run lean. It might be fine for real workloads though.

My best advice is to test things out and see how it goes. You can check resource usage with kubectl top nodes and kubectl top pods -A.


Just following up on this. I’ve been on a bit of a StatefulSets binge lately, and I wrote a manifest that deploys a MariaDB 10.5 cluster in a multi-master configuration that might interest you.

To be upfront though, I haven’t tested this for reliability yet. I’ve done some very basic testing and it seems to recover fine.
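This isn’t the actual manifest from the post, but the general shape of such a deployment, a headless Service plus a StatefulSet with per-pod storage, looks something like this (cluster-bootstrap/Galera settings omitted, sizes illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mariadb
spec:
  clusterIP: None            # headless: gives each pod a stable DNS name
  selector:
    app: mariadb
  ports:
  - port: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mariadb
spec:
  serviceName: mariadb       # ties pod DNS names to the headless Service
  replicas: 3
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
      - name: mariadb
        image: mariadb:10.5
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:      # one PVC per replica, so each node keeps its own data
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

The volumeClaimTemplates section is what makes StatefulSets a better fit for databases than a plain Deployment: each replica gets its own ReadWriteOnce volume instead of all pods fighting over one shared PVC.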