DB cluster vs containerized MySQL

Hi, I’m learning best practices here. I’ve researched the topic of Kubernetes and databases, and the recent additions around persistence, quite a lot, but I’m still undecided on how to tackle databases. Until a few months ago the only real option for production environments was an independent database cluster, but it seems that’s no longer the case… My use case is centered on serving a PHP / MySQL app.
In your experience, should I build my K8S setup based on a MySQL cluster or use containerized DBs?
If anyone knows any good resources that can help me make this decision, please share! TY!!!

Hey, I wrote this about running PostgreSQL on Kubernetes, but it applies equally to MySQL.

Implementation is different, of course, but the tradeoffs are the same.



Wonderful, thank you for sharing!

Can you help me understand the architecture/topology of the newer “Kubernetes-native” database management implementations?

Unless I got the picture wrong, there are three ways to run a DB on K8s:

  1. One DB engine in a Pod, placed on a given, known node and accessed over the network, much in the RDS style. A couple of slaves help with reads and stand by in case of master failure, while the K8s DB “driver” routes traffic to the service (or hands the app a new endpoint) and knows which DB instance is in charge at any given moment. So basically a classical DB cluster, but inside K8s and with the most basic Kubernetes intelligence added to it…

  2. A multi-master DB cluster is spread across every node and accessed locally (over the local virtual network). If one DB Pod fails, the apps on that node are redirected to a DB Pod on another node until the failed Pod is resurrected.

  3. A database is sharded across all nodes, and each app Pod is directed to the node and DB Pod that hold the data for that particular app. I guess you could then pin certain apps to run preferentially on certain nodes, so that each app stays on the same node as its DB shard.

Case #1 requires three DB engine Pods, maybe a bigger master and two smaller slaves. Not very costly in storage, not very costly in RAM, and quite low CPU consumption; but slow access to the DB from the apps, as most connections would go over the network.
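
For reference, this is roughly how I picture case #1 being deployed: a StatefulSet behind a headless Service, so each DB pod gets a stable network identity. Just a minimal sketch; all names and sizes are made up, and actual replication and failover would come from an operator on top of this:

```yaml
# Minimal sketch of case #1: a classical master + slaves MySQL
# cluster inside K8s. Names and sizes are illustrative only.
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None        # headless: gives each pod a stable DNS name
  selector:
    app: mysql
  ports:
    - port: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  replicas: 3            # 1 master + 2 slaves; failover handled by an operator
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:          # hypothetical Secret
                  name: mysql-secret
                  key: password
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:  # one dedicated persistent volume per DB pod
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```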

Case #2 requires as many DB Pods as there are nodes in the cluster. So let’s say we have five worker nodes: that means five copies of all the DBs that any app in the cluster requires. Not very good storage economy, but faster reads and writes, with lots of network usage in the background to keep the DB cluster in sync. And a small probability of messing up writes if too many copies of an app are accessing different masters at the same time?
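
I imagine case #2 maps onto a DaemonSet, which runs exactly one DB Pod per worker node. A sketch, with a Galera-style multi-master image as a placeholder (storage and cluster bootstrap configuration omitted):

```yaml
# Minimal sketch of case #2: one multi-master DB pod on every node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: mysql-mm
spec:
  selector:
    matchLabels:
      app: mysql-mm
  template:
    metadata:
      labels:
        app: mysql-mm
    spec:
      containers:
        - name: mysql
          # placeholder image; any Galera-based multi-master MySQL would do
          image: percona/percona-xtradb-cluster:8.0
          ports:
            - containerPort: 3306
          # a real deployment also needs volumes and wsrep/cluster
          # bootstrap settings, omitted here for brevity
```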

Case #3: very good read and write speeds, and quite good storage economy, provided you are able to match the nodes the apps run on with the nodes their shard is on (is this even possible?).
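
(On the “is this even possible?” part: from what I’ve read, Kubernetes pod affinity seems to be the mechanism for it. Something like this in the app’s Deployment, with every label and name here being hypothetical:)

```yaml
# Sketch: schedule the app onto whatever node runs its DB shard's pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tracking-app             # hypothetical app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tracking-app
  template:
    metadata:
      labels:
        app: tracking-app
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: mysql-shard   # hypothetical label on the shard's DB pod
                  shard: tracking    # hypothetical shard identifier
              topologyKey: kubernetes.io/hostname   # "same node as"
      containers:
        - name: php
          image: php:8.2-fpm         # illustrative
```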

Provided my examples are a mostly accurate representation of how it actually works: for cases 1 and 2, every DB pod would contain all the databases required by any of the apps in the cluster, while the sharding approach would have each database writable at one shard and, I guess, copied to two other shards for HA purposes.

In your article you mention the network overhead bigger databases can suffer from. You’ve probably noticed I’m a performance freak: even when it’s not required, I like to keep my DBs as close to the apps as I can, hence my worries about where the DB sits in relation to the apps. Am I worrying unnecessarily, or is this a valid concern? (I work with a PHP app connecting to MySQL, usually MariaDB or Percona.)

Average DB size is 3.5 GB, but there are a couple of apps that are tremendously write-intensive (user tracking).

Questions:

  • Did I get the existing architectures right?
  • Are there any other architectures I am not aware of?
  • Which of the architectures do you think is best for my case?

As an expert in both databases and K8s: do you think it would be a good approach to run some lightweight DB engine (in-memory, maybe, or a cache container within the same Pod) inside each app’s Pod, loaded with only the database that app uses (instead of all the databases in the cluster)?
Maybe with a regular type #1 cluster as the backend?
So each app would have the data it needs within the very pod it resides in?
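
Roughly this shape of pod, in other words. A sketch only: ProxySQL here is just one example of a local query-caching layer in front of the backend cluster, and every name is made up:

```yaml
# Sketch: PHP app plus a local caching/routing sidecar in one pod,
# so queries leave the pod only on cache misses and writes.
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: php
      image: php:8.2-fpm       # illustrative app image
      env:
        - name: DB_HOST
          value: "127.0.0.1"   # the app talks to the sidecar over loopback
        - name: DB_PORT
          value: "6033"        # ProxySQL's default client port
    - name: db-cache
      image: proxysql/proxysql:2.5   # example sidecar, configured (not shown)
                                     # to route and cache queries against the
                                     # type-1 backend cluster's Service
```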

Thank you in advance for reading! (and hopefully for answering back 🙂)


As I say in my blog post, if you’re looking to optimize DB+HW performance, then hosting on Kubernetes is really not for you.

Yes, you could easily create a pod which was App container + DB backend container, a “macroservice” as it were. Then the app and the DB would stay together. However, that means that you’ve sacrificed all ability to scale the app by replicating it over your infrastructure; you will always have 1 App to 1 DB, and if the App is your performance bottleneck (as it usually is), then oh well.

This doesn’t get better with MM databases, because they still have to communicate over the Kubernetes virtual network and use its discovery services to locate the other nodes. So putting individual nodes of a full sync MM database in each app pod would help with locality of reads, but writes would still be higher-latency than bare metal. And upgrades would be awkward.

Most people aren’t comparing bare-metal performance, though. They’re comparing against RDS, which is already a network hop or two away, and already depends on discovery services. There the performance is quite comparable.

Your idea of having a sharded database, with horizontal sharding tied to each individual app instance and effectively local writes (because you’re always accessing the local shard), is one of those ideas that many people go through when trying to scale their MySQL-based application, either on Kubernetes or off it. It has the same limitation mentioned above, that is, a 1:1 mapping between PHP app and MySQL node, and thus puts you in the problematic position of having to shard the database more finely in order to get more cores running PHP. Further, the sharding would need to be managed by the application, rather than being able to use any generic system, which means you need to become a database engineer to make it work.


I understand that the absolute best performance is achieved on bare metal with a non-containerized setup, of course. I’m just trying to figure out a way to get as near to that performance as possible while taking advantage of the many benefits (or at least some of them) that K8s brings.

I didn’t mean one app + DB in the very same pod, but two pods on the same node. Sorry if I didn’t make that clear.

You can scale the app horizontally as soon as you can “scale” (shard) the DB horizontally too, right?

I’m now trying to understand whether replication at the storage level would work as expected at the database level.

@jberkus If I replicate the block storage that one DB is using to sync with another DB, what sorts of problems will I encounter? Is that even possible?

Replicating block storage is very possible; it’s Portworx’s entire business line.
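
For reference, with Portworx you ask for replicated volumes through a StorageClass, something along these lines (a sketch only; check their docs for the exact parameters):

```yaml
# Sketch of a Portworx StorageClass keeping 3 replicas of each volume.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-db-repl3
provisioner: pxd.portworx.com   # Portworx CSI driver
parameters:
  repl: "3"                     # 3 synchronous copies of every block
  io_profile: "db"              # I/O profile tuned for databases
```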

However, it’s also the highest-overhead, highest-latency way to replicate a database. First, block storage has to replicate every bit that changes on disk, including things like disk heap sort space which the database regards as ephemeral (not sure about MySQL, but in Postgres this can be 50-70% of all write activity for some databases). Second, on failover the database must go through crash recovery when you bring the new pod/container up – a “clean failover”, even when you have a planned outage, isn’t possible. At best, crash recovery takes seconds to minutes to complete. At worst, you discover a new crash recovery bug – since it’s difficult for database engineers to exhaustively test all crash recovery paths – and your database is corrupted.

Aside from the above, it should work fine 😉

Most db/web apps have an optimal ratio of application server to DB server of between 3 and 12 to one, depending on the language, framework, and application. It’s pretty rare for a 1:1 ratio to make sense, except maybe in pure analytics work. Yes, you can run multiple PHP pods on the same machine as a DB shard … but those are just sharing the same hardware resources, so you’re not getting extra performance out of it.

“Aside from the above, it should work fine 😉”
lol

Thank you @jberkus

So if I’m understanding it right: for most stateful apps, and certainly for a regular PHP app (such as WP, SugarCRM, etc.), it is still not a good idea (except in very specific cases) to unify your app and your DB within the same Kubernetes cluster, unless you’re willing to sacrifice a lot on both sides, app and DB.

Let me try to sum it all up:

  • Having a separate, non-K8s DB cluster (a classical DB setup with one master and two slaves) will execute queries substantially faster, especially if you can tune the amount of memory and the type of drives in use, and will probably be much simpler to manage at scale.

  • The only reasonable way to have DB and app on the same node, all within the same K8s cluster, would be replication at the storage layer, which is a newer technology that is very hard on the network and possibly hard on the drives too.

  • In a unified app + DB K8s cluster, you need to run your app on the same node as the master DB, hence limiting a given instance of an app to one single node, which in turn makes using K8s kind of pointless. If you run your app pods where the slaves (replicas) of your DB are, writes happen over the network anyway, defeating the purpose of it all.

  • When replicating storage, you multiply the amount of storage space needed, which could mean huge SSDs (expensive) or spinning HDDs (poor performance for DB writes).

  • The clustered storage layer could become a bottleneck or slow down the entire cluster because of the extra network communication needed to operate it (unless you can have a dedicated network interface and a separate network for the storage layer only).

  • Sharding and replicating a DB within Kubernetes share almost the same benefits/caveats, though sharding would impose even more restrictive constraints on the placement of app instances on nodes.

Conclusion: the advantage of locally accessing the DB in a single-K8s-cluster setup (namely, a few milliseconds saved on every query because no network hop is required) would quickly be outweighed by a more tunable, dedicated DB node or DB cluster outside Kubernetes.
Not to forget the added complexity of setting up and managing the storage layer.

Did I get it all right?

Yes. There’s also the fact that, if you don’t have the DB in the same pod as the app, there’s no way to bypass Kubernetes overlay networking, even if the two pods happen to be on the same machine.

These links are not working anymore, can anyone post them again?!

Links should be fine. That blog is on Google Blogger; do you live somewhere that Google is blocked for you?