Thoughts on using Kubernetes ConfigMaps as a datastore

First, I just want to emphasize that I know ConfigMaps are meant to store configuration, that this is abusing Kubernetes functionality, and I feel this could backfire. But this was suggested on my team, and I just want to check with people who are actually Kubernetes experts.

Basically, for a new project we need a place to store the application’s state, and someone proposed that we could use ConfigMaps as if they were a NoSQL data store like DynamoDB. The benefit is obvious: it’s already available, there’s nothing to set up or maintain, and the interface is consistent across AWS, Azure, and GCP.
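To make the idea concrete, here is a rough sketch of what I imagine the pattern would look like using client-go (this is my own assumption about the proposal; the ConfigMap name "app-state" and the key "orders" are made up):

```go
// Sketch only: treating a ConfigMap as a key/value record store via the
// Kubernetes API. The names "app-state" and "orders" are hypothetical.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	cms := client.CoreV1().ConfigMaps("default")

	// "Write": read-modify-update the ConfigMap like a record in a KV store.
	// Note that Update fails with a Conflict error if someone else changed the
	// object in between (optimistic concurrency via resourceVersion).
	cm, err := cms.Get(context.TODO(), "app-state", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["orders"] = `{"last_id": 42}`
	if _, err := cms.Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// "Read": every read is another API call against the apiserver (and etcd).
	cm, err = cms.Get(context.TODO(), "app-state", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(cm.Data["orders"])
}
```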

I have some hang-ups about this idea. I’m not a Kubernetes expert, but the concerns I can think of are:

  • it’s clearly not using it the way it was meant to be used
  • ConfigMaps are backed by etcd, which is an important part of Kubernetes, so heavy activity could slow it down and, at worst, kill the service
  • as I understand it, with this usage the data couldn’t be read from mounts (the mounted files only change when the application is restarted), so the API would have to be used for reading and writing, and the API endpoint would also have to be exposed to the application

One argument is that there won’t be much activity on it, but you never know, as requirements and the application change. Once the store is in use, most devs won’t think about its storage limitations each time they make a change.

Is this usage really a bad idea or am I overthinking it?

If this is a bad idea, are there other things to be wary of in addition to what I wrote? I want to have something more convincing than “this was not designed to be used that way”.

  • it’s clearly not using it the way it was meant to be used

This is the least important consideration, IMO.

  • ConfigMaps are backed by etcd, which is an important part of Kubernetes, so heavy activity could slow it down and, at worst, kill the service

Let me say it more starkly: this could, in theory, bring down your whole cluster. The control plane for ConfigMaps is the same as the control plane for Pods and Services. Realistically, it won’t crash the whole thing unless you are very abusive of this pattern.

The key word here is “pattern”. If you do this, other people in your company WILL see it and assume it is OK, and they may not have the same awareness of the tradeoffs that you do. Or the requirements might (will) evolve. What was 0.1 QPS in 2024 can easily become 5 QPS in 2025 and 50 QPS in 2026.

Will 50 QPS crash Kubernetes? I don’t THINK so, but it’s not so one-dimensional. Load on the apiserver is driven by QPS (writes in particular), the number of watchers, and the size of the resource (there are hard limits around the 1MiB range, but performance drops as size increases).

  • as I understand it, with this usage the data couldn’t be read from mounts (the mounted files only change when the application is restarted), so the API would have to be used for reading and writing, and the API endpoint would also have to be exposed to the application

Not sure what you mean - it should be readable via a mount, but at the end of the day that is still using an API operation to read the data periodically or to watch (I forget how it was implemented).
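For illustration, here is a rough sketch (the ConfigMap name is made up) of what a watch on such a ConfigMap looks like with client-go. Whether your app does this itself or the kubelet does something similar on behalf of a mount, it is the same class of API traffic against the apiserver:

```go
// Sketch only: watching a single ConfigMap by name instead of polling it.
// Each event delivers the full object, so large ConfigMaps mean large events.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Open a watch restricted to one ConfigMap ("app-state" is hypothetical).
	w, err := client.CoreV1().ConfigMaps("default").Watch(context.TODO(),
		metav1.ListOptions{FieldSelector: "metadata.name=app-state"})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		cm, ok := ev.Object.(*corev1.ConfigMap)
		if !ok {
			continue // e.g. a Status object on watch errors
		}
		fmt.Println("ConfigMap changed:", cm.Data)
	}
}
```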

Is this usage really a bad idea or am I overthinking it?

IMO, it’s not an awesome precedent. Realistically, I know that people do this, and our apiserver is pretty resilient.

I want to have something more convincing than “this was not designed to be used that way”.

Hopefully I gave you some arguments. :slight_smile:

I mean, when you define the configuration, it is available to the container under a special mount location. But when you change the ConfigMap while the application is running, the data under the mount will be stale (as I understand it) unless the container is restarted. Because of that, both reads and writes would have to be done via API calls.

I mean, when you define the configuration, it is available to the container under a special mount location.

Yes, we have a “ConfigMap” volume type and a “Projected” one, which includes ConfigMap and more.
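As a hedged sketch (expressed in client-go types rather than a YAML Pod spec, and with made-up names), the two flavors look roughly like this:

```go
// Sketch only: the two volume flavors mentioned above, as corev1 types.
// Volume and ConfigMap names ("app-state", "combined") are hypothetical.
package example

import corev1 "k8s.io/api/core/v1"

var (
	// Plain "configMap" volume: projects the keys of one ConfigMap as files.
	cmVolume = corev1.Volume{
		Name: "app-state",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "app-state"},
			},
		},
	}

	// "projected" volume: can combine ConfigMaps, Secrets, and more into one mount.
	projectedVolume = corev1.Volume{
		Name: "combined",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "app-state"},
					}},
				},
			},
		},
	}
)
```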

But when you change the ConfigMap while the application is running, the data under the mount will be stale

Actually, not true. We can (and will) live-update it, and you have basically no control over the latency (between the API operation and the pod seeing the update) or how fast it rolls out to your whole fleet. This makes in-place ConfigMap updates sort of terrifying, unless you do something smart with it, like storing two sets of data and having your apps choose which one to use in a sane manner. (It all seemed like a good idea at the time.)

TL;DR: if you mount a ConfigMap as a volume AND you do an update to it, 100% of your workloads will be updated within a few seconds, and you have no control over it.
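To sketch the “two sets of data” idea mentioned above (this is just one way to do it, and every name and path here is made up): the ConfigMap is mounted as a volume with keys “blue”, “green”, and “active”; new data is pushed to the inactive key first, and a separate, later update flips the “active” pointer, which is what the app actually acts on:

```go
// Sketch only: an app consuming a live-updated ConfigMap mount by following
// an "active" pointer key. Mount path and key names are hypothetical.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"
)

const mountDir = "/etc/app-state" // hypothetical configMap volume mount path

// loadActive re-reads the pointer file, then the data set it names ("blue" or "green").
func loadActive() (string, error) {
	active, err := os.ReadFile(filepath.Join(mountDir, "active"))
	if err != nil {
		return "", err
	}
	data, err := os.ReadFile(filepath.Join(mountDir, strings.TrimSpace(string(active))))
	if err != nil {
		return "", err
	}
	return string(data), nil
}

func main() {
	// The kubelet live-updates the mounted files on its own schedule, so the
	// app re-reads them periodically instead of caching them at startup.
	for {
		if data, err := loadActive(); err == nil {
			fmt.Println("using data set:", data)
		}
		time.Sleep(30 * time.Second)
	}
}
```

The point of the split is that writing the new data and flipping the pointer are two separate API updates, so you regain at least some control over when a change actually takes effect, even though you still can’t control how fast the files themselves propagate.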