ClickHouse cluster in Kubernetes

I have deployed a ClickHouse cluster using an operator with 3 shards and 3 replicas. It is up and running. However, when I perform schema creation, at some point it throws the following error:

Example:
Error occurred during SQL script execution
Reason:
SQL Error [573] [07000]: Code: 573. DB::ErrnoException: Cannot open epoll descriptor: , errno: 24, strerror: Too many open files. (EPOLL_ERROR) (version 24.4.4.105 (official build))
, server ClickHouseNode [uri=http://10.10.50.61:81/default, options={socket_timeout=75000,use_server_time_zone=false,use_time_zone=false}]@651464263
The root cause is a “too many open files” limit (errno 24) being hit by the server. I have tried using an init container and a ConfigMap-mounted script to raise the ulimit inside the pods. Unfortunately, neither approach resolved the issue.

Can someone please help me resolve the “too many open files” issue?

The issue you’re facing is that your init container’s attempts to increase the ulimit don’t affect the main ClickHouse container. Init containers run to completion before the main container starts, and a ulimit change applies only to the process that makes it; it is not inherited by other containers in the pod.
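You can confirm what limits the server actually runs with by inspecting the main container’s process. The pod and container names below are placeholders; substitute the ones from your deployment:

```shell
# Soft/hard nofile limits of clickhouse-server (PID 1 inside the container)
kubectl exec chi-demo-main-0-0-0 -c clickhouse -- \
  sh -c 'grep "open files" /proc/1/limits'

# The shell's own view of the soft limit inside the container
kubectl exec chi-demo-main-0-0-0 -c clickhouse -- sh -c 'ulimit -n'
```

If the hard limit shown here is low (e.g. 1024 or 4096), no amount of in-pod scripting will fix it; the limit has to be raised where it originates.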

The Correct Fix

Kubernetes has no ulimit field in the pod spec (securityContext does not expose ulimits), so a container inherits its file-descriptor limits from the container runtime on the node. ClickHouse already tries to raise its soft limit toward the hard limit at startup, so a persistent errno 24 usually means the hard limit itself is too low. Raise the runtime default on each node. For containerd under systemd, add a drop-in and restart the service:

ini

# /etc/systemd/system/containerd.service.d/limits.conf
[Service]
LimitNOFILE=1048576

Then run systemctl daemon-reload && systemctl restart containerd, and recreate the ClickHouse pods so they pick up the new limit. On Docker-based nodes the equivalent is the default-ulimits key in /etc/docker/daemon.json.

If the hard limit is already high and only the server’s own ceiling is low, you can instead raise the max_open_files server setting in ClickHouse’s config. If using the Altinity ClickHouse Operator, server settings like this belong under spec.configuration.settings in your ClickHouseInstallation CR, not in the pod template.
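As a sketch of the settings route, here is a minimal ClickHouseInstallation for a 3×3 cluster. The names (demo, main) are placeholders, and the assumption that spec.configuration.settings maps straight into the server config is mine; verify against your operator version:

```yaml
apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: demo
spec:
  configuration:
    settings:
      # Server-level setting; ClickHouse raises its soft nofile
      # limit toward this value, capped by the hard limit.
      max_open_files: 262144
    clusters:
      - name: main
        layout:
          shardsCount: 3
          replicasCount: 3
```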

Additional Checks

  • Verify node‑level limits with sysctl fs.file-max (default ~1M is usually fine)

  • Set max_server_memory_usage in ClickHouse config if memory pressure is contributing
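The node-level side of these checks can be done directly on a worker node. This assumes containerd managed by systemd; adjust the unit name for your runtime:

```shell
# Hard cap the runtime process (and thus its containers) inherits
systemctl show containerd -p LimitNOFILE

# Kernel-wide ceilings on open file descriptors
sysctl fs.file-max fs.nr_open
```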

How EaseCloud Can Help

Running stateful workloads like ClickHouse on Kubernetes requires careful tuning. At EaseCloud, we specialize in exactly this:

  • Kubernetes Consulting – Deploy and optimize production databases on K8s

  • Observability & Monitoring – Catch resource bottlenecks before they cause failures