Kubectl execution in pod getting OOMKilled

Hi everyone, I am facing issues running kubectl from within a pod/microservice when creating ConfigMaps/Secrets in large numbers.

As part of CM/Secret creation, a kubectl create command is executed with --dry-run=client to render a valid manifest, which is then piped to kubectl apply -n "${NAMESPACE}" -f -.
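For reference, the creation step looks roughly like the sketch below (the ConfigMap name, literals, and NAMESPACE default are placeholders, not my actual values):

```shell
#!/bin/sh
# Rough sketch of the creation step described above. The ConfigMap name,
# the --from-literal values, and the NAMESPACE default are placeholders.
NAMESPACE="${NAMESPACE:-default}"

# Render the ConfigMap client-side, then pipe the manifest to apply.
create_cm() {
  name="$1"; shift
  kubectl create configmap "$name" --dry-run=client -o yaml "$@" \
    | kubectl apply -n "${NAMESPACE}" -f -
}

# Example invocation (requires cluster access):
# create_cm oom-test-configmap --from-literal v=v --from-literal vv=vv
```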

The above works fine when the number of CMs/Secrets is small (15-20), but once it goes above 80, the command starts failing with exit code 137, i.e. it gets OOMKilled.

By default, the CPU and memory limits are set to 200m and 256Mi respectively, but with a large number of CMs (say around 100), even limits of 1 CPU and 1Gi do not help (limits were increased only for testing; I can't dedicate that many resources to this microservice due to resource constraints).

I have also tried setting GOMEMLIMIT, but the command still gets OOMKilled.
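For context, the relevant part of the container spec looks roughly like this (a sketch; the GOMEMLIMIT value is illustrative, and as far as I know GOMEMLIMIT is only honored by kubectl binaries built with Go 1.19+, i.e. the v1.26-era binary, not v1.23):

```yaml
# Sketch of the relevant container settings (limits are the defaults
# mentioned above; the env value is illustrative).
resources:
  limits:
    cpu: 200m
    memory: 256Mi
env:
  - name: GOMEMLIMIT   # honored by Go 1.19+ binaries only
    value: "200MiB"    # keep headroom below the 256Mi cgroup limit
```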

Verbose execution logs:

oom-test:/$ cat /sys/fs/cgroup/cpu/cpuacct.usage
9745888999
oom-test:/$ cat /sys/fs/cgroup/memory/memory.usage_in_bytes
8040448
oom-test:/$ echo $GOMEMLIMIT
7040448
oom-test:/$ cat /tmp/cm
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: oom_test
    name: oom_test
    template: oom_test
  name: oom-test-configmap
data:
  v: v
  vv: vv
  vvv: vvv
  vvvv: vvvv
oom-test:/$ kubectl apply -f /tmp/cm -v=9
I0829 09:05:22.160016    1002 merged_client_builder.go:121] Using in-cluster configuration
I0829 09:05:22.160308    1002 merged_client_builder.go:121] Using in-cluster configuration
I0829 09:05:22.171129    1002 round_trippers.go:466] curl -v -XGET  -H "Accept: application/com.github.proto-openapi.spec.v2@v1.0+protobuf" -H "User-Agent: kubectl/v1.23.6 (linux/amd64) kubernetes/ad33385" -H "Authorization: Bearer <masked>" 'https://172.30.0.1:443/openapi/v2?timeout=32s'
I0829 09:05:22.179845    1002 round_trippers.go:510] HTTP Trace: Dial to tcp:172.30.0.1:443 succeed
I0829 09:05:22.248875    1002 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 1 ms TLSHandshake 11 ms ServerProcessing 8 ms Duration 77 ms
I0829 09:05:22.248941    1002 round_trippers.go:577] Response Headers:
I0829 09:05:22.248967    1002 round_trippers.go:580]     Vary: Accept-Encoding
I0829 09:05:22.248994    1002 round_trippers.go:580]     Vary: Accept
I0829 09:05:22.249074    1002 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4cxxxx8e-axx6-4xx7-8xx3-5f6xxxxxxx67
I0829 09:05:22.249184    1002 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 03xxxxde-axx5-4xxd-axxe-faxxxxxxxx19
I0829 09:05:22.249332    1002 round_trippers.go:580]     Accept-Ranges: bytes
I0829 09:05:22.249437    1002 round_trippers.go:580]     Audit-Id: bfxxxxdb-exx3-4xx3-8xx8-27xxxxxxxx0b
I0829 09:05:22.249774    1002 round_trippers.go:580]     Cache-Control: no-cache, private
I0829 09:05:22.250041    1002 round_trippers.go:580]     Content-Type: application/octet-stream
I0829 09:05:22.250205    1002 round_trippers.go:580]     Date: Tue, 29 Aug 2023 09:05:22 GMT
I0829 09:05:22.250229    1002 round_trippers.go:580]     Etag: "088xxxxxxxx1EC5E63F18F5CB6C9xxxxxxxx277E4DC460156789189DF0409208180B9205DB3D27636B47F2D339F87A7B5A801F8549BDFD638xxxxxxxx3779C16"
I0829 09:05:22.250344    1002 round_trippers.go:580]     Last-Modified: Mon, 28 Aug 2023 18:58:37 GMT
I0829 09:05:22.250370    1002 round_trippers.go:580]     X-Varied-Accept: application/com.github.proto-openapi.spec.v2@v1.0+protobuf
Killed
oom-test:/$ echo $?
137
oom-test:/$

Kindly suggest how I can get this to work within the resource limits.

Sorry to disturb you in your busy schedule, but I am tagging people who seem to be active in the community for some help.

@mrbobbytables @thockin

Requesting others too to chime in if they have any solution to it.

@feloy @macintoshprime @Yujong @srose

Requesting you to share your inputs (if any).

Even with --timeout='60s' and GOMEMLIMIT set, the issue remains the same, but kubectl get configmap works:

oom-test:/$ echo $GOMEMLIMIT
576MiB
oom-test:/$
oom-test:/$ kubectl create configmap oom-test-configmap -n <namespace> --from-literal v=v --from-literal vv=vv --from-literal vvv=vvv --from-literal vvvv=vvvv -o yaml --dry-run=client | kubectl apply -n <namespace> --timeout='60s' -v=9 -f -
I0912 09:08:57.993314    1199 merged_client_builder.go:121] Using in-cluster configuration
I0912 09:08:58.003315    1199 merged_client_builder.go:121] Using in-cluster configuration
I0912 09:08:58.083545    1199 round_trippers.go:466] curl -v -XGET  -H "Accept: application/com.github.proto-openapi.spec.v2@v1.0+protobuf" -H "User-Agent: kubectl/v1.26.7 (linux/amd64) kubernetes/84e1fc4" -H "Authorization: Bearer <masked>" 'https://172.30.0.1:443/openapi/v2?timeout=32s'
I0912 09:08:58.090697    1199 round_trippers.go:510] HTTP Trace: Dial to tcp:172.30.0.1:443 succeed
I0912 09:08:58.113788    1199 round_trippers.go:553] GET https://172.30.0.1:443/openapi/v2?timeout=32s 200 OK in 30 milliseconds
I0912 09:08:58.113845    1199 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 1 ms TLSHandshake 9 ms ServerProcessing 8 ms Duration 30 ms
I0912 09:08:58.113865    1199 round_trippers.go:577] Response Headers:
I0912 09:08:58.113880    1199 round_trippers.go:580]     Cache-Control: no-cache, private
I0912 09:08:58.113897    1199 round_trippers.go:580]     Content-Type: application/octet-stream
I0912 09:08:58.113907    1199 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4cxxxx8e-axx6-4xx7-8xx3-5fxxxxxxxx67
I0912 09:08:58.113921    1199 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 03xxxxde-axx5-4xxd-axxe-faxxxxxxxx19
I0912 09:08:58.113936    1199 round_trippers.go:580]     Date: Tue, 12 Sep 2023 09:08:58 GMT
I0912 09:08:58.113948    1199 round_trippers.go:580]     Accept-Ranges: bytes
I0912 09:08:58.113961    1199 round_trippers.go:580]     Audit-Id: dfxxxxa8-9xx0-4xxa-axxa-07xxxxxxxxad
I0912 09:08:58.113974    1199 round_trippers.go:580]     Etag: "B36xxxxxxxx8110E9E3461FD41EE115CFD16CA5099127C7CF4779D744166xxxxxxxx0B4BF9918DD8AB6FE10BA7826E19A4413A25F51CD427xxxxxxxx2C46D93D"
I0912 09:08:58.113985    1199 round_trippers.go:580]     Last-Modified: Tue, 12 Sep 2023 08:25:40 GMT
I0912 09:08:58.113994    1199 round_trippers.go:580]     Vary: Accept-Encoding
I0912 09:08:58.114004    1199 round_trippers.go:580]     Vary: Accept
I0912 09:08:58.114013    1199 round_trippers.go:580]     X-Varied-Accept: application/com.github.proto-openapi.spec.v2@v1.0+protobuf
Killed
oom-test:/$
oom-test:/$ kubectl get configmap -n <namespace> --no-headers=true | wc -l
58
oom-test:/$

Kindly support with any info you can share.

When testing ConfigMap creation from my local PC, kubectl takes a whopping 1.5GB of RAM and 20% CPU utilization to create the ConfigMap.

The strange thing is, the same kubectl version in a different environment works perfectly fine, using hardly 124MB of RAM from the microservice.

I am using an alpine-3.18 image with kubectl copied to its /usr/bin/. The image in the other environment takes 124MB of RAM for the process to complete, while in the actual environment it takes more than 1.5GB of RAM for the same operation (it gets OOMKilled most of the time, and sometimes exits with a Client.Timeout or context cancellation while reading body error).
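Worth noting: both verbose traces above die right after the GET to /openapi/v2, the OpenAPI document kubectl downloads and parses in memory before apply, which can be very large on clusters with many CRDs. One way to compare the two environments is to check how big that document actually is in each. The helper below is a sketch; the server address and token path in the usage comment are the in-cluster defaults from the logs, shown as assumptions:

```shell
#!/bin/sh
# Sketch: report the size in bytes of the API server's /openapi/v2
# document, which kubectl fetches before 'apply'. Server URL and token
# are passed in by the caller.
openapi_size() {
  server="$1"
  token="$2"
  curl -sk -H "Authorization: Bearer ${token}" \
    "${server}/openapi/v2" | wc -c
}

# Example (in-cluster defaults, as seen in the traces above):
# openapi_size https://172.30.0.1:443 \
#   "$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
```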

Can you share what to check and how to debug such high RAM utilization in the said environment?
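As a starting point for the debugging question: one low-tech way to watch the usage climb is to poll the same cgroup file shown in the session above while the command runs. This is a sketch assuming cgroup v1 paths (as in the transcripts); on cgroup v2 the file would be /sys/fs/cgroup/memory.current instead.

```shell
#!/bin/sh
# Sketch: run a command and report the peak cgroup memory usage observed
# while it runs. Assumes cgroup v1 (as in the session above); falls back
# to 0 if the file is absent so the helper still runs elsewhere.
MEMFILE="/sys/fs/cgroup/memory/memory.usage_in_bytes"

peak_mem() {
  "$@" &
  pid=$!
  max=0
  while kill -0 "$pid" 2>/dev/null; do
    cur=$(cat "$MEMFILE" 2>/dev/null || echo 0)
    [ "$cur" -gt "$max" ] 2>/dev/null && max=$cur
    sleep 0.2
  done
  wait "$pid"
  rc=$?
  echo "peak cgroup usage: ${max} bytes (exit ${rc})"
  return "$rc"
}

# Example:
# peak_mem kubectl apply -n "${NAMESPACE}" -f /tmp/cm
```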