Kubernetes Deployment Issues for My CapCut Resource Website

Hi everyone,

I’m running a CapCut-related resource website, hosting tutorials and templates, and have containerized the application for deployment on a Kubernetes cluster. However, I’m facing a few technical issues:

  1. Pod Scaling Problems: During high-traffic periods, the Horizontal Pod Autoscaler (HPA) doesn’t scale the pods quickly enough, resulting in slow response times for users. I’ve configured the HPA to scale on CPU usage with a target of 50%, but scaling seems delayed.
  2. Persistent Storage Challenges: The website allows users to upload CapCut project templates, which are stored on a persistent volume. Occasionally, uploads fail with errors like:

Unable to attach or mount volumes: timed out waiting for the condition

  3. Ingress Configuration Issues: I’m using an NGINX ingress controller for routing traffic, but some users report 404 errors when trying to access certain URLs. The paths work fine locally but fail intermittently on the live cluster.
  4. Resource Limits: Despite configuring resource requests and limits for the pods, I’m seeing OOMKilled errors in the logs. The application is built with Node.js, and I suspect a memory leak, but I’m unsure how to confirm this in a Kubernetes environment.
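For reference, here is roughly what my HPA looks like, plus the scale-up tuning I’m considering. This is a minimal sketch — the Deployment name `capcut-web` and the replica bounds are placeholders, and the `behavior` block is the `autoscaling/v2` knob that controls how aggressively scale-up reacts:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: capcut-web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: capcut-web            # hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # the 50% CPU target mentioned above
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 0   # react to spikes immediately
      policies:
        - type: Percent
          value: 100                  # allow doubling replicas per period
          periodSeconds: 15
```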
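On the volume-mount timeouts: my understanding is that a GCE Persistent Disk claimed with `ReadWriteOnce` can only be attached to one node at a time, so if the HPA scales replicas onto other nodes, their mounts time out. A sketch of the claim as I believe it should look (names and sizes are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: capcut-templates        # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce             # GCE PD attaches to a single node at a time
  storageClassName: standard-rwo
  resources:
    requests:
      storage: 50Gi
```

If more than one replica needs the same volume, I assume something that supports `ReadWriteMany` (e.g. Filestore) or object storage would be needed instead — corrections welcome.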

Here’s a summary of my setup:

  • Kubernetes: v1.27
  • Cloud Provider: GKE (Google Kubernetes Engine)
  • Persistent Storage: GCE Persistent Disk
  • Ingress Controller: NGINX
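For the 404s, here is the shape of my ingress rule. The host, path, and service name are placeholders — my guess is the issue relates to `pathType` (`Prefix` matches subpaths, `Exact` does not), so I’m including that explicitly:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: capcut-web-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: example.com                # placeholder host
      http:
        paths:
          - path: /templates
            pathType: Prefix           # matches /templates and everything below it
            backend:
              service:
                name: capcut-web       # hypothetical Service name
                port:
                  number: 80
```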

I’ve tried increasing the CPU and memory requests for the pods and debugging with kubectl logs and kubectl describe, but I’m still struggling to resolve these issues.
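One thing I’m experimenting with for the OOMKilled errors: capping the Node.js heap below the container memory limit via `NODE_OPTIONS`, so a leak produces an in-process out-of-memory error with a stack trace instead of a silent SIGKILL from the kernel. A container-spec excerpt (image name and values are placeholders):

```yaml
# Deployment container spec excerpt (hypothetical names/values)
containers:
  - name: capcut-web
    image: gcr.io/my-project/capcut-web:latest   # placeholder image
    env:
      - name: NODE_OPTIONS
        value: "--max-old-space-size=384"        # V8 heap below the 512Mi limit
    resources:
      requests:
        cpu: 250m
        memory: 256Mi
      limits:
        memory: 512Mi
```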

Has anyone faced similar challenges with Kubernetes deployments for web applications? Any tips on fine-tuning autoscaling, debugging ingress issues, or optimizing resource usage would be greatly appreciated!

Thanks in advance for your help!