I have an application running on the single-node Kubernetes cluster that ships with Docker Desktop. Now I want to move that application to a 4-node EKS cluster for business use.
The application consists of Deployments, StatefulSets, Services, and ConfigMaps.
Can you guide me through the steps? Detailed steps would be much appreciated.
@Sagar_Talreja
If your application already runs on a single-node Kubernetes cluster (Docker Desktop) and you want to move it to an Amazon EKS cluster with multiple nodes, the migration is mostly about making the manifests production-ready and cloud-compatible.
High-level steps:
1. **Audit existing resources.** Export your Deployments, StatefulSets, Services, and ConfigMaps, and check for local-only assumptions such as `hostPath` volumes or hard-coded IPs.
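As a rough sketch (assuming your app lives in a namespace called `myapp` — adjust to yours), the export and audit could look like:

```shell
# Export each resource type from the current (Docker Desktop) context
for kind in deployments statefulsets services configmaps; do
  kubectl get "$kind" -n myapp -o yaml > "backup-$kind.yaml"
done

# Scan the exports for settings that won't survive the move to EKS
grep -n "hostPath" backup-*.yaml
grep -nE "storageClassName|nodePort|hostNetwork" backup-*.yaml
```

Note that exported YAML carries cluster-specific fields (`status`, `uid`, `resourceVersion`, etc.) that you should strip before re-applying on EKS.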
2. **Prepare container images.** Make sure every image is pullable from EKS (a public registry or Amazon ECR), and update the image references in your manifests.
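Pushing a local image to ECR could look like this (the account ID, region, and image name below are placeholders):

```shell
# Assumed values - replace with your own
ACCOUNT=123456789012
REGION=us-east-1
REPO=myapp

# Create the ECR repository (once)
aws ecr create-repository --repository-name "$REPO" --region "$REGION"

# Authenticate Docker to ECR, then tag and push the local image
aws ecr get-login-password --region "$REGION" \
  | docker login --username AWS --password-stdin "$ACCOUNT.dkr.ecr.$REGION.amazonaws.com"

docker tag myapp:latest "$ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPO:v1"
docker push "$ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPO:v1"
```

The manifests then reference `123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:v1` instead of the local image name.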
3. **Fix storage for StatefulSets.** Docker Desktop typically uses `hostPath` volumes, which won't work on EKS. Replace them with PersistentVolumeClaims backed by EBS via the AWS EBS CSI driver.
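A minimal sketch of what that looks like (the class name `ebs-gp3`, volume type, and size are assumptions — size to your workload):

```yaml
# StorageClass backed by the EBS CSI driver
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
---
# In the StatefulSet, replace any hostPath volume with a volumeClaimTemplate
# (only the relevant fragment of the StatefulSet spec is shown)
volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: ebs-gp3
      resources:
        requests:
          storage: 10Gi
```

`WaitForFirstConsumer` matters here: it delays provisioning until the pod is scheduled, so the EBS volume is created in the same availability zone as the node that will mount it.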
4. **Add resource requests and limits.** These are critical in a multi-node production cluster for proper scheduling and stability.
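Per container, that's a fragment like the following (the numbers are placeholders — base them on observed usage):

```yaml
# Inside each container spec of a Deployment/StatefulSet
resources:
  requests:      # what the scheduler reserves on a node
    cpu: "250m"
    memory: "256Mi"
  limits:        # hard caps enforced at runtime
    cpu: "500m"
    memory: "512Mi"
```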
5. **Create the EKS cluster.** Use `eksctl` to create a cluster with 4 worker nodes and configure `kubectl` access.
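For example (cluster name, region, and instance type are assumptions):

```shell
eksctl create cluster \
  --name myapp-cluster \
  --region us-east-1 \
  --nodes 4 \
  --node-type m5.large \
  --with-oidc

# eksctl updates your kubeconfig automatically; confirm access
kubectl get nodes
```

`--with-oidc` enables IAM Roles for Service Accounts (IRSA), which the add-ons in the next step rely on.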
6. **Install required add-ons.** You'll need the AWS EBS CSI driver (for persistent storage) and the AWS Load Balancer Controller (if exposing services externally).
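Abbreviated sketch, assuming the cluster from the previous step (the account ID and role name are placeholders; the Load Balancer Controller additionally needs its own IAM policy and service account per the AWS docs):

```shell
# IAM role for the EBS CSI controller, then the driver as a managed add-on
eksctl create iamserviceaccount \
  --cluster myapp-cluster --namespace kube-system \
  --name ebs-csi-controller-sa \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve --role-only --role-name EBSCSIDriverRole

eksctl create addon --cluster myapp-cluster --name aws-ebs-csi-driver \
  --service-account-role-arn arn:aws:iam::123456789012:role/EBSCSIDriverRole

# AWS Load Balancer Controller via Helm (IAM setup omitted here)
helm repo add eks https://aws.github.io/eks-charts
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system --set clusterName=myapp-cluster
```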
7. **Deploy in the right order.** Namespace → ConfigMaps → Services → Deployments → StatefulSets.
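If your manifests are organised into directories (names assumed), the order above translates to:

```shell
kubectl create namespace myapp
kubectl apply -n myapp -f configmaps/
kubectl apply -n myapp -f services/
kubectl apply -n myapp -f deployments/
kubectl apply -n myapp -f statefulsets/
```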
8. **Validate and test.** Check pod scheduling across nodes, verify PVC binding, test service access, and simulate pod failures to confirm resilience.
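A few checks worth running (namespace assumed):

```shell
# Pods should be spread across the 4 nodes (see the NODE column)
kubectl get pods -n myapp -o wide

# PVCs should show STATUS=Bound against EBS-backed volumes
kubectl get pvc,pv -n myapp

# Simulate a failure: delete a pod and watch it get rescheduled
kubectl delete pod <some-pod-name> -n myapp
kubectl get pods -n myapp -w
```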
9. **Production hardening.** Move sensitive configuration to Secrets, enable monitoring/logging, and apply proper IAM permissions.
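For instance, moving a credential out of a ConfigMap into a Secret (key and value are placeholders):

```shell
kubectl create secret generic myapp-db -n myapp \
  --from-literal=DB_PASSWORD='change-me'
```

The pod spec then pulls it in via `envFrom`/`secretKeyRef` instead of a plain ConfigMap entry.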