I’m new to Kubernetes and AWS and tried to set up a cluster there. Now I have a problem deleting that Kubernetes cluster from AWS EC2. The cluster was set up using kOps.
Then I accidentally deleted the EC2 instance from which the cluster was created. First I tried to delete the EC2 nodes and master, but then recalled that Kubernetes keeps them alive and re-creates them each time the instances are terminated. I tried connecting to a master node and installed kOps there, but I couldn’t delete the cluster from there either. I used the command kops delete cluster --name=*name* --state=s3://*bucket-name*, but no luck.
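For reference, here is roughly what I ran from the master node (a sketch, with my real cluster and bucket names replaced by placeholders):

```
# Preview of what would be deleted (kOps only previews without --yes)
kops delete cluster --name=<cluster-name> --state=s3://<bucket-name>

# Actual deletion attempt
kops delete cluster --name=<cluster-name> --state=s3://<bucket-name> --yes
```

Neither got the instances to stay terminated.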
So I thought that deleting the S3 bucket would solve my problem, but it only made things worse: now there is no state at all. I sent a message to AWS Support, but they replied only with generic instructions for deleting EC2 resources (delete the load balancer, services, etc.). So now I cannot connect to the cluster, because the credentials are lost, and I just don’t know what to do in this situation.
Could you please help me?
Cluster information:
Kubernetes version:
Cloud being used: AWS
Installation method: kOps
Host OS: Windows
Yup, it’s a training cluster. I tried to delete the VPCs and instances, but the instances keep re-creating, and the VPC is still attached to them. When I try to detach it, I get this error:
The network interface at device index 0 and networkCard index 0 cannot be detached.
If the instances keep recreating without a control plane, kOps probably deployed Auto Scaling Groups. Check that section in the EC2 console and delete them; that should terminate all the instances once and for all. After that, remove the rest of the resources.
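If you’d rather script it than click through the console, here is a minimal sketch with the AWS CLI (assuming your credentials are configured; the group names below follow the usual kOps naming convention, so substitute your own):

```
# List all Auto Scaling Groups; kOps-created ones usually look like
# nodes.<cluster-name> and master-<az>.masters.<cluster-name>
aws autoscaling describe-auto-scaling-groups \
  --query "AutoScalingGroups[].AutoScalingGroupName"

# Force-delete each group belonging to the cluster; --force-delete also
# terminates the instances the group manages, so they stop coming back
aws autoscaling delete-auto-scaling-group \
  --auto-scaling-group-name nodes.<cluster-name> --force-delete
```

Once the groups are gone, the instances won’t be re-created, and you should be able to delete the load balancer, volumes, and network interfaces, and finally the VPC.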