How to initialize a Kubernetes cluster

Hi, I am trying to initialize a Kubernetes cluster on my EC2 instance (master node) using an Ansible script. I am getting the error below three times:

fatal: [3.15.220.116]: FAILED! => {"changed": true, "cmd": "kubeadm init --control-plane-endpoint :6443 --upload-certs --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem --ignore-preflight-errors=FileContent--proc-sys-net-bridge-bridge-nf-call-iptables", "delta": "0:00:00.535499", "end": "2022-05-05 12:13:22.182358", "msg": "non-zero return code", "rc": 1, "start": "2022-05-05 12:13:21.646859"}

stderr:

    [WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
    [WARNING Mem]: the system RAM (964 MB) is less than the minimum 1700 MB
    error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR Port-6443]: Port 6443 is in use
    [ERROR Port-10259]: Port 10259 is in use
    [ERROR Port-10257]: Port 10257 is in use
    [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
    [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
    [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
    [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
    [ERROR Port-10250]: Port 10250 is in use
    [ERROR Port-2379]: Port 2379 is in use
    [ERROR Port-2380]: Port 2380 is in use
    [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
    [preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
    To see the stack trace of this error execute with --v=5 or higher

stdout:

    [init] Using Kubernetes version: v1.24.0
    [preflight] Running pre-flight checks
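
(For reference, a quick way to see what is already holding those ports and files on the node; a diagnostic sketch, assumed to be run on the master itself:)

    # Show which processes are listening on the ports kubeadm complains about.
    sudo ss -tlnp | grep -E ':(6443|10250|10257|10259|2379|2380)'
    # Show the existing static-pod manifests and the etcd data directory.
    ls -l /etc/kubernetes/manifests/
    sudo ls -A /var/lib/etcd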

I used the Ansible script below:

  - name: Initializing Kubernetes cluster
    shell: kubeadm init --control-plane-endpoint :6443 --upload-certs --pod-network-cidr={{ cidr }} --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem --ignore-preflight-errors=FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
    register: output
    changed_when: true

  - name: "Creating .kube directory"
    file:
      path: $HOME/.kube
      state: directory

  - name: "copy admin file"
    copy:
      remote_src: yes
      src: /etc/kubernetes/admin.conf
      dest: $HOME/.kube/config

  - name: "Change owner of .kube/config"
    shell: "sudo chown $(id -u):$(id -g) $HOME/.kube/config"
I am using cidr: "192.168.0.0/16".
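
(A side note on the init task above: one way to stop it from re-running, and failing preflight, on an already-initialized node is the shell module's creates argument; a minimal sketch, assuming /etc/kubernetes/admin.conf is only written by a successful kubeadm init:)

  - name: Initializing Kubernetes cluster (skipped once initialized)
    shell: >
      kubeadm init --control-plane-endpoint :6443 --upload-certs
      --pod-network-cidr={{ cidr }}
      --ignore-preflight-errors=NumCPU
      --ignore-preflight-errors=Mem
    args:
      # admin.conf exists only after a successful init, so reruns of the
      # playbook skip this task instead of hitting the preflight errors.
      creates: /etc/kubernetes/admin.conf
    register: output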

Can anyone please advise me on this issue?

Hey @Vinay_Kumar,
I think your instance resources are not sufficient for the k8s cluster: kubeadm wants at least 2 CPUs and 1700 MB of RAM, and your node reports 1 CPU and 964 MB.
Choose at least a t2.medium (2 vCPUs, 4 GiB RAM).
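
Also, only the CPU and memory lines are warnings; the fatal errors (ports in use, manifests already existing, /var/lib/etcd not empty) mean a previous kubeadm init already ran at least partway on that node. Clean the node up before re-running the playbook; roughly (a shell sketch, run on the master node):

    # Tear down the partially initialized control plane so the next
    # kubeadm init can pass its preflight checks.
    sudo kubeadm reset -f
    # kubeadm reset clears /etc/kubernetes/manifests and /var/lib/etcd;
    # removing the stale kubeconfig copy is optional extra tidy-up.
    rm -f $HOME/.kube/config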

Check this information before installing a k8s cluster on EC2 instances.