I'm trying to create and manage an EKS cluster on AWS using Cluster API. I applied a manifest that creates a cluster named c1-eks.
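I saved all of the manifests below into a single file (c1-eks.yaml is just my local name for it) and applied it with:
$ kubectl apply -f c1-eks.yaml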
This is the YAML file:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: c1-eks
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: c1-eks-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AWSCluster
    name: c1-eks
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSCluster
metadata:
  name: c1-eks
  namespace: default
spec:
  region: us-east-1
  sshKeyName: default
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: c1-eks-control-plane
  namespace: default
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        extraArgs:
          cloud-provider: aws
      controllerManager:
        extraArgs:
          cloud-provider: aws
    initConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          cloud-provider: aws
        name: '{{ ds.meta_data.local_hostname }}'
    joinConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          cloud-provider: aws
        name: '{{ ds.meta_data.local_hostname }}'
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: AWSMachineTemplate
      name: c1-eks-control-plane
  replicas: 3
  version: v1.23.3
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachineTemplate
metadata:
  name: c1-eks-control-plane
  namespace: default
spec:
  template:
    spec:
      iamInstanceProfile: control-plane.cluster-api-provider-aws.sigs.k8s.io
      instanceType: t3.large
      sshKeyName: default
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: c1-eks-md-0
  namespace: default
spec:
  clusterName: c1-eks
  replicas: 3
  selector:
    matchLabels: null
  template:
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: c1-eks-md-0
      clusterName: c1-eks
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: AWSMachineTemplate
        name: c1-eks-md-0
      version: v1.23.3
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachineTemplate
metadata:
  name: c1-eks-md-0
  namespace: default
spec:
  template:
    spec:
      iamInstanceProfile: nodes.cluster-api-provider-aws.sigs.k8s.io
      instanceType: t3.large
      sshKeyName: default
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: c1-eks-md-0
  namespace: default
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            cloud-provider: aws
          name: '{{ ds.meta_data.local_hostname }}'
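For context, the management cluster was initialized with the AWS infrastructure provider beforehand, roughly following the CAPA quick start (AWS credentials encoded with clusterawsadm as documented):
$ export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
$ clusterctl init --infrastructure aws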
This is the result of kubectl get cluster:
$ kubectl get cluster
NAME     PHASE         AGE   VERSION
c1-eks   Provisioned   34m
When I describe the cluster with clusterctl, it reports an InstanceProvisionFailed error:
$ clusterctl describe cluster c1-eks
NAME                                                        READY  SEVERITY  REASON                       SINCE  MESSAGE
/c1-eks                                                     False  Error     InstanceProvisionFailed @ Machine/c1-eks-control-plane-w5rrd  29m  1 of 2 completed
├─ClusterInfrastructure - AWSCluster/c1-eks                 True                                          29m
├─ControlPlane - KubeadmControlPlane/c1-eks-control-plane   False  Error     InstanceProvisionFailed @ Machine/c1-eks-control-plane-w5rrd  29m  1 of 2 completed
│ └─Machine/c1-eks-control-plane-w5rrd                      False  Error     InstanceProvisionFailed      29m    1 of 2 completed
└─Workers
  └─MachineDeployment/c1-eks-md-0                           False  Warning   WaitingForAvailableMachines  34m    Minimum availability requires 3 replicas, current 0 available
    └─3 Machines...                                         False  Info      WaitingForBootstrapData     29m    See c1-eks-md-0-66c5f554ff-96qd8, c1-eks-md-0-66c5f554ff-mvnxw, ...
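For more detail than this summary, clusterctl can also print every condition on every object:
$ clusterctl describe cluster c1-eks --show-conditions all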
This is the result of kubectl describe Machine/c1-eks-control-plane-w5rrd:
$ kubectl describe Machine/c1-eks-control-plane-w5rrd
Name:         c1-eks-control-plane-w5rrd
Namespace:    default
Labels:       cluster.x-k8s.io/cluster-name=c1-eks
              cluster.x-k8s.io/control-plane=
Annotations:  controlplane.cluster.x-k8s.io/kubeadm-cluster-configuration:
                {"etcd":{},"networking":{},"apiServer":{"extraArgs":{"cloud-provider":"aws"}},"controllerManager":{"extraArgs":{"cloud-provider":"aws"}},"...
API Version:  cluster.x-k8s.io/v1beta1
Kind:         Machine
Metadata:
  Creation Timestamp:  2022-02-14T07:59:00Z
  Finalizers:
    machine.cluster.x-k8s.io
  Generation:  2
  Managed Fields:
    API Version:  cluster.x-k8s.io/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:controlplane.cluster.x-k8s.io/kubeadm-cluster-configuration:
        f:finalizers:
          .:
          v:"machine.cluster.x-k8s.io":
        f:labels:
          .:
          f:cluster.x-k8s.io/cluster-name:
          f:cluster.x-k8s.io/control-plane:
        f:ownerReferences:
          .:
          k:{"uid":"1b2843be-184d-405d-88b6-1694a1fff5c4"}:
            .:
            f:apiVersion:
            f:blockOwnerDeletion:
            f:controller:
            f:kind:
            f:name:
            f:uid:
      f:spec:
        .:
        f:bootstrap:
          .:
          f:configRef:
            .:
            f:apiVersion:
            f:kind:
            f:name:
            f:namespace:
            f:uid:
          f:dataSecretName:
        f:clusterName:
        f:failureDomain:
        f:infrastructureRef:
          .:
          f:apiVersion:
          f:kind:
          f:name:
          f:namespace:
          f:uid:
        f:version:
      f:status:
        .:
        f:bootstrapReady:
        f:conditions:
        f:lastUpdated:
        f:observedGeneration:
        f:phase:
    Manager:    manager
    Operation:  Update
    Time:       2022-02-14T07:59:01Z
  Owner References:
    API Version:           controlplane.cluster.x-k8s.io/v1beta1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  KubeadmControlPlane
    Name:                  c1-eks-control-plane
    UID:                   1b2843be-184d-405d-88b6-1694a1fff5c4
  Resource Version:  31839
  UID:               76bdc88e-8251-4031-b935-1b8a11b3412c
Spec:
  Bootstrap:
    Config Ref:
      API Version:     bootstrap.cluster.x-k8s.io/v1beta1
      Kind:            KubeadmConfig
      Name:            c1-eks-control-plane-wrjqd
      Namespace:       default
      UID:             272eba72-943d-4ff6-973b-869b92677caf
    Data Secret Name:  c1-eks-control-plane-wrjqd
  Cluster Name:        c1-eks
  Failure Domain:      us-east-1b
  Infrastructure Ref:
    API Version:  infrastructure.cluster.x-k8s.io/v1beta1
    Kind:         AWSMachine
    Name:         c1-eks-control-plane-zvdbx
    Namespace:    default
    UID:          7c4bd14c-5931-4bd9-a1d5-ba4846b02c95
  Version:  v1.23.3
Status:
  Bootstrap Ready:  true
  Conditions:
    Last Transition Time:  2022-02-14T07:59:03Z
    Message:               1 of 2 completed
    Reason:                InstanceProvisionFailed
    Severity:              Error
    Status:                False
    Type:                  Ready
    Last Transition Time:  2022-02-14T07:59:01Z
    Status:                True
    Type:                  BootstrapReady
    Last Transition Time:  2022-02-14T07:59:03Z
    Message:               0 of 3 completed
    Reason:                InstanceProvisionFailed
    Severity:              Error
    Status:                False
    Type:                  InfrastructureReady
    Last Transition Time:  2022-02-14T07:59:01Z
    Reason:                WaitingForNodeRef
    Severity:              Info
    Status:                False
    Type:                  NodeHealthy
  Last Updated:         2022-02-14T07:59:01Z
  Observed Generation:  2
  Phase:                Provisioning
Events:  <none>
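Since the Machine only surfaces the rolled-up "1 of 2 completed" message, my next step was going to be inspecting the underlying AWSMachine and the CAPA controller logs for the actual EC2 error, along these lines (capa-system/capa-controller-manager are the default names created by clusterctl init; adjust if yours differ):
$ kubectl describe awsmachine c1-eks-control-plane-zvdbx
$ kubectl logs -n capa-system deployment/capa-controller-manager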
The KubeadmControlPlane is also not initialized, which means I can't apply a CNI solution to the cluster yet:
$ kubectl get kubeadmcontrolplane
NAME                   CLUSTER   INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE   VERSION
c1-eks-control-plane   c1-eks                                         1                  1         1             27m   v1.23.3
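Once the control plane initializes, my plan was to fetch the workload cluster's kubeconfig and apply Calico as the CNI, as in the Cluster API quick start, with something like:
$ clusterctl get kubeconfig c1-eks > c1-eks.kubeconfig
$ kubectl --kubeconfig=./c1-eks.kubeconfig apply -f https://docs.projectcalico.org/manifests/calico.yaml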
What is causing the InstanceProvisionFailed error on the control plane machine, and how can I get the cluster to finish provisioning? Any help would be appreciated.