Cluster information:
Kubernetes version: v1.27.10
Cloud being used: AWS
Installation method: kubeadm
Host OS: Amazon Linux 2 (RHEL/CentOS/Fedora family)
CNI and version: calico (3.26.3)
CRI and version: containerd (1.7.2)
StorageClass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: gp2
parameters:
  fsType: ext4
  type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
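Since volumeBindingMode is WaitForFirstConsumer, the PV is only provisioned after the pod is scheduled, and it should be created in the scheduled node's AZ. For reference, the zone the provisioner actually pinned the PV to can be read from the PV's nodeAffinity (the PV name below is a placeholder; the real one appears in the PVC's VOLUME column):

kubectl get pvc -n xxxx
kubectl get pv pvc-1234abcd -o jsonpath='{.spec.nodeAffinity.required}'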
StatefulSet.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: xxxx
  name: xxxx
  namespace: xxxx
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Retain
    whenScaled: Retain
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: xxxx
  serviceName: xxxx-headless
  template:
    metadata:
      labels:
        app: xxxx
    spec:
      containers:
      - command:
        - xxxx
        env:
        - name: provider
          value: aws
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: xxxx
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - xxxx
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: xxxx
            port: xxxx
            scheme: HTTPS
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: xxxx
        ports:
        - containerPort: 4222
          protocol: TCP
        - containerPort: 4224
          protocol: TCP
        - containerPort: 4226
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: xxxx
            port: xxxx
            scheme: HTTPS
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: "1"
            memory: 1500Mi
          requests:
            cpu: "1"
            memory: 1000Mi
        startupProbe:
          failureThreshold: 30
          httpGet:
            path: xxxx
            port: 4224
            scheme: HTTPS
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /opt/nats/data/jetstream
          name: nats-pvmount
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: ecr-credentials
      nodeSelector:
        service: worker
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: nats-pvmount
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 50Gi
      volumeMode: Filesystem
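For the first replica, the volumeClaimTemplate above produces a PVC named nats-pvmount-<statefulset-name>-0 in the xxxx namespace. Its binding status and events (including any provisioning errors) can be checked with:

kubectl describe pvc nats-pvmount-xxxx-0 -n xxxx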
I cross-checked in the AWS console: the EBS volume and the node the pod was scheduled on are both in the same AZ, us-east-1a. Even so, scheduling still fails with the same "volume node affinity conflict" error.
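Since the scheduler matches the PV's spec.nodeAffinity against each candidate node's labels, I suspect the conflict could also come from a label-key mismatch (legacy failure-domain.beta.kubernetes.io/zone vs. topology.kubernetes.io/zone) or from the service: worker nodeSelector leaving no eligible node in that zone, rather than the zone value itself. These are the two sides I'm comparing (pvc-1234abcd is a placeholder PV name):

kubectl get pv pvc-1234abcd -o jsonpath='{.spec.nodeAffinity.required.nodeSelectorTerms}'
kubectl get nodes -l service=worker -o custom-columns='NAME:.metadata.name,ZONE:.metadata.labels.topology\.kubernetes\.io/zone'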
Can someone please help me fix this error? Let me know if you need more info. Happy to share!
Thanks,
Naren