First of all, please be patient with me; I'm new to Kubernetes.
I'm trying to set up a Nexus 3 instance as a Docker pull-through cache for my homelab, so that Docker images download faster over my slow internet connection.
I set up a 3-node cluster using Kubespray.
Then I deployed Nexus 3 using the Helm chart.
I deploy everything with Ansible.
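The relevant playbook task looks roughly like this (simplified; the file names are placeholders for my actual manifest paths):

- name: Apply storage and Nexus manifests
  kubernetes.core.k8s:
    state: present
    src: "{{ item }}"  # path to one of the manifest files shown below
  loop:
    - provisioner.yml
    - storageclass.yml
    - persistentvolume.yml
    - pvc.yml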
After my playbook finishes, the pod does not run because its storage cannot be bound. Here is the output of kubectl describe pod:
Name:           nexus3-nexus-repository-manager-77f5465f9f-2pdgj
Namespace:      nexus3
Priority:       0
Node:           <none>
Labels:         app.kubernetes.io/instance=nexus3
                app.kubernetes.io/name=nexus-repository-manager
                pod-template-hash=77f5465f9f
Annotations:    checksum/configmap-properties: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/nexus3-nexus-repository-manager-77f5465f9f
Containers:
  nexus-repository-manager:
    Image:      sonatype/nexus3:3.29.2
    Port:       8081/TCP
    Host Port:  0/TCP
    Liveness:   http-get http://:8081/ delay=30s timeout=10s period=30s #success=1 #failure=6
    Readiness:  http-get http://:8081/ delay=30s timeout=10s period=30s #success=1 #failure=6
    Environment:
      install4jAddVmParams:           -Xms1200M -Xmx1200M -XX:MaxDirectMemorySize=2G -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
      NEXUS_SECURITY_RANDOMPASSWORD:  true
    Mounts:
      /nexus-data from nexus-repository-manager-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from nexus3-nexus-repository-manager-token-v25cv (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  nexus-repository-manager-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  local-claim
    ReadOnly:   false
  nexus3-nexus-repository-manager-token-v25cv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nexus3-nexus-repository-manager-token-v25cv
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  4m57s  default-scheduler  0/3 nodes are available: 3 node(s) didn't find available persistent volumes to bind.
  Warning  FailedScheduling  4m57s  default-scheduler  0/3 nodes are available: 3 node(s) didn't find available persistent volumes to bind.
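In case it helps, these are the commands I have been using to check the state of the volume objects (names taken from my manifests below):

kubectl get pv
kubectl get pvc -n nexus3
kubectl describe pvc local-claim -n nexus3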
These are my manifests for the storage setup.
I tried changing the volume binding mode from WaitForFirstConsumer to Immediate, but then I get the error "no volume plugin matched name: kubernetes.io/no-provisioner".
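Concretely, the Immediate variant I tried was just the StorageClass below with the binding mode swapped, everything else unchanged:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate  # this variant produced the "no volume plugin matched" error
reclaimPolicy: Retain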
Provisioner:
---
# Source: provisioner/templates/provisioner.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-provisioner-config
  namespace: nexus3
data:
  storageClassMap: |
    local-storage:
      hostDir: /dev/sdb
      mountDir: /mnt/sdb
      blockCleanerCommand:
        - "/scripts/shred.sh"
        - "2"
      volumeMode: Filesystem
      fsType: ext4
      namePattern: "*"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: local-volume-provisioner
  namespace: nexus3
  labels:
    app: local-volume-provisioner
spec:
  selector:
    matchLabels:
      app: local-volume-provisioner
  template:
    metadata:
      labels:
        app: local-volume-provisioner
    spec:
      serviceAccountName: local-storage-admin
      containers:
        - image: "quay.io/external_storage/local-volume-provisioner:v2.1.0"
          imagePullPolicy: "Always"
          name: provisioner
          securityContext:
            privileged: true
          env:
            - name: MY_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - mountPath: /etc/provisioner/config
              name: provisioner-config
              readOnly: true
            - mountPath: /mnt/sdb
              name: local-storage
              mountPropagation: "HostToContainer"
      volumes:
        - name: provisioner-config
          configMap:
            name: local-provisioner-config
        - name: local-storage
          hostPath:
            path: /mnt/sdb
---
# Source: provisioner/templates/provisioner-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-storage-admin
  namespace: nexus3
---
# Source: provisioner/templates/provisioner-cluster-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-storage-provisioner-pv-binding
  namespace: nexus3
subjects:
  - kind: ServiceAccount
    name: local-storage-admin
    namespace: nexus3
roleRef:
  kind: ClusterRole
  name: system:persistent-volume-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-storage-provisioner-node-clusterrole
  namespace: nexus3
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-storage-provisioner-node-binding
  namespace: nexus3
subjects:
  - kind: ServiceAccount
    name: local-storage-admin
    namespace: nexus3
roleRef:
  kind: ClusterRole
  name: local-storage-provisioner-node-clusterrole
  apiGroup: rbac.authorization.k8s.io
StorageClass:
# Only create this for K8s 1.9+
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  namespace: nexus3
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
# Supported policies: Delete, Retain
reclaimPolicy: Retain
PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
  namespace: nexus3
spec:
  capacity:
    storage: 80Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/sdb
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node3
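Since the PV is pinned to node3, I can at least verify on that node that the disk is formatted and mounted where the PV expects it (assuming /dev/sdb is supposed to be mounted at /mnt/sdb):

lsblk -f /dev/sdb   # should show an ext4 filesystem on the disk
findmnt /mnt/sdb    # should show /dev/sdb mounted at /mnt/sdb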
PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-claim
  namespace: nexus3
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeMode: Block
  storageClassName: local-storage
Cluster information:
Kubernetes version: how do I get this information?
Installation method: Kubespray on Debian VMs
Host OS: Debian
CNI and version: how do I get this information?
CRI and version: how do I get this information?
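If these are the right commands for the missing details, I'm happy to post their output (my best guess at how to retrieve them):

kubectl version                   # client and server Kubernetes versions
kubectl get nodes -o wide         # the CONTAINER-RUNTIME column shows the CRI and its version
kubectl get pods -n kube-system   # the CNI plugin (e.g. Calico with Kubespray defaults) runs here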