Cluster information:
Kubernetes version:
Cloud being used: (put bare-metal if not on a public cloud) bare-metal
Installation method: yaml (docker desktop)
Host OS: Windows/WSL2
CNI and version:
CRI and version:
```yaml
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: "abc-so"
  namespace: abc-spark-apps
spec:
  type: Scala
  mode: cluster
  image: "localhost:5000/abc-img"
  imagePullPolicy: Always
  mainClass: com.abc
  mainApplicationFile: "local:///abc/abc.jar"
  sparkVersion: "3.1.1"
  volumes:
    - name: azure-volume
      hostPath:
        path: /mnt/c/Users/myuser/.azure
        type: Directory
  driver:
    cores: 1
    coreLimit: "1200m"
    memory: "512m"
    volumeMounts:
      - name: azure-volume
        mountPath: /home/.azure
    labels:
      version: 3.1.1
    serviceAccount: spark-operator-spark
    envVars:
      AZURE_CONFIG_DIR: /home/.azure
  executor:
    cores: 1
    instances: 1
    memory: "512m"
    labels:
      version: 3.1.1
    volumeMounts:
      - name: azure-volume
        mountPath: /home/.azure
    envVars:
      AZURE_CONFIG_DIR: /home/.azure
```
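
This is the alternative `hostPath` stanza I also tried, on my understanding that Docker Desktop's WSL2 node exposes the Windows `C:` drive under `/run/desktop/mnt/host/c` (that path is my assumption about the Docker Desktop convention, not something I've confirmed):

```yaml
# Alternative volume definition I tried instead of the one above.
# /run/desktop/mnt/host/c is where I believe Docker Desktop's WSL2 node
# surfaces the Windows C: drive -- this may differ per setup.
volumes:
  - name: azure-volume
    hostPath:
      path: /run/desktop/mnt/host/c/Users/myuser/.azure
      type: Directory
```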
I know this has been discussed in several forums, but I just can't get it to work on my local machine. I am using Docker Desktop 4.20.1 (110738) on Windows, with WSL 2 integration enabled for my Ubuntu-22.04 distro. I am fairly new to Kubernetes/Docker.

I don't understand why the configuration above does not mount the hostPath. I tried /run/desktop/mnt/host/c/Users/myuser/.azure as well, but that didn't work either. When I describe the pod's YAML, the hostPath volume doesn't appear at all. I don't understand why, or what I am missing. Any help is appreciated!