PVC and PV

Hi,
When a PVC is created and a pod uses it, is this PVC created on the master node or on the worker node where the pod runs?
When a pod is exposed by a Service, does that Service run on the master node?

more pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 9Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/y"

more pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi

kubectl get pvc
NAME            STATUS   VOLUME           CAPACITY   ACCESS MODES
task-pv-claim   Bound    task-pv-volume   9Gi        RWO

Is it correct to see 9Gi instead of 4Gi?
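
For reference, a pod consuming this claim would look roughly like the sketch below; the pod name, container name, image, and mount path are assumptions for illustration, not taken from the post above.

apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod              # assumed pod name
spec:
  containers:
    - name: app                  # assumed container name and image
      image: nginx
      volumeMounts:
        - name: task-pv-storage
          mountPath: /usr/share/nginx/html   # assumed mount path
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim   # the claim defined above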

PVs of type hostPath are intended to be used in single-node clusters; otherwise they won't work correctly.

The storage runs from the perspective of the WORKER node. In your example, as feloy noted, you're using local storage, so if you have multiple workers and the pod requires persistent storage, this is not a viable option for you. You need to implement a shared-storage plane. Take a look at the Kubernetes Storage Types for the options available. If you're in a cloud environment, your native cloud storage is by far the easiest and most efficient option.
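
As one concrete example of a shared-storage option, here is a minimal sketch of an NFS-backed PV; the server address and export path are placeholders, not values from this thread.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-nfs
spec:
  storageClassName: manual
  capacity:
    storage: 9Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 10.0.0.10        # placeholder NFS server address
    path: /exports/task      # placeholder export path

Unlike hostPath, the data lives on the NFS server, so the pod can be rescheduled onto any worker and still reach the same volume.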

Kubernetes Services define which pods should be exposed to the public (outside of the cluster): something like a firewall, but not exactly. If you look at your workers' networking stack (netstat -ntlp) when you have a pod with a Service deployed, you'll see that EVERY worker is holding open the same set of ports, and the process that owns those ports is kube-proxy. This is why you can hit the application by targeting any one of your worker nodes directly, even though a pod with 1 container is only ever running on 1 worker at any given time.
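
To make that concrete, here is a minimal NodePort Service sketch; the names, labels, and port numbers are assumptions for illustration only.

apiVersion: v1
kind: Service
metadata:
  name: task-svc             # assumed Service name
spec:
  type: NodePort
  selector:
    app: task                # assumed pod label
  ports:
    - port: 80               # Service port inside the cluster
      targetPort: 8080       # assumed container port
      nodePort: 30080        # opened by kube-proxy on every worker

With this deployed, netstat -ntlp on any worker should show port 30080 held open by kube-proxy, whichever node the pod actually runs on.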