A week in as a full-stack YAML developer

apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "sla"
  namespace: "default"
  labels:
    app: "sla"

spec:
  replicas: 1
  selector:
    matchLabels:
      app: "sla"
  template:
    metadata:
      labels:
        app: "sla"
    spec:
      containers:
      - name: "sla-sha256"
        image: "hendry/sla"
---
apiVersion: v1
kind: Service
metadata:
  name: sla
spec:
  type: LoadBalancer
  ports:
  - port: 9000
  selector:
    app: sla

I’m new to k8s, and I hope you don’t mind the newb questions. I’m on your Slack’s #kubernetes-novice, but it doesn’t appear very active today. It took me a week to get the above going. I still don’t understand the need for the metadata/selector.

I’m generally confused about whether my YAML is valid or not, and about which values can be omitted, since I prefer to rely on defaults. Do you recommend any tools for this? Any linting tools? I’m an Arch/vim CLI user.
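
For what it’s worth, the only sanity check I’ve figured out so far is a client-side dry run with plain kubectl (I’m not sure how much it actually validates beyond the schema):

kubectl apply --dry-run=client -f deploy.yaml
kubectl explain deployment.spec.selector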

My first question: if I push a new image to hendry/sla, how do I trigger an update to my deployment? By editing image: and running kubectl apply -f deploy.yaml, as sketched below? What if I want to roll back? Is there a more automated way where the cluster notices an updated image and just deploys it?
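
To make that concrete, here is roughly what I mean (the v2 tag is just an example; right now I’m not tagging the image at all):

# edit deploy.yaml so the container uses e.g. image: hendry/sla:v2, then
kubectl apply -f deploy.yaml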

Secondly, I know my service listens on port 9000 by default; how would you suggest I expose it on, say, port 7000 instead? Something like the snippet below, maybe?
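
This is just my guess at using targetPort (untested), keeping the container on 9000 but exposing the Service on 7000:

spec:
  type: LoadBalancer
  ports:
  - port: 7000
    targetPort: 9000
  selector:
    app: sla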

Any other resources you could please point me to? Thank you!

Hi, I tried to explain the “why” of the metadata/selector here: https://github.com/feloy/aoap/blob/master/README.md#replicaset-controller

Finally, if you could use a selector different from the template’s metadata/labels, the controller would keep creating Pods it never recognizes as its own, so it would spawn an infinite number of new instances without the replica count ever being matched.
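
Concretely, in your Deployment above these are the two blocks that have to agree; the selector is how the ReplicaSet recognizes the Pods it owns:

  selector:
    matchLabels:
      app: "sla"
  template:
    metadata:
      labels:
        app: "sla"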

Each time you update your Deployment, a new ReplicaSet is created with the new template, its replica count is set to N, and the previous one’s is set to 0. When you change back to an old image, the old ReplicaSet is “woken up”.
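
One way to watch this happen (assuming the Deployment is named sla, as in your manifest) is with the standard rollout commands:

kubectl get replicasets                 # one ReplicaSet per template version; old ones scaled to 0
kubectl rollout history deployment/sla  # list the recorded revisions
kubectl rollout undo deployment/sla     # scale the previous ReplicaSet back up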