How to deploy an application on different nodes


I use three VMs on my laptop in Oracle VirtualBox, as shown below:

NAME        STATUS   ROLES                  AGE    VERSION
localhost   Ready    control-plane,master   7d7h   v1.21.0
saeed       Ready    <none>                 4d2h   v1.21.0
saeed2      Ready    <none>                 17h    v1.21.0

As you see, localhost is the Master node.

I’m new to Kubernetes. I recently learnt how to use Docker and deploy an application with it, but I have some questions about deploying applications on Kubernetes, and I could not find the answers by googling.

Given the setup above, I want to run a simple WordPress application with MySQL.
I want to run WordPress on saeed and MySQL on saeed2. How can I achieve this?
I want to control everything from my Master node, i.e. create the .yml files there and tell Kubernetes which yaml file to run on which worker.

I saw this tutorial, but I think it covers just a single node.

Also, since K8s handles load balancing itself, is there an automatic way to run a yaml file on the Master node and have the Master deploy the workloads on different nodes by itself?

Thanks in advance

Hey @saeed
There are two methods:

  1. nodeName field

  2. nodeSelector field
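A minimal sketch of each option, using the node names from your cluster (the Pod names and the app=mysql label are just examples):

```yaml
# Option 1: nodeName bypasses the scheduler and pins the Pod to a named node.
apiVersion: v1
kind: Pod
metadata:
  name: wordpress-pinned   # example name
spec:
  nodeName: saeed          # run directly on this node
  containers:
    - name: wordpress
      image: wordpress:4.8-apache
---
# Option 2: nodeSelector lets the scheduler choose among nodes with matching labels.
# First label the node, e.g.:  kubectl label nodes saeed2 app=mysql
apiVersion: v1
kind: Pod
metadata:
  name: mysql-selected     # example name
spec:
  nodeSelector:
    app: mysql             # example label, assumed applied to saeed2 as above
  containers:
    - name: mysql
      image: mysql:5.6
```

Note that both fields go directly under the Pod’s spec, as siblings of containers.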

1 Like

Great, thanks for the reply.
I really appreciate the pointers to nodeName and nodeSelector.

For nodeName, I tried to follow Example: Deploying WordPress and MySQL with Persistent Volumes | Kubernetes, but I get an error. Here is what I did:


In both files, I changed the spec: containers: ... section to:

        nodeName: saeed
      - image: wordpress:4.8-apache

And the same for the mysql file.
Then I ran this command:

cat <<EOF >>./kustomization.yaml
resources:
  - mysql-deployment.yaml
  - wordpress-deployment.yaml
EOF

But I get this error:

error: accumulating resources: accumulation err='accumulating resources from 'mysql-deployment.yaml': yaml: line 21: did not find expected key': got file 'mysql-deployment.yaml', but '/home/pods/mysql-deployment.yaml' must be a directory to be a root

I also tried kubectl apply -f ./kustomization.yaml:

error: error validating "./kustomization.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false

Is there anything wrong with the documentation, or am I missing something?

Regarding the link Assigning Pods to Nodes | Kubernetes you provided, if I understand correctly: I have one master and two worker nodes. If I deploy an application, Kubernetes checks which node has more free resources and space, then pulls the image and runs the container on that node. Am I right? Or should I take other things into account to achieve this?

Hi @saeed, what command are you running when you get this?

error: accumulating resources: accumulation err='accumulating resources from 'mysql-deployment.yaml': yaml: line 21: did not find expected key': got file 'mysql-deployment.yaml', but '/home/pods/mysql-deployment.yaml' must be a directory to be a root

I tried running the configs you posted with the modifications you made and things seemed to work as expected with the following command.

kubectl apply -k ./
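One thing worth noting, and likely why the -f attempt failed: -k consumes a kustomization directory, while -f expects ordinary manifests that carry apiVersion and kind, which kustomization.yaml does not. Roughly:

```shell
# -k points at the directory containing kustomization.yaml
kubectl apply -k ./

# -f expects regular manifests, so point it at the deployment files instead
kubectl apply -f ./mysql-deployment.yaml -f ./wordpress-deployment.yaml
```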

Let me know, happy to help out more.

1 Like

Hi @macintoshprime, I only run this command:

kubectl apply -k ./

Sorry, I didn’t catch this earlier, but try changing the position of the nodeName key.

Like here:

      nodeName: saeed
      - image: wordpress:4.8-apache
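
For context, in the tutorial’s deployment files that placement looks roughly like this (trimmed): nodeName is a key under the Pod template’s spec, a sibling of containers, not an entry inside the containers list:

```yaml
spec:
  template:
    spec:
      nodeName: saeed
      containers:
        - image: wordpress:4.8-apache
          name: wordpress
```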

1 Like

Thanks, I changed it in both files, and now I got this message:

service/wordpress configured
service/wordpress-mysql unchanged
persistentvolumeclaim/wp-pv-claim created
deployment.apps/wordpress created
deployment.apps/wordpress-mysql created
The PersistentVolumeClaim "mysql-pv-claim" is invalid: 
* spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
        AccessModes: {"ReadWriteOnce"},
        Selector:    nil,
        Resources: core.ResourceRequirements{
                Limits: nil,
-               Requests: core.ResourceList{
-                       s"storage": {i: resource.int64Amount{value: 2147483648}, s: "2Gi", Format: "BinarySI"},
-               },
+               Requests: core.ResourceList{
+                       s"storage": {i: resource.int64Amount{value: 21474836480}, s: "20Gi", Format: "BinarySI"},
+               },
        VolumeName:       "",
        StorageClassName: nil,
        ... // 2 identical fields

* Forbidden: field can not be less than previous value

I cannot find any duplicate or identical names, so I’m not sure why it throws this error.
I also changed the capacity field to 2Gi to reduce the size for testing purposes, but the output above shows 20Gi, which is the default value in these two yml files.

I also do not see any newly running containers on my worker nodes.

You’ll want to delete the existing PVCs and create new ones (assuming nothing important is in them). Things should come online after that. I would take a look at the deploy or even the replicaset to see why the new pods aren’t coming online. It could be due to the PVCs or something else.
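
If it helps, a sketch of that cleanup (assuming nothing important is in the volumes; the claim names are from the tutorial files):

```shell
# Delete the old claims so they can be recreated with the new size
kubectl delete pvc mysql-pv-claim wp-pv-claim

# Re-apply and watch where the pods land
kubectl apply -k ./
kubectl get pods -o wide

# If pods still aren't coming up, inspect the Deployment's events
kubectl describe deployment wordpress-mysql
```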