Why am I getting a "Read-only file system" error from Nginx in my container?

Dear K8S community Team,

I am getting this error message from nginx when I deploy my application pod. My application, an Angular 6 app, is hosted inside an nginx server, which is deployed as a Docker container inside EKS.

My application is configured with a "read-only container filesystem", but I am also using an ephemeral mounted volume of type "emptyDir" in combination with the read-only filesystem.

So I am not sure of the reason for the following error:

2019/04/02 14:11:29 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (30: Read-only file system)
nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (30: Read-only file system)

my deployment.yaml

```
spec:
      volumes:
        - name: tmp-volume
          emptyDir: {}
        # Pod Security Context
      securityContext:
        fsGroup: 2000
      containers:
      - name: {{ .Chart.Name }}
        volumeMounts:
        - mountPath: /tmp
          name: tmp-volume
        image: "{{ .Values.image.name }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        securityContext:
          readOnlyRootFilesystem: true
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
        ports:
          - name: http
            containerPort: 80
            protocol: TCP
...
```

nginx.conf


```
...
http {

  include           /etc/nginx/mime.types;
  default_type      application/octet-stream;

  # Turn off the bloody buffering to temp files
  proxy_buffering off;

  sendfile          off;
  keepalive_timeout 120;

  server_names_hash_bucket_size 128;

  # These two should be the same or nginx will start writing 
  #  large request bodies to temp files
  client_body_buffer_size 10m;
  client_max_body_size    10m;
...
```

Am I missing something, or is "/var/cache/nginx/client_temp" on the read-only filesystem?

You are mounting only /tmp, right?

You are right! Now I am redirecting nginx to create its temp files there, on my mounted volume:

nginx.conf


```
...
http {

  client_body_temp_path /tmp 1 2;
  proxy_temp_path       /tmp 1 2;
  fastcgi_temp_path     /tmp 1 2;
  uwsgi_temp_path       /tmp 1 2;
  scgi_temp_path        /tmp 1 2;

...
  server {
        listen 0.0.0.0:80;
...
```
but now I am getting this error:

2019/04/02 15:22:43 [emerg] 1#1: bind() to 0.0.0.0:80 failed (13: Permission denied)

nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied)

Again, here is my deployment.yaml:


```
      volumes:
        - name: tmp-volume
          emptyDir: {}
        # Pod Security Context
      securityContext:
        fsGroup: 2000
      containers:
      - name: {{ .Chart.Name }}
        volumeMounts:
        - mountPath: /tmp
          name: tmp-volume
        image: "{{ .Values.image.name }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        securityContext:
          readOnlyRootFilesystem: true
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
        ports:
          - name: http
            containerPort: 80
            protocol: TCP
```

Permission denied when binding to port 80 is probably because nginx is running as a non-root user. You would need to use another port or grant the permission, I think.

Which permissions do I need to give, and how?

If you are not running your containers as root, you probably need to set a capability if you need port 80. Like this: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-capabilities-for-a-container

But please note that you probably don't need it: you can have a Service expose port 80 and route it to your pod on port 8080. That is simpler and maybe more secure 🙂
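As a sketch, such a Service could look roughly like this (the `app` label and metadata name are made up; they must match your actual pod labels):

```yaml
# Hypothetical Service: clients connect on port 80,
# traffic is routed to port 8080 inside the pod.
apiVersion: v1
kind: Service
metadata:
  name: my-angular-app      # illustrative name
spec:
  selector:
    app: my-angular-app     # must match your pod template's labels
  ports:
    - name: http
      port: 80              # port exposed by the Service
      targetPort: 8080      # unprivileged port nginx listens on
      protocol: TCP
```

With this in place, nginx can bind an unprivileged port and you can drop the NET_BIND_SERVICE capability entirely.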

There are a few things to keep in mind if you are running a read-only filesystem as a non-root user.
Ports need to be 1024 or higher. E.g. port 80 will not work, but 8080 will.
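In practice that means changing the listen directive in your server block to an unprivileged port, for example:

```nginx
server {
    # Bind an unprivileged port so a non-root nginx can start
    listen 8080;
    ...
}
```

Remember to update `containerPort` in your deployment (and the Service `targetPort`, if you use one) to match.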

For nginx, another thing to keep in mind is that you need to alter your nginx.conf. Because the nginx image writes the PID to /var/run/ (I think this is correct), as well as the logs, you are going to get some errors initially. So you should probably redirect these to a /tmp folder.
Here is my nginx.conf file:

```
user  nginx;
worker_processes  1;

error_log  /tmp/error.log warn;
pid        /tmp/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /tmp/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    # Temporary directories for kubernetes "readonlyfilesystem"
    client_body_temp_path /tmp/nginx-client-body;
    proxy_temp_path       /tmp/nginx-proxy;
    fastcgi_temp_path     /tmp/nginx-fastcgi;
    uwsgi_temp_path       /tmp/nginx-uwsgi;
    scgi_temp_path        /tmp/nginx-scgi;

    include /etc/nginx/conf.d/*.conf;
}
```

And this is my Dockerfile (I wanted to make sure that the directory /etc/nginx/certs was there at all times, rather than only during mounting):

```
FROM nginx:1.16.1

# Create the certificate directory
RUN mkdir -p /etc/nginx/certs

# Replace default nginx.conf
COPY conf/nginx.conf /etc/nginx/nginx.conf
```

So basically you need to mount three directories (at least in my case):

  1. Mount the /tmp directory so nginx has somewhere to write anything the system needs to write.
  2. I mounted a directory just for the certs in /etc/nginx/certs, because I'm using nginx as a TLS reverse proxy too.
  3. I mounted a directory for my conf files – or perhaps you can just mount the file that you have and replace the default.conf that comes with nginx in /etc/nginx/conf.d/.
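Those three mounts could look roughly like this in the pod spec. This is a sketch only: the Secret name `nginx-certs` and ConfigMap name `nginx-conf` are hypothetical placeholders for whatever holds your certs and conf files.

```yaml
      volumes:
        - name: tmp-volume
          emptyDir: {}                 # scratch space: temp files, PID, logs
        - name: certs-volume
          secret:
            secretName: nginx-certs    # hypothetical Secret with the TLS cert/key
        - name: conf-volume
          configMap:
            name: nginx-conf           # hypothetical ConfigMap with default.conf
      containers:
      - name: nginx
        image: nginx:1.16.1
        volumeMounts:
        - mountPath: /tmp
          name: tmp-volume
        - mountPath: /etc/nginx/certs
          name: certs-volume
          readOnly: true
        - mountPath: /etc/nginx/conf.d # replaces the bundled default.conf
          name: conf-volume
          readOnly: true
```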