I am not sure if discussions about this are already happening somewhere - it is increasingly complicated to make sense of the community structure these days. But I can see at least one similar (and lengthy) discussion over here: Volumes are created in container with root ownership and strict permissions · Issue #2630 · kubernetes/kubernetes · GitHub.
I am, basically, representing people who want to use containers for local development - to be able to run various tools locally in a predictable, sandboxed environment. For this, we want to be able to mount certain places of interest (such as ~/.aws, ~/.kube and so on) from the host into our containers, so that we can run tools such as aws/helm/kubectl just as if we were running them on the host, with all the same access the host has. With this powerful pattern, a great deal of local tooling can be built to support such workflows. As a matter of fact, I've open sourced such a tool, but I am obviously far from alone in doing something like this - for example, there is a great VS Code plugin with similar functionality.
The reason I want to use Kubernetes and not dockerd/containerd directly for something like that is that Kubernetes provides a layer of abstraction, so my tooling does not have to think about the implementation details of Docker/Portainer/Rancher/etc. It also provides additional functionality, such as ConfigMaps and Secrets.
In a nutshell, I want to be able to schedule pods like so:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: runtainer-297b5df7
  namespace: default
spec:
  volumes:
    - name: runtainer-c8e3292f
      hostPath:
        path: /Users/me/.aws
  containers:
    - name: runtainer
      image: <some image>
      command:
        - cat
      volumeMounts:
        - name: runtainer-c8e3292f
          mountPath: /home/me/.aws
```
I then want to be able to kubectl exec into that container and run the AWS command line inside it, with all the same access my host has, via the ~/.aws mount. Or even better - I want a container with an AWS federation helper to populate my ~/.aws on the host, so that the host as well as other containers may use it later. As you can see, this approach provides a great deal of portability without the need to install anything directly on the host. As a matter of fact, I like to think of my runtainer tool as a portable kind of brew that, instead of installing tools on the host, just grabs and runs them in containers:
```
runtainer hashicorp/terraform apply
```
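Roughly speaking (this is just a sketch of how I imagine the translation, not the exact pod my tool produces), that one command would turn into a pod much like the example above, with the image and arguments taken from the CLI:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: runtainer-<random suffix> # generated name, illustrative
  namespace: default
spec:
  volumes:
    - name: runtainer-aws # illustrative volume name
      hostPath:
        path: /Users/me/.aws
  containers:
    - name: runtainer
      image: hashicorp/terraform # image from the CLI invocation
      args:
        - apply # remaining CLI arguments are passed to the image's entrypoint
      volumeMounts:
        - name: runtainer-aws
          mountPath: /home/me/.aws
```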
The problem arises when the container I am using is not running as root. Obviously, in such a case the container will simply not have access to any of these mounts, as the in-container uid:gid will not match the ownership metadata of the mount. This problem can partially be addressed by using fsGroup:
```yaml
securityContext:
  supplementalGroups:
    - 20 # my primary gid on the host
  fsGroup: 20 # my primary gid on the host
```
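For reference, supplementalGroups and fsGroup are pod-level settings, so in the example pod above they would sit directly under spec (a sketch, assuming 20 really is my primary gid on the host):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: runtainer-297b5df7
  namespace: default
spec:
  securityContext:
    supplementalGroups:
      - 20 # my primary gid on the host
    fsGroup: 20 # my primary gid on the host
  volumes:
    - name: runtainer-c8e3292f
      hostPath:
        path: /Users/me/.aws
  containers:
    - name: runtainer
      image: <some image>
      command:
        - cat
      volumeMounts:
        - name: runtainer-c8e3292f
          mountPath: /home/me/.aws
```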
So then my hostPath mount relies on group ownership, which kind of makes things better, but the problem is still there, and it forces me to do certain things that may be deemed insecure. For instance, I need to allow g+rw on my ~/.aws on the host, which is not secure. And some tools will simply not allow that in principle - an SSH client with a ~/.ssh mount, for example, refuses to use private keys that are group-readable.
One way to solve this would be to allow a securityContext.fsUser option as well, so that it can be pre-set to the uid of the container user - this was already suggested in the ticket I mentioned above.
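Purely to illustrate the idea - fsUser does not exist today, and 1000 here is just a placeholder for whatever uid the container image happens to run as:

```yaml
securityContext:
  fsUser: 1000 # hypothetical field: apply this uid to mounted volumes, analogous to fsGroup
  fsGroup: 20 # my primary gid on the host
```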
Another way would be to modify runAsUser and runAsGroup - to be honest, I find the current implementation hardly usable in most cases. Most containers will just not work if the uid:gid does not have a home folder and/or is not in /etc/passwd. A new or modified set of options would not just set the uid:gid for the process, but actually modify the current container user on the fly to the new uid:gid before running its ENTRYPOINT.
Most similar discussions end up suggesting changing the user inside the container. However (and this is why I started with a use case instead of the actual problem) - I do not want to run my own containers. I wouldn't have this problem with my own containers. I want to run containers that other people pre-built and published, containers I have no control over. Asking me to rebuild, let's say, the official Terraform container - and every single tag of it - just for the sake of changing the uid:gid is simply unreasonable, not to mention that every user of my tooling would have to do this for themselves, as each will have a different uid:gid on the host.
I am kind of lost with this whole complicated process around Kubernetes governance - it doesn't seem I am allowed to suggest features on GitHub anymore, nor do I have time to get familiar with this whole new KEP thing. To me that sounds like a barrier to the continuous enhancement process rather than a help, but what do I know, so this is the best place I have found so far to share my thoughts. Any ideas how to take this any further?