Hi all, can anyone help me with my PodSecurityPolicy? I created a new cluster (1.15.4) with kubeadm and applied this PSP to allow all pods in the kube-system namespace to run:
# Should grant access to very few pods, i.e. kube-system system pods and possibly CNI pods
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    # See https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
  name: privileged
spec:
  allowedCapabilities:
  - '*'
  allowPrivilegeEscalation: true
  fsGroup:
    rule: 'RunAsAny'
  hostIPC: true
  hostNetwork: true
  hostPID: true
  hostPorts:
  - min: 0
    max: 65535
  privileged: true
  readOnlyRootFilesystem: false
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  volumes:
  - '*'
---
# Cluster role which grants access to the privileged pod security policy
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: privileged-psp
rules:
- apiGroups:
  - policy
  resourceNames:
  - privileged
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
# Role binding for kube-system - allow nodes and kube-system service accounts - should take care of CNI i.e. flannel running in the kube-system namespace
# Assumes access to the kube-system namespace is restricted
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kube-system-psp
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: privileged-psp
subjects:
# For the kubeadm kube-system nodes
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
# For all service accounts in the kube-system namespace
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:kube-system
After that I installed nginx-ingress into a new namespace with the official Helm chart. The chart includes its own PSP for the nginx controller and the default backend, but Kubernetes selects the privileged PSP for those pods.
Is there a way to find out why the privileged PSP is chosen? Thanks!
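For reference, the admission controller records which policy validated a pod in the `kubernetes.io/psp` annotation, so one way to see which PSP was actually applied (pod name and namespace below are placeholders, adjust to your release) is:

```shell
# Print the PSP that admitted the pod; the annotation is set by the
# PodSecurityPolicy admission controller. Pod and namespace names are placeholders.
kubectl get pod nginx-ingress-controller-xxxxx -n ingress-nginx \
  -o jsonpath='{.metadata.annotations.kubernetes\.io/psp}'
```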
Cluster information:
Kubernetes version: 1.15.4
Cloud being used: bare-metal
Installation method: kubeadm
Host OS: Ubuntu 18.04 LTS
CNI and version: Weave 2.5.2
CRI and version: Docker 18.09
My guess (it happened to me, at least :)) is that the other PSPs would need to mutate the pod spec and the privileged one does not, so that's why it is applied. For example, if the restricted PSP does not allow running as root, the image is built to run as root, and the pod security context does not specify runAsNonRoot, then a mutation is needed. And if the pod has RBAC access to another PSP that needs no mutation (as the link about ordering says), it will use that one.
It can also be because of the second ordering rule: the privileged PSP's name sorts alphabetically before the restricted one's, and that's why it is chosen (so prefixing PSP names with numbers is, IMHO, the safest way).
In other words, policy ordering is complicated. But keep those rules in mind; hopefully it's one of the things I mentioned (the most common in my experience :)).
kubectl auth can-i use psp/privileged --as=system:serviceaccount:default:default
This returns "yes" (even though this PSP is only bound in the kube-system namespace), along with the warning: "Warning: resource 'podsecuritypolicies' is not namespace scoped in group 'extensions'"
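For what it's worth, the same check can be pointed at the service account the ingress controller actually runs under; the namespace and account name below are assumptions, adjust them to your Helm release:

```shell
# Check whether the ingress controller's service account may use the privileged PSP.
# "ingress-nginx" namespace and "nginx-ingress" service account are placeholders.
kubectl auth can-i use podsecuritypolicy/privileged \
  --as=system:serviceaccount:ingress-nginx:nginx-ingress
```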
Sorry, I'm on my phone and can't really see/try these.
One thing I don't follow, though, is why the pod has permission to use the privileged PSP at all. Can you check the RBAC role bindings you have? It really shouldn't. Removing the permission to use the privileged PSP will probably fix the issue.
Another path (but you really need to fix the usage of the privileged PSP; it defeats the purpose of having several PSPs if everything can use privileged! :-D) is to try kube-psp-advisor (never tried it myself!): https://github.com/sysdiglabs/kube-psp-advisor. It might be able to generate the PSP you want/need for the pod, so you can diff it against the one you have, or see which fields in the deployment yaml are not specified (and therefore require a mutation, causing privileged to be used instead).
Okay, I analyzed the RBAC problems and now have the following situation (testing with a non-cluster-admin user):
$ kubectl auth can-i use psp/10-default
yes
$ kubectl auth can-i use psp/99-privileged
no
So this part seems to work now. But if I create a pod with this user, Kubernetes still uses the 99-privileged PSP when I request behavior not allowed by the 10-default PSP. I expected an error message instead, since I'm not allowed to use the privileged PSP.
Yes, if it's being used then it has permission; something in RBAC is allowing it.
I think the user you are using with kubectl might not be the same one creating the resources? Some Kubernetes component (a controller, for example) might be creating the pod rather than the user. Not sure how you are doing it.
Also, take into account that you need to re-create the pods after changing a PSP for it to take effect, as PSP is an admission controller (so it is enforced at creation time).
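One way to force that re-admission, assuming the workload is a Deployment (names below are placeholders), is a rollout restart, which is available from kubectl 1.15 on:

```shell
# Recreate the pods so the PSP admission controller re-evaluates them
# against the current policies. Deployment and namespace names are placeholders.
kubectl rollout restart deployment/nginx-ingress-controller -n ingress-nginx
```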
I found the issue now: there was a ClusterRoleBinding which granted too many permissions to ServiceAccounts. Now everything works as expected.
Thanks for your help and time @rata.