I have a problem with a service account. In one cluster, if I create a service account with a RoleBinding that grants access only to Secrets in a specific namespace, it works as expected (k3d, Kubernetes v1.18.6). In another cluster (kubespray, Kubernetes v1.17.7), exactly the same service account with the same RoleBinding has access to everything. Is this a common issue, and do you know where to look?
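For reference, this is the kind of check that makes the difference visible (a minimal sketch; kubeconfig-sa is the file the script further down generates):

kubectl --kubeconfig kubeconfig-sa auth can-i --list --namespace ssl-example-com
kubectl --kubeconfig kubeconfig-sa auth can-i --list --namespace default

On the k3d cluster the list is limited to get/list on secrets in ssl-example-com; on the kubespray cluster the same token appears to be allowed everything.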
This is how I create the account and role:
apiVersion: v1
kind: Namespace
metadata:
  labels:
    name: ssl-example-com
  name: ssl-example-com
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-com-sa
  namespace: ssl-example-com
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: ssl-example-com
  name: ssl-example-com-secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example-com-sa-rolebinding
  namespace: ssl-example-com
subjects:
- kind: ServiceAccount
  name: example-com-sa
  namespace: ssl-example-com
roleRef:
  kind: Role
  name: ssl-example-com-secret-reader
  apiGroup: rbac.authorization.k8s.io
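Impersonation gives the same picture without needing the generated kubeconfig (a sketch; the first check should be allowed, the second denied):

kubectl auth can-i get secrets \
  --as=system:serviceaccount:ssl-example-com:example-com-sa \
  --namespace ssl-example-com
kubectl auth can-i get secrets \
  --as=system:serviceaccount:ssl-example-com:example-com-sa \
  --namespace default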
This is the script I use for generating the kubeconfig:
# Update these to match your environment
SERVICE_ACCOUNT_NAME=example-com-sa
CONTEXT=$(kubectl config current-context)
NAMESPACE=ssl-example-com
NEW_CONTEXT=ssl-example-com
KUBECONFIG_FILE="kubeconfig-sa"
SECRET_NAME=$(kubectl get serviceaccount ${SERVICE_ACCOUNT_NAME} \
  --context ${CONTEXT} \
  --namespace ${NAMESPACE} \
  -o jsonpath='{.secrets[0].name}')
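# Note: on these Kubernetes versions (1.17/1.18) the service account controller
# creates a token Secret automatically and lists it in .secrets, so
# .secrets[0].name resolves on both clusters.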
TOKEN_DATA=$(kubectl get secret ${SECRET_NAME} \
  --context ${CONTEXT} \
  --namespace ${NAMESPACE} \
  -o jsonpath='{.data.token}')
TOKEN=$(echo ${TOKEN_DATA} | base64 -d)
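# TOKEN is a JWT; its payload carries claims such as
# kubernetes.io/serviceaccount/namespace and
# kubernetes.io/serviceaccount/service-account.name, which identify the
# account the token authenticates as.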
# Create dedicated kubeconfig
# Create a full copy
kubectl config view --raw > ${KUBECONFIG_FILE}.full.tmp

# Switch working context to correct context
kubectl --kubeconfig ${KUBECONFIG_FILE}.full.tmp config use-context ${CONTEXT}

# Minify
kubectl --kubeconfig ${KUBECONFIG_FILE}.full.tmp \
  config view --flatten --minify > ${KUBECONFIG_FILE}.tmp

# Rename context
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
  rename-context ${CONTEXT} ${NEW_CONTEXT}

# Create token user
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
  set-credentials ${CONTEXT}-${NAMESPACE}-token-user \
  --token ${TOKEN}

# Set context to use token user
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
  set-context ${NEW_CONTEXT} --user ${CONTEXT}-${NAMESPACE}-token-user

# Set context to correct namespace
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
  set-context ${NEW_CONTEXT} --namespace ${NAMESPACE}

# Flatten/minify kubeconfig
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
  view --flatten --minify > ${KUBECONFIG_FILE}

# Remove tmp files
rm ${KUBECONFIG_FILE}.full.tmp ${KUBECONFIG_FILE}.tmp
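The generated file can then be used to confirm the scoping, e.g.:

kubectl --kubeconfig kubeconfig-sa get secrets                     # expected: allowed
kubectl --kubeconfig kubeconfig-sa get secrets --namespace default # expected: Forbidden
kubectl --kubeconfig kubeconfig-sa get pods                        # expected: Forbidden

On the kubespray cluster none of these are denied, which is the "access to everything" behaviour I'm seeing.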
Cluster information:
Kubernetes version: v1.18.6 (k3d) and v1.17.7 (kubespray)
Cloud being used: on-prem (both clusters)