Cluster information:
Kubernetes version: v1.16.4
Cloud being used: vSphere, on prem
Installation method: kubeadm
Host OS: Ubuntu 18.04
CNI and version: Canal 3.8
CRI and version: Docker 18.09
I encountered this problem when trying to secure the Kubernetes dashboard with Network Policies. At the time of writing, the dashboard consists of two pods: the dashboard itself and the metrics scraper. The dashboard periodically calls the metrics scraper, and thus needs access to it.
It appears that the dashboard accesses the metrics scraper via the API server proxy. It also appears that the pod and namespace selectors in a NetworkPolicy object do nothing to allow that type of access. I was able to use an ipBlock of 0.0.0.0/0 to grant access, but this defeats the purpose of having a NetworkPolicy. I would like to be much more specific about where I'm allowing access from.
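For reference, this is the shape of the ingress rule that did grant the proxied access (a sketch; it goes under spec in a NetworkPolicy like the one in the reproduction below):

```yaml
# Lets the proxied traffic through, but from anywhere,
# which defeats the point of having a policy at all.
ingress:
  - from:
      - ipBlock:
          cidr: 0.0.0.0/0
    ports:
      - protocol: TCP
        port: 80
```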
One peculiar thing is that when access is working, the incoming IP address in the logs of the metrics scraper is displayed as 10.244.0.0, which is the network address of the entire Canal subnet I'm using for Kubernetes networking. An ipBlock of 10.244.0.0/16 in the NetworkPolicy object also works, allowing access, but it is not much better. Specifying 10.244.0.0/32 (the exact IP address in the logs) does not allow access.
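Concretely, the only rule I have found that is any narrower than 0.0.0.0/0 is a CIDR covering the whole pod network (a sketch; 10.244.0.0/16 is the Canal pod CIDR in my cluster, substitute your own):

```yaml
# Works, but only because it spans the entire pod network.
ingress:
  - from:
      - ipBlock:
          cidr: 10.244.0.0/16   # note: 10.244.0.0/32, the logged address, does NOT work
    ports:
      - protocol: TCP
        port: 80
```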
Note: I was just looking for a link to the Canal page on the Calico website, but it has disappeared. Similarly, any mention of Canal has disappeared from the Kubernetes website. I'm guessing I should read that as it being deprecated. In any case, at this stage I do not know whether this is networking-provider related or not; my current guess is that it is not.
Below is how to reproduce this issue from scratch, that is, without relying on the dashboard. Any ideas on how to resolve it are greatly appreciated.
# Create namespace where all our test objects will reside. At the end we delete the whole namespace with everything in it
kubectl create namespace testing
# This is the service account that we will grant access to the api server proxy
kubectl create serviceaccount -n testing curl
# This role describes access to api server proxy for our nginx service, port 80
kubectl create role -n testing curl --verb=get --resource=services/proxy --resource-name=nginx:80
# Bind the account and the role above together
kubectl create rolebinding -n testing curl --role=curl --serviceaccount=testing:curl
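For reference, the two imperative commands above should be roughly equivalent to these manifests (a sketch of what kubectl generates; I have not diffed it against the dry-run output):

```yaml
# Roughly what `kubectl create role` / `kubectl create rolebinding` above produce.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: curl
  namespace: testing
rules:
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["nginx:80"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: curl
  namespace: testing
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: curl
subjects:
  - kind: ServiceAccount
    name: curl
    namespace: testing
```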
# Create an nginx pod
kubectl run nginx -n testing --image=nginx --labels="app=nginx" --generator=run-pod/v1
# Create a curl container to test connectivity from
kubectl run curl -n testing --serviceaccount=curl --image=curlimages/curl --labels="app=curl" --generator=run-pod/v1 sleep 999999
# Create service that exposes nginx (mainly for discovery purposes)
kubectl create service -n testing clusterip nginx --tcp=80:80
# Try accessing nginx from the curl container - it works
kubectl exec -n testing -it curl -- curl http://nginx
# And also via api server proxy - it works
kubectl exec -n testing -it curl -- sh -c 'curl -k "https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT_HTTPS/api/v1/namespaces/testing/services/nginx:80/proxy/" --header "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"'
# Look at the nginx logs. Note the source IP of the api server proxy call. In my case it's x.y.z.0
kubectl logs -n testing nginx
# Let's create and apply NetworkPolicy objects for the pods we just created
cat <<EOF > network-policy.yaml
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: nginx
  namespace: testing
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: curl
      ports:
        - protocol: TCP
          port: 80
EOF
kubectl apply -f network-policy.yaml
# At this stage the direct nginx call still works:
kubectl exec -n testing -it curl -- curl http://nginx
# But the api server proxy call no longer does
kubectl exec -n testing -it curl -- sh -c 'curl -k "https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT_HTTPS/api/v1/namespaces/testing/services/nginx:80/proxy/" --header "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"'
# clean up
kubectl delete namespace testing