I managed to solve this a while ago after more research and some help on the Calico forums. I did not post it here earlier since the deafening silence was rather demotivating, but I hope it will help the next person who stumbles upon this problem.
The solution was to whitelist the Calico IP address of each node (since the proxied request can come from any of them):
# Create a NetworkPolicy template for the pods we just created
# (the EOF delimiter is quoted so that $MASTER1..3 stay literal for envsubst)
cat <<'EOF' > network-policy-template.yaml
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: nginx
  namespace: testing
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: curl
      ports:
        - protocol: TCP
          port: 80
    # Access via the Kubernetes API server proxy
    - from:
        - ipBlock:
            cidr: $MASTER1/32
        - ipBlock:
            cidr: $MASTER2/32
        - ipBlock:
            cidr: $MASTER3/32
      ports:
        - protocol: TCP
          port: 80
EOF
# Get the Calico tunnel IP address of each node
export MASTER1=$(kubectl get node master-01 -ojson | jq -r '.metadata.annotations."projectcalico.org/IPv4IPIPTunnelAddr"')
export MASTER2=$(kubectl get node master-02 -ojson | jq -r '.metadata.annotations."projectcalico.org/IPv4IPIPTunnelAddr"')
export MASTER3=$(kubectl get node master-03 -ojson | jq -r '.metadata.annotations."projectcalico.org/IPv4IPIPTunnelAddr"')
# Substitute the addresses into the policy template and apply the resulting policy
envsubst < network-policy-template.yaml > network-policy.yaml
kubectl apply -f network-policy.yaml
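# (Optional) sanity-check that envsubst actually filled in the addresses,
# both in the rendered file and in the live object
grep cidr network-policy.yaml
kubectl describe networkpolicy nginx -n testing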
# Now the API server proxy call works
kubectl exec -n testing -it curl -- sh -c 'curl -k "https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT_HTTPS/api/v1/namespaces/testing/services/nginx:80/proxy/" --header "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"'
Here are the threads that I opened on Stack Exchange and the Calico forums that ultimately led to this solution:
- What is `tunl0@NONE` and how is its IP assigned? - Open Source Calico Help - Discuss Calico
- nginx - What does it mean, when remote address is x.y.0.0 in web server logs? - Server Fault
The `.0` concern in my initial post turned out to be a red herring. Inside the Calico CIDR it is just a normal IP address; it does not necessarily have to end in `.0`, it just happened that Calico created one like that in my case. I've seen it create non-`.0` addresses for nodes since then.
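For reference, here is one way to list the tunnel address of every node at once; it reads the same annotation as the exports above, with the dots in the annotation key escaped for jsonpath:
# Print each node's name and its Calico IPIP tunnel address
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.projectcalico\.org/IPv4IPIPTunnelAddr}{"\n"}{end}'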
I'll copy the explanation from the Calico thread linked above as to why these IP addresses are necessary:
> The Kubernetes API Server runs in the host’s network namespace (either as a pod, or just as a binary, depending on the distro). It isn’t a regularly networked pod with its own per-pod IP address.
>
> When a process in the host’s network namespace (API Server or any other process) connects to a pod, Calico knows it needs to encapsulate the packet in IPIP before sending it to the remote host. It chooses the tunnel address as the source so that we ensure that the remote host knows to encapsulate the return packets. In IPIP mode, the underlying network doesn’t know what to do with packets that have pod IP addresses on them, and might drop them. So, by encapsulating we ensure the return packets are delivered.
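As a side note, if you want to confirm that your cluster actually runs in IPIP mode (which is what makes the tunnel address the source here), the IP pool shows it:
# IPIPMODE reads Always or CrossSubnet when IPIP encapsulation is enabled
calicoctl get ippool -o wide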
The IP addresses that I got from the node annotations above can also be obtained by running `calicoctl` for specific nodes:
calicoctl get node master-01 -ojson | jq -r '.spec.bgp.ipv4IPIPTunnelAddr'
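And if you have more nodes than you care to copy-paste exports for, the same query works in a loop (the master-01..03 node names are from my cluster, adjust to yours):
# Print the tunnel address of each listed node
for n in master-01 master-02 master-03; do
  calicoctl get node "$n" -ojson | jq -r '.spec.bgp.ipv4IPIPTunnelAddr'
done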