Network policy - Allow only one IP

network

#1

Hi there,
I want to expose my pods with a nodeport service but only on the ip : 192.168.178.198 and block the rest. For this, i use a network policy but it doesn’t work. Can you help me pls ? :slight_smile:

(screenshot of the NetworkPolicy attached)
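For reference, a minimal policy of the kind described above would look something like this (the pod label `app: web` is a placeholder, not taken from the thread):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-only-one-ip
spec:
  # Placeholder selector; match it to your pods' actual labels.
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Ingress
  ingress:
    - from:
        # Allow ingress only from the single client IP.
        - ipBlock:
            cidr: 192.168.178.198/32
```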


#2

I think the best practice for something like that would be to put a LoadBalancer in front of your cluster.

If you can do that, possibly add an external firewall rule that blocks access to the other nodes.

Assuming the IP you provided is one of the nodes.


#3

Hi,
I use an ingress controller but not a load balancer. I saw that I could use annotations to do it, but I'd like to try to do it with network policies.

Looking around a little, I realized the policy wasn't matching because once traffic enters my cluster, my client IP gets rewritten instead of being preserved. But I don't see how to fix this.


#4

I just want to make sure I am looking at this right. You want to expose the services (via NodePort) on IP 192.168.178.198 (one of the cluster nodes) and only that IP, regardless of where the pod is within the cluster?


#5

Nope, I want to expose my service with a NodePort (:30008).

IP node 1: 192.168.178.191
IP node 2: 192.168.178.192

My client (192.168.178.198) can type <node IP>:30008 in a web browser and reach my pods. If another client (192.168.178.X) tries the same thing, it can't reach the pods :slight_smile:


#6

Oh awesome, then yeah, network policies should be good for that. What CNI are you using?

I have meetings for the next few hours; I'll take another look at your rules, hopefully during lunch.


#7

Thx for your help !
I’m using Calico :slight_smile:


#8

OK good, Calico supports network policies. So the NP looks good, unless I'm missing a typo or something. Have you tried any other policies, just to test whether they are enforced at all? Something overly strict like https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/01-deny-all-traffic-to-an-application.md
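The linked recipe boils down to a policy that selects the app's pods and allows no ingress at all; a minimal sketch (the `app: web` label is the recipe's placeholder, swap in your own):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-deny-all
spec:
  # Applies to all pods with this label.
  podSelector:
    matchLabels:
      app: web
  # An empty ingress list allows no inbound traffic.
  ingress: []
```

If this policy blocks traffic to the pods, enforcement itself is working, which narrows the problem down to the ipBlock match.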


#9

Yeah, I already tried that one, and it worked. But it just denies everything, so that's expected :slight_smile:
The problem seems simple to me: when I try to connect to my pod, my source IP gets translated by the service. So the network policy works, but the source IP it sees is the translated one :confused:


#10

I took another look at the docs and came across this passage, which sounds like what you are dealing with:

ipBlock : This selects particular IP CIDR ranges to allow as ingress sources or egress destinations. These should be cluster-external IPs, since Pod IPs are ephemeral and unpredictable.

Cluster ingress and egress mechanisms often require rewriting the source or destination IP of packets. In cases where this happens, it is not defined whether this happens before or after NetworkPolicy processing, and the behavior may be different for different combinations of network plugin, cloud provider, Service implementation, etc.

I haven't run into that issue myself, so I'm not aware of workarounds at the NetworkPolicy level. Blocking at the ingress level might work.
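One option not raised in the thread: Kubernetes documents `externalTrafficPolicy: Local` as the way to preserve the client source IP on a NodePort Service (the trade-off is that a node only accepts traffic if it runs a matching pod). A sketch, with placeholder names and ports except the NodePort from the thread:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # placeholder name
spec:
  type: NodePort
  # Skips the SNAT that kube-proxy does for "Cluster" (the default),
  # so the pod and NetworkPolicy see the real client IP.
  externalTrafficPolicy: Local
  selector:
    app: web              # placeholder label
  ports:
    - port: 80            # placeholder service port
      targetPort: 80      # placeholder container port
      nodePort: 30008
```

With the source IP preserved, an ipBlock rule for 192.168.178.198/32 has a chance of matching.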


#11

Hi,
I'm sorry for the slow response; my boss assigned me another task and I didn't have time to work on my network policy again.
I've already made an Ingress with an annotation.
It works, but unfortunately my boss absolutely doesn't want me to use annotations (being an intern, I can only keep quiet)…


#12

No worries, work happens :slight_smile:

Curious, what are his reasons for wanting to avoid annotations? If you can't disclose, I understand.