Hello All,

We are intermittently seeing an issue: when a node is rebooted and rejoins the cluster, NodePort functionality does not work through the rebooted node. After restarting the kube-proxy pod on that node (by deleting the pod), everything works as expected.

The node was in Ready state and was accepting workload pods. This has been observed on worker nodes.
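When this happens again, it may help to compare what kube-proxy has actually programmed on the rebooted node before deleting the pod. A diagnostic sketch (the pod name `kube-proxy-xxxxx`, and the NodePort value 30080, are placeholders, not values from this cluster):

```shell
# List kube-proxy pods and find the one scheduled on the rebooted node
kubectl -n kube-system get pods -o wide | grep kube-proxy

# Check its logs for sync errors (pod name is a placeholder)
kubectl -n kube-system logs kube-proxy-xxxxx

# On the rebooted node itself: confirm the NodePort rules were programmed
# (30080 is a placeholder NodePort)
sudo iptables-save | grep -E 'KUBE-NODEPORTS|30080'

# kube-proxy's own health endpoint on the node
curl -s http://127.0.0.1:10256/healthz
```

If the `KUBE-NODEPORTS` rules are missing after the reboot but appear after the pod restart, that points at kube-proxy failing its initial rule sync rather than at the service definition.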
Cluster information:

Kubernetes version:
Cloud being used: bare metal
Installation method: kubeadm
Host OS: CentOS 7
CNI and version: Juniper Contrail
CRI and version:

```
Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.20.2
RuntimeApiVersion: v1alpha1
```
Test pod YAML:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: node-port
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
  namespace: node-port
data:
  nginx.conf: |
    user nginx;
    worker_processes 1;
    error_log /var/log/nginx/error.log warn;
    pid /var/run/nginx.pid;
    events {
        worker_connections 1024;
    }
    http {
        include /etc/nginx/mime.types;
        default_type application/octet-stream;
        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';
        access_log /var/log/nginx/access.log main;
        sendfile on;
        #tcp_nopush on;
        keepalive_timeout 65;
        server {
            listen 80;
            listen [::]:80;
            server_name localhost;
            #charset koi8-r;
            #access_log /var/log/nginx/host.access.log main;
            location / {
                return 200 '{"hostname": "$hostname"}';
            }
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root /usr/share/nginx/html;
            }
        }
        server {
            listen 8443;
            listen [::]:8443;
            server_name localhost;
            #charset koi8-r;
            #access_log /var/log/nginx/host.access.log main;
            location / {
                return 200 '{"hostname": "$hostname"}';
            }
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root /usr/share/nginx/html;
            }
        }
    }
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: node-port
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      tolerations:
      - key: "node.kubernetes.io/unreachable"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 2
      - key: "node.kubernetes.io/not-ready"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 2
      containers:
      - name: nginx
        image: nginx:1.18.0
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
          readOnly: true
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf
          items:
          - key: nginx.conf
            path: nginx.conf
```
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-np-test
  namespace: node-port
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: http80
  - port: 8443
    protocol: TCP
    targetPort: 8443
    name: http8443
  selector:
    app: nginx
```
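Once the manifests are applied, the allocated NodePorts can be read back and exercised against the rebooted node directly, which isolates the problem to that node's kube-proxy rules. A sketch (the node IP 192.0.2.10 and port 30080 are placeholders):

```shell
# Show the NodePort assigned to each port of the service
kubectl -n node-port get svc nginx-np-test \
  -o jsonpath='{range .spec.ports[*]}{.name}={.nodePort}{"\n"}{end}'

# Request through the rebooted node (IP and port are placeholders);
# a healthy path returns the pod hostname from the nginx config above
curl -s http://192.0.2.10:30080/
```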
```yaml
apiVersion: v1
kind: Service
metadata:
  name: node-port-session-affinity
  namespace: node-port
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: http80
  - port: 8443
    protocol: TCP
    targetPort: 8443
    name: http8443
  sessionAffinity: ClientIP
  selector:
    app: nginx
```
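The second service differs only in `sessionAffinity: ClientIP`. Since the nginx config above echoes `$hostname`, repeated requests from the same client should keep returning the same pod hostname once more than one replica is behind the service. A sketch (address and port are placeholders):

```shell
# With sessionAffinity: ClientIP, the hostname in the response should
# stay constant across requests from the same source IP (placeholders)
for i in 1 2 3 4 5; do
  curl -s http://192.0.2.10:31443/
  echo
done
```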
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-replica-deployment
  namespace: node-port
spec:
  selector:
    matchLabels:
      app: nginx1
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      tolerations:
      - key: "node.kubernetes.io/unreachable"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 2
      - key: "node.kubernetes.io/not-ready"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 2
      containers:
      - name: nginx
        image: nginx:1.18.0
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
          readOnly: true
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf
          items:
          - key: nginx.conf
            path: nginx.conf
```
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-backup-svc
  namespace: node-port
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: http180
  - port: 8443
    protocol: TCP
    targetPort: 8443
    name: http18443
```
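For reference, the workaround mentioned at the top (restarting kube-proxy on the affected node) can be scripted; the kube-proxy DaemonSet recreates the pod, which resyncs the iptables rules. The node name `worker-1` and pod name `kube-proxy-xxxxx` are placeholders:

```shell
# Find the kube-proxy pod scheduled on the rebooted node
kubectl -n kube-system get pods -o wide \
  --field-selector spec.nodeName=worker-1 | grep kube-proxy

# Delete it; the kube-proxy DaemonSet recreates it immediately
kubectl -n kube-system delete pod kube-proxy-xxxxx
```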