Hello,
I want to change the IP range, because there are other servers on my network that use the same range. With my current config I can't reach them, because the traffic gets routed internally instead of out to the LAN.
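For illustration, the "routes internally" behaviour can be reproduced with Python's stdlib `ipaddress` module; the LAN server address below is a hypothetical example, not one of my real servers:

```python
import ipaddress

# hypothetical LAN server address and the new podSubnet from this post
lan_server = ipaddress.ip_address("10.150.20.5")
pod_subnet = ipaddress.ip_network("10.150.16.0/20")

# if a LAN address falls inside the pod subnet, pod traffic to it is
# delivered through the overlay instead of out to the real server
print(lan_server in pod_subnet)  # True
```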
So I want to create a new cluster with kubeadm using the new pod range:

```yaml
apiServer:
  certSANs:
  - 10.150.x.x
  - 10.150.x.x
  - 10.150.x.x
  - 127.0.0.1
  extraArgs:
    apiserver-count: "3"
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:
    caFile: ""
    certFile: ""
    endpoints:
    - http://10.150.x.x:2379
    - http://10.150.x.x:2379
    - http://10.150.x.x:2379
    keyFile: ""
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.13.4
networking:
  dnsDomain: cluster.local
  podSubnet: "10.150.16.0/20"
  serviceSubnet: 10.150.0.0/20
scheduler: {}
```
I will use Weave Net, and this is my config for it:

```yaml
env:
- name: IPALLOC_RANGE
  value: 10.150.0.0/20
- name: HOSTNAME
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: spec.nodeName
image: 'docker.io/weaveworks/weave-kube:2.5.1'
```
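As a sanity check on how the ranges relate to each other, here is a small sketch with the stdlib `ipaddress` module; the values are copied from my two configs above:

```python
import ipaddress

# ranges copied from the two configs in this post
pod_subnet = ipaddress.ip_network("10.150.16.0/20")     # kubeadm podSubnet
service_subnet = ipaddress.ip_network("10.150.0.0/20")  # kubeadm serviceSubnet
weave_range = ipaddress.ip_network("10.150.0.0/20")     # weave IPALLOC_RANGE

print(pod_subnet.overlaps(weave_range))      # False: weave range is not the podSubnet
print(service_subnet.overlaps(weave_range))  # True: weave range equals the serviceSubnet
```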
My problem is that everything is running, but I can't ping or connect to servers outside my cluster; traffic is not routed externally. What can I do? Have I misconfigured my cluster, or do I need to deploy something else so the pods can reach the other servers in my LAN?
I come from v1.10.3 and am now on the current version, v1.13.4, so it is possible I forgot to deploy something that gives the pods access to the outside.