Cluster information:
Kubernetes version: 1.14.8-gke.12
Cloud being used: GKE
Installed services
I have installed DGraph using its Helm chart, which keeps all services internal as ClusterIP. That makes sense because the database has only very basic security. After the setup, I got the following services:
NAMESPACE   NAME                         TYPE        CLUSTER-IP      PORT(S)
web         goci-dgraph-alpha            ClusterIP   10.126.9.214    8080/TCP,9080/TCP
web         goci-dgraph-alpha-headless   ClusterIP   None            7080/TCP
web         goci-dgraph-ratel            ClusterIP   10.126.10.114   8000/TCP
web         goci-dgraph-zero             ClusterIP   10.126.2.66     5080/TCP,6080/TCP
web         goci-dgraph-zero-headless    ClusterIP   None            5080/TCP
I can port-forward locally and access the DB & console like so:
kubectl port-forward dgraph-alpha-0 8080
kubectl port-forward dgraph-ratel 8000
That stuff works.
However, now I need an ingress controller to access the alpha & ratel services from the outside world, and that is where the headache starts.
First, I made a TLS secret like so:
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
$ kubectl create secret tls tls-secret --key tls.key --cert tls.crt
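Before wiring the secret into the Ingress, it is worth verifying that the certificate actually carries the expected subject; a subtle copy-paste quoting issue around `-subj` can silently produce a wrong CN. A quick sanity check (same generation command as above):

```shell
# Generate the self-signed cert (plain ASCII quotes around -subj)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"

# Print the subject; it should contain nginxsvc as the CN
openssl x509 -in tls.crt -noout -subject
```

If the subject looks wrong, I regenerate the key pair, delete the secret with `kubectl delete secret tls-secret`, and re-run the create command.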
Next, I wrote an ingress.yaml like so:
https://github.com/marvin-hansen/ngnix-k8s/blob/master/Ingress.yaml
And I added a service.yaml like so:
https://github.com/marvin-hansen/ngnix-k8s/blob/master/service.yaml
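In case the links go stale, this is roughly what I am attempting; it is a sketch, not the exact file. The host is a placeholder for my real domain, the Ingress name is arbitrary, and the annotation assumes the nginx ingress controller (on Kubernetes 1.14 the Ingress API is still `networking.k8s.io/v1beta1`):

```yaml
# Sketch of an Ingress for this setup (host is a placeholder;
# service names match the ClusterIP services listed above)
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: dgraph-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - dgraph.example.com
    secretName: tls-secret
  rules:
  - host: dgraph.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: goci-dgraph-ratel
          servicePort: 8000
```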
Then, I added an A record to the domain so that it points to the public IP of the LoadBalancer.
However, this doesn’t work: I cannot connect to the ingress at all.
What am I doing wrong?
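For completeness, these are the diagnostic commands I plan to run next (the `dgraph-ingress` name is a placeholder for whatever is in my Ingress.yaml, and the `ingress-nginx` namespace and label assume a standard nginx ingress controller install):

```shell
# Does the Ingress have an address assigned, and the expected backends?
kubectl describe ingress dgraph-ingress -n web

# Is the controller's LoadBalancer service up, and what is its EXTERNAL-IP?
kubectl get svc -n ingress-nginx

# Do the backend services actually have endpoints (i.e. matching pods)?
kubectl get endpoints goci-dgraph-ratel goci-dgraph-alpha -n web

# What does the controller log when a request comes in?
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --tail=50
```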
Also, this is not just about exposing the DB & console through an ingress; ultimately it is about adding TLS & authentication to secure the DB while still allowing external access. Any help is most welcome. TIA