Exposing a Kubernetes app using an AWS Elastic Load Balancer


#1

I have a Kubernetes cluster with the following service created with type: LoadBalancer -
(Source reference: https://github.com/kenzanlabs/kubernetes-ci-cd/blob/master/applications/hello-kenzan/k8s/manual-deployment.yaml)

apiVersion: v1
kind: Service
metadata:
  name: hello-kenzan
  labels:
    app: hello-kenzan
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: hello-kenzan
    tier: hello-kenzan
  type: LoadBalancer

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-kenzan
  labels:
    app: hello-kenzan
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: hello-kenzan
        tier: hello-kenzan
    spec:
      containers:
      - image: gopikrish81/hello-kenzan:latest
        name: hello-kenzan
        ports:
        - containerPort: 80
          name: hello-kenzan
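As an aside, the extensions/v1beta1 Deployment API is deprecated on recent Kubernetes versions; a sketch of the same Deployment against apps/v1 (which additionally requires an explicit selector) might look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kenzan
  labels:
    app: hello-kenzan
spec:
  # apps/v1 requires an explicit selector matching the pod template labels
  selector:
    matchLabels:
      app: hello-kenzan
      tier: hello-kenzan
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: hello-kenzan
        tier: hello-kenzan
    spec:
      containers:
      - image: gopikrish81/hello-kenzan:latest
        name: hello-kenzan
        ports:
        - containerPort: 80
          name: hello-kenzan
```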

After I created the service with -

kubectl apply -f k8s/manual-deployment.yaml
kubectl get svc

It is showing EXTERNAL-IP as <pending>.
But since I created it with type LoadBalancer, why isn't it getting an IP?

FYI, I can access the app using curl <master node>:<nodeport>, or even through proxy forwarding.

UPDATE as of 29/1

I followed the answer steps as mentioned in this post https://stackoverflow.com/questions/50668070/kube-controller-manager-dont-start-when-using-cloud-provider-aws-with-kubeadm

  1. I modified the file "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf" by adding the below line under [Service]:

    Environment="KUBELET_EXTRA_ARGS=--cloud-provider=aws --cloud-config=/etc/kubernetes/cloud-config.conf"

And I created this cloud-config.conf as below -

[Global]
KubernetesClusterTag=kubernetes
KubernetesClusterID=kubernetes
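For reference, the in-tree AWS provider reads this file in INI form; a slightly fuller sketch might look like the below (the Zone key is optional and shown purely as an assumption - the provider can also discover the zone from instance metadata, as the controller-manager logs later confirm):

```ini
[Global]
KubernetesClusterTag=kubernetes
KubernetesClusterID=kubernetes
; Optional: pin the availability zone instead of querying the metadata service
Zone=us-east-1a
```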

I am not sure what this Tag and ID refer to, but when I run the below command I can see output mentioning clusterName as "kubernetes" -

kubeadm config view

Then I executed:

systemctl daemon-reload
systemctl restart kubelet
  2. Then, as mentioned in that post, I added --cloud-provider=aws in both kube-controller-manager.yaml and kube-apiserver.yaml

  3. I also added the below annotation in the manual-deployment.yaml of my application:

    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
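One thing worth noting: the service.beta.kubernetes.io/... annotations are read from the Service object, not the Deployment, so they belong under the Service's metadata. A sketch of the Service from above with the annotation in place (assuming an internal ELB is actually wanted):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-kenzan
  annotations:
    # Read by the AWS cloud provider when it provisions the ELB
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: hello-kenzan
    tier: hello-kenzan
  type: LoadBalancer
```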

Now, when I deployed using kubectl apply -f k8s/manual-deployment.yaml, the pod itself was not getting created when I checked with kubectl get po --all-namespaces.

So I removed the annotation added above and deployed again, and now the pod was created successfully. But kubectl get svc still shows <pending> for EXTERNAL-IP.

I even renamed my master and worker nodes to match the EC2 instance private DNS names (ip-10-118-6-35.ec2.internal and ip-10-118-11-225.ec2.internal) as mentioned in the below post and reconfigured the cluster, but still no luck.
https://medium.com/jane-ai-engineering-blog/kubernetes-on-aws-6281e3a830fe (under the section : Proper Node Names)

Also, my EC2 instances have an IAM role attached, and that role has 8 policies applied. One of the policies contains the statement below (there are many other Actions which I am not posting here) -

{
   "Action": "elasticloadbalancing:*",
   "Resource": "*",
   "Effect": "Allow"
}

I am clueless about what other settings I might be missing. Please suggest!

UPDATE as of 30/1

I did the below additional steps as mentioned in this blog - https://blog.scottlowe.org/2018/09/28/setting-up-the-kubernetes-aws-cloud-provider/

  1. Added the AWS tag "kubernetes.io/cluster/kubernetes" to all of my EC2 instances (master and worker nodes) and also to my security group

  2. I haven't added apiServerExtraArgs, controllerManagerExtraArgs and nodeRegistration manually in the configuration file. Instead, I reset the cluster entirely using "sudo kubeadm reset -f" and then added this to the kubeadm conf file on both master and worker nodes -

    Environment="KUBELET_EXTRA_ARGS=--cloud-provider=aws --cloud-config=/etc/kubernetes/cloud-config.conf"

cloud-config.conf -

[Global]
KubernetesClusterTag=kubernetes.io/cluster/kubernetes
KubernetesClusterID=kubernetes

Then executed on both master and worker nodes -

systemctl daemon-reload
systemctl restart kubelet
  3. Now I created the cluster using the below command on the master node -

    sudo kubeadm init --pod-network-cidr=192.168.1.0/16 --apiserver-advertise-address=10.118.6.35

  4. Then I was able to join the worker node to the cluster successfully and deployed the flannel CNI.

After this, get nodes showed Ready status.
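The cluster tag from step 1 can also be applied from the CLI; a sketch is below (the instance ID is the one from this thread, and the Value of "owned" is an assumption - the legacy provider mainly matches on the tag key):

```shell
# The tag key must match the KubernetesClusterID used in cloud-config.conf
CLUSTER_NAME=kubernetes
TAG_KEY="kubernetes.io/cluster/${CLUSTER_NAME}"
echo "${TAG_KEY}"

# Apply to every node and to the security group (requires AWS CLI + credentials);
# replace the second ID with your worker node's instance ID:
# aws ec2 create-tags --resources i-02dbf9b3a7d9163e7 <worker-instance-id> \
#   --tags Key="${TAG_KEY}",Value=owned
```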

One important point to note is that there are kube-apiserver.yaml and kube-controller-manager.yaml files in the /etc/kubernetes/manifests path.

When I added --cloud-provider=aws in both of these yaml files, the kube-apiserver and kube-controller-manager pods themselves got recreated successfully, but my deployments were not happening and application pods were not getting created at all. So I removed the flag from kube-apiserver.yaml alone (keeping it in kube-controller-manager.yaml), and after that deployments and pods succeeded.
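For anyone comparing notes, the flag goes into the command list of the static pod manifest; a minimal sketch of the relevant part of /etc/kubernetes/manifests/kube-controller-manager.yaml (other flags elided):

```yaml
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    # ... existing flags ...
    - --cloud-provider=aws
    - --cloud-config=/etc/kubernetes/cloud-config.conf
```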

Also, I checked the logs with kubectl logs kube-controller-manager-ip-10-118-6-35.ec2.internal -n kube-system, but I don't see any exceptions or abnormalities. I can see this in the last part -

I0130 19:14:17.444485    1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-kenzan", UID:"c........", APIVersion:"apps/v1", ResourceVersion:"16212", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-kenzan-56686879-ghrhj

I even tried adding the below annotation to manual-deployment.yaml, but it still shows <pending> -

service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0

#2

The cloud provider is usually specified when installing Kubernetes. How did you install it?

Also, is the external IP shown as pending while a load balancer was actually created on Amazon? Can you see one when browsing the AWS console? And does it work?

Amazon doesn't provide a static IP address (except on NLB), so you will probably never see an IP there. Not sure if changes were made to show the DNS name, though (Amazon gives you a DNS name, not a fixed IP).

Is it possible that the load balancer is created and working, just the IP not shown?


#3

OK, to make it simple: I created an AWS application load balancer named myservices, and I got the following DNS name listed in the AWS console - internal-myservices-987070943.us-east-1.elb.amazonaws.com

I also have Target Groups created, showing the below under Description -
Name: myservices-LB, Protocol: HTTPS, Port: 443, Target type: instance, Load Balancer: myservices
Under the Targets tab I can see Registered targets showing my instance ID i-02dbf9b3a7d9163e7 with port 443 and other details. This instance is the EC2 instance I have configured as the master node of my Kubernetes cluster.

Now when I try to access the LB DNS name directly with the URL internal-myservices-987070943.us-east-1.elb.amazonaws.com/api/v1/namespaces/default/services,
I am getting "This site can't be reached".

Whereas if I proxy forward from my master node instance using kubectl proxy --address=0.0.0.0 --accept-hosts='.*'
and then access my master node IP directly as below, I am able to browse -
10.118.6.35:8001/api/v1/namespaces/default/services

Isn't it possible to access Kubernetes services deployed as either NodePort or LoadBalancer type using the AWS load balancer DNS name directly?
I even tested the connectivity using tracert internal-myservices-987070943.us-east-1.elb.amazonaws.com,
and I can successfully reach the destination 10.118.12.196 in 18 hops.

But from my EC2 master node instance it is not tracing. Normally I have a proxy set with this command - export {http,https,ftp}_proxy=http://proxy.ebiz.myorg.com:80
And I can access even external URLs.
Could this be an issue?
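If the corporate proxy is in play, one thing worth checking (purely an assumption on my part) is whether AWS endpoints and cluster-internal addresses bypass it; processes that inherit http_proxy but have no no_proxy can time out on AWS API calls exactly like the controller-manager does later. A sketch:

```shell
# Corporate proxy from this thread; the no_proxy entries are assumptions --
# adjust them to your VPC CIDR and the endpoints your cluster talks to.
export http_proxy=http://proxy.ebiz.myorg.com:80
export https_proxy=http://proxy.ebiz.myorg.com:80
# 169.254.169.254 is the EC2 instance metadata service; .ec2.internal covers
# node-to-node traffic by private DNS name.
export no_proxy="10.0.0.0/8,169.254.169.254,localhost,127.0.0.1,.ec2.internal,.elb.amazonaws.com"
echo "${no_proxy}"
```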


#4

You created an internal load balancer, right? Note the internal- prefix. It will only work inside the VPC.

You need a public one.


#5

No, actually both my load balancer and my EC2 instances are in the same VPC.

From my local machine I am now able to access the URL https://internal-myservices-987070943.us-east-1.elb.amazonaws.com
What I did was: 1) the health check was failing on HTTPS port 443, and 2) I installed the nginx web server on my EC2 instance.
Installing nginx and opening the SSL port resolved the health check issue, and I am able to browse the internal LB URL using HTTPS.

But my original problem of creating a load balancer using a Kubernetes svc is still not resolved :frowning:
It still shows pending. My doubt is: since both the EC2 instance and the LB are in the same VPC, why is traceroute internal-myservices-987070943.us-east-1.elb.amazonaws.com not tracing? I am getting * * * for all 30 hops, yet from my local machine I can trace it successfully. Could this be why no external IP is being created?

I also wonder how nginx installed on my EC2 instance is able to reach my load balancer while traceroute is not.

Is it possible to directly access my service using the load balancer which I manually created via the AWS console? Maybe with NodePort or ingress or something?
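To partially answer that sub-question: a manually created LB can front the app if the service is exposed as NodePort and the target group forwards to that nodePort on every node, instead of to port 443. A sketch (the nodePort value 30080 is an assumption; any port in the node-port range works):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-kenzan-nodeport
spec:
  type: NodePort
  selector:
    app: hello-kenzan
    tier: hello-kenzan
  ports:
  - port: 80
    targetPort: 80
    # Register this port (not 443) in the AWS target group for each node
    nodePort: 30080
```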

Update:

Only these logs I can see related to AWS in the controller logs -

1 aws.go:1041] Building AWS cloud-provider
1 aws.go:1007] Zone not specified in configuration file; querying AWS metadata service.

Also, I don't see this policy in my IAM role -

"Action": "s3:*",
"Resource": ["arn:aws:s3:::kubernetes-*"]

Can this be an issue?

Now after a certain amount of time I see the below log start occurring -

1 controllermanager.go:208] error building controller context: cloud provider could not be initialized: could not init cloud provider "aws": error finding instance i-02dbf9b3a7d9163e7: "error listing AWS instances: "RequestError: send request failed\ncaused by: Post ec2.us-east-1.amazonaws.com: dial tcp 54.239.28.168:443: i/o timeout""