Applications deployed in k8s taking a long time to load

We are using a 5-node Kubernetes cluster on AWS and have deployed some applications (Kibana, Jupyter, etc.). We expose them outside the cluster (public network) via NodePort, using the AWS host name with the port number. Lately, loading Jupyter and Kibana takes a long time once all of my team members start using them. But if I check CPU and memory usage, it is only in the MBs. Can any experts help me find the actual issue and how to fix it?

Hi @jaikumar

I suspect the added time to load might be related to the time it takes for the image to be pulled onto the Nodes. Would it be possible to get more information about your specific use case?
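If repeated pulls do turn out to be the cause, one thing worth checking is the imagePullPolicy on the containers. A sketch (the image name and tag here are just an example, not from your cluster):

```yaml
spec:
  containers:
  - name: kibana
    image: docker.elastic.co/kibana/kibana:7.10.0
    # IfNotPresent reuses an image already cached on the node instead of
    # pulling it on every pod start. This is the default for a fixed tag,
    # but :latest defaults to Always, which forces a pull each time.
    imagePullPolicy: IfNotPresent
```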

Hi @macintoshprime. I am using a 5-node Kubernetes service on AWS and have deployed the ELK and Jupyter applications, hosted using the AWS host name.

apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: elk
  labels:
    app: kibana
spec:
  type: NodePort
  ports:
  - port: 80
    name: http
    protocol: TCP
    nodePort: 31234
    targetPort: 5601
  selector:
    app: kibana

When we try to access Kibana, the load time runs into minutes, and sometimes it does not load at all. Once all my team members start using these applications, the load time gets even worse.

CPU and memory usage of the Kibana and Jupyter pods is less than 50%. How do I find the root cause, and how do I fix it?

Are there any CPU or memory constraints on the Kibana deployment? Also, do you have any monitoring (Prometheus, etc.) in place that might help guide you? (Not sure what Amazon provides.)

Yes, we are using Prometheus.
We check CPU and memory usage with the command below:
kubectl top pod --all-namespaces

NAMESPACE   NAME          CPU(cores)   MEMORY(bytes)
elk         hub-elk       3m           812Mi
elk         jupyter-elk   784m         4672Mi
elk         kibana-elk    5m           221Mi
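Since Prometheus is already running, the point-in-time numbers from kubectl top can be cross-checked against usage over time. Assuming the standard cAdvisor metrics are being scraped, a recording-rule sketch like this (group and rule names are illustrative) tracks per-pod usage in the elk namespace:

```yaml
groups:
- name: elk-usage
  rules:
  # CPU usage per pod, averaged over the last 5 minutes
  - record: namespace_pod:cpu_usage:rate5m
    expr: sum(rate(container_cpu_usage_seconds_total{namespace="elk"}[5m])) by (pod)
  # Working-set memory per pod (what the OOM killer considers)
  - record: namespace_pod:memory_working_set:bytes
    expr: sum(container_memory_working_set_bytes{namespace="elk"}) by (pod)
```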

Kibana deployment:

resources:
  limits:
    cpu: "500m"
    memory: "3Gi"
  requests:
    cpu: "400m"
    memory: "2Gi"
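One thing to keep in mind: a CPU limit is enforced via CFS quota, so a container can be throttled during short bursts (like page loads) even when its average usage looks tiny in kubectl top. Assuming cAdvisor metrics are scraped, an alert-rule sketch like this (names and threshold are illustrative) would surface that:

```yaml
groups:
- name: cpu-throttling
  rules:
  - alert: HighCPUThrottling
    # Fraction of CFS periods in which the container was throttled;
    # fires if more than 25% of periods were throttled for 5 minutes.
    expr: |
      rate(container_cpu_cfs_throttled_periods_total{namespace="elk"}[5m])
        / rate(container_cpu_cfs_periods_total{namespace="elk"}[5m]) > 0.25
    for: 5m
```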

The screenshot shows the CPU and memory usage of Jupyter and Kibana over the last 7 days. (Usage is in MBs only.)


Have you ruled out the time to pull the image?

If not, run kubectl describe pod on the affected pod to see the events; they will show when the image pull started, when the container started, and so on.

Just for testing purposes, have you tried removing the resource limits? That would isolate whether or not they are causing the issue.
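For that test, the limits stanza can simply be dropped while keeping the requests, so scheduling behavior stays the same. A sketch of the container's resources block for the experiment (the request values below are taken from the deployment shown earlier):

```yaml
resources:
  requests:
    cpu: "400m"
    memory: "2Gi"
  # limits removed temporarily to rule out CPU throttling;
  # reinstate them once the test is done so one pod cannot
  # starve the node.
```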