Applications deployed in k8s taking a long time to load

We are using a 5-node Kubernetes cluster on AWS and have deployed some applications (Kibana, Jupyter, etc.). We expose them outside the cluster (public network) via NodePort, using the AWS host name with the port number. Lately, loading the Jupyter and Kibana applications takes a long time once all my team members start using them. But if I check CPU and memory usage, it is only in the MBs. Can the experts help me find the actual issue and how to fix it?
http://kibana-XYZ…amazonaws.com:34340
http://juypter-XYZ…amazonaws.com:34341

Hi @jaikumar

I suspect the added time to load might be related to the time it takes for the image to be pulled onto the Nodes. Would it be possible to get more information about your specific use case?

Hi @macintoshprime, I am using a 5-node AWS Kubernetes cluster and have deployed the ELK and Jupyter applications, exposed via the AWS host name.

apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: elk
  labels:
    app: kibana
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
    nodePort: 31234
    targetPort: 5601
  selector:
    app: kibana
  type: NodePort

When we try to access Kibana at http://aws-abc.azure.com:31234, the page takes minutes to load and sometimes does not load at all. Once all my team members start using these applications, the loading time gets much worse.
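For reference, a rough way to time a single page load from outside the cluster (assuming curl is available on a client machine) is:

curl -o /dev/null -s -w 'connect: %{time_connect}s  total: %{time_total}s\n' http://aws-abc.azure.com:31234

which separates the connection time from the total response time, so it should hint at whether the delay is on the network/NodePort side or inside the application itself.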

CPU and memory usage of the Kibana and Jupyter pods is under 50%. I want to know how to find the root cause and how to fix it.
Thanks

Are there any CPU or memory constraints on the Kibana deployment? Also, do you have any monitoring in place (Prometheus, etc.) that might help guide you? (I'm not sure what Amazon provides.)
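It might also be worth looking at node-level usage: with five nodes shared by the whole team, a node can be saturated even while individual pods look idle. Assuming metrics-server is installed, the per-node picture comes from:

kubectl top nodes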

Yes, we are using Prometheus.
CPU and memory usage is from the command below:
kubectl top pod --all-namespaces

NAMESPACE   NAME          CPU(cores)   MEMORY(bytes)
elk         hub-elk       3m           812Mi
elk         jupyter-elk   784m         4672Mi
elk         kibana-elk    5m           221Mi

Kibana deployment:

resources:
  limits:
    cpu: "500m"
    memory: "3Gi"
  requests:
    cpu: "400m"
    memory: "2Gi"

The screenshot shows the last 7 days of Jupyter and Kibana CPU and memory usage (usage is only in the MBs).
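Since Prometheus is scraping the cluster, I can also check whether the 500m CPU limit is throttling the Kibana container. A query along these lines should work with the default cAdvisor metrics (the label may be pod or pod_name depending on the Kubernetes version):

sum(rate(container_cpu_cfs_throttled_periods_total{namespace="elk", pod=~"kibana.*"}[5m]))
  /
sum(rate(container_cpu_cfs_periods_total{namespace="elk", pod=~"kibana.*"}[5m]))

A value close to 1 would mean the container is throttled in almost every CFS period.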


Thanks,

Have you ruled out the time to pull the image?

If not, consider doing a kubectl describe pod on the Kibana pod; the Events section will show when the image pull started, when the container started, etc.
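For example (pod name and namespace assumed from the kubectl top output above):

kubectl describe pod kibana-elk -n elk

The Events section at the bottom lists the Pulling, Pulled, Created and Started events with timestamps, so a slow image pull should be obvious there.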

Just for testing purposes, have you tried removing the resource limits? That would isolate whether they are causing the issue or not.
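One way to test that without editing the manifest, assuming the Deployment is named kibana in the elk namespace, is a JSON patch that drops only the limits (the requests stay in place):

kubectl -n elk patch deployment kibana --type=json -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/resources/limits"}]'

If load times improve noticeably after that, the CPU limit is likely what is throttling Kibana under concurrent use.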