This returns the CPU and memory usage at the time the command is executed.
Heapster used to return a list of metrics for every minute, from the time the pod was created up to the time the command was run.
How can we get the same thing with metrics-server, i.e. memory and CPU usage between a start time and an end time?
I’m not sure about the raw format (I mostly use it with HPA), but have you tried it to see whether it responds with that information? Maybe just checking is the easiest way.
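A quick way to check is to hit the `metrics.k8s.io` API directly. A minimal sketch using the official Python client (the namespace here is just a placeholder):

```python
# Sketch: inspect what the raw metrics.k8s.io API returns for pods.
# Equivalent to: kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

metrics = api.list_namespaced_custom_object(
    group="metrics.k8s.io",
    version="v1beta1",
    namespace="default",   # placeholder namespace
    plural="pods",
)

for item in metrics.get("items", []):
    pod_name = item["metadata"]["name"]
    for container in item.get("containers", []):
        usage = container["usage"]  # e.g. {"cpu": "12345678n", "memory": "34567Ki"}
        print(pod_name, container["name"], usage["cpu"], usage["memory"])
```

Each item carries a single timestamp and window, so this tells you whether the raw endpoint gives you anything beyond the latest reading.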
The metrics-server only retains the last recorded value and does not store metrics history. It’s discussed a little bit in the design proposal:
Only the most recent value of each metric will be remembered. If a user needs an access to historical data they should either use 3rd party monitoring solution or archive the metrics on their own (more details in the mentioned vision).
It’s not a great answer =/ but it sort of is what it is. Most folks I know deploy Prometheus to handle their metrics collection and aggregation.
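Once Prometheus is scraping the kubelet/cAdvisor metrics, its HTTP API can return samples between a start and end time via the `query_range` endpoint. A rough sketch in Python, assuming Prometheus is reachable at `http://prometheus-server:80` and that the pod label and metric names match your setup (adjust as needed, e.g. `pod` vs `pod_name` depending on versions):

```python
# Rough sketch: pull CPU and memory samples for one pod over a time range
# from the Prometheus HTTP API (query_range). URL, pod name, and label
# names are assumptions -- adjust to your environment.
import time
import requests

PROMETHEUS = "http://prometheus-server:80"  # assumed service address
POD = "my-app-pod"                          # placeholder pod name

end = time.time()
start = end - 3600          # last hour
step = "60s"                # one sample per minute, similar to Heapster

queries = {
    "cpu_cores": f'sum(rate(container_cpu_usage_seconds_total{{pod="{POD}"}}[5m]))',
    "memory_bytes": f'sum(container_memory_working_set_bytes{{pod="{POD}"}})',
}

for name, query in queries.items():
    resp = requests.get(
        f"{PROMETHEUS}/api/v1/query_range",
        params={"query": query, "start": start, "end": end, "step": step},
    )
    resp.raise_for_status()
    for series in resp.json()["data"]["result"]:
        for ts, value in series["values"]:
            print(name, ts, value)
```

That gives you the same data Grafana is plotting, but from a program rather than the GUI.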
Hi @mrbobbytables,
Thanks for your reply.
I have also set up Prometheus, and in the Grafana dashboard I can see the metrics it collects, but that is only in the GUI. I need to access them from a program or a pod, so that I can fetch metrics between a start time and an end time to check the pod's performance. Do you have any idea how to do this?
I used the Kubernetes client package in Python to run kubectl commands from inside a pod and, with 'threading', collected as many metric samples as I could within a period, roughly along the lines of the sketch below.
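A simplified sketch of that kind of polling approach, using the metrics API directly instead of shelling out to kubectl (namespace, pod name, and poll interval are placeholders):

```python
# Sketch: poll the metrics.k8s.io API from a background thread and keep
# every sample, so CPU/memory usage can later be inspected for a window.
import threading
import time

from kubernetes import client, config

samples = []          # list of (timestamp, cpu, memory) tuples
stop_event = threading.Event()

def collect(namespace="default", pod_name="my-app-pod", interval=30):
    config.load_incluster_config()  # use load_kube_config() outside the cluster
    api = client.CustomObjectsApi()
    while not stop_event.is_set():
        metrics = api.get_namespaced_custom_object(
            group="metrics.k8s.io",
            version="v1beta1",
            namespace=namespace,
            plural="pods",
            name=pod_name,
        )
        for container in metrics.get("containers", []):
            usage = container["usage"]
            samples.append((time.time(), usage["cpu"], usage["memory"]))
        stop_event.wait(interval)

worker = threading.Thread(target=collect, daemon=True)
worker.start()

# ... run the workload under test ...
time.sleep(120)

stop_event.set()
worker.join()
print(f"collected {len(samples)} samples")
```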