Is the "/log" API endpoint performant enough to handle many concurrent requests?

Hi team,
We have a use case wherein we are scraping logs from every pod present on a cluster and storing those logs in some DataLake. We are scrapping logs by doing an API_Server call.

Let's say we are making thousands of such calls every minute to scrape the latest logs from thousands of pods running in a cluster. Could this overload the API server?

I am trying to understand how the logs API endpoint works. An alternative I am aware of is Fluentd, which basically does a `tail -f` on the log files located on each node.
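To illustrate the difference: a node agent like Fluentd reads the container log files straight off the node's disk and never touches the API server. A toy sketch of that tail-style loop (file path is illustrative, and real agents also handle log rotation, which this does not):

```python
# Toy sketch of a tail-f style follower, roughly what a node-local log
# agent does. Simplified: no rotation/truncation handling.
import os
import time

def follow(path, poll_interval=0.5):
    """Yield lines appended to `path` after this call, like `tail -f`."""
    f = open(path, "r")
    f.seek(0, os.SEEK_END)  # open eagerly so the tail position is fixed now

    def lines():
        while True:
            line = f.readline()
            if line:
                yield line
            else:
                time.sleep(poll_interval)  # no new data yet; poll again

    return lines()

# Usage (illustrative path):
# for line in follow("/var/log/containers/app.log"):
#     ship_to_data_lake(line)
```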

Any leads on how performant the "/log" API endpoint is would really help me here.

Thanks!!