Capturing logs of a Docker container running inside a Kubernetes pod

I have two Docker containers running inside a pod.

I would like to access the stdout of one container from inside the other container.

The second container runs a shell script that is supposed to analyse the stdout of the main container.

A possible example of what I want to do is as follows:

  • I start a pod encapsulating 2 containers (1. a main container running Postgres/ZooKeeper/an arbitrary Node.js program, 2. a sidecar container running my script in the background)
  • Say Postgres/ZooKeeper logs to stdout, which then somehow gets piped to a shared volume or is read directly by the script in the sidecar container
  • The script matches the log against a pattern and performs a curl request
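
The script in the last step could be sketched roughly like this (a minimal POSIX-sh sketch; `LOG_FILE`, `watch_log`, and `on_match` are made-up names, and the real action would be the curl request):

```shell
#!/bin/sh
# Sidecar watcher sketch: follow a shared log file and run a hook for
# every line that matches PATTERN. All names here are illustrative.
LOG_FILE="${LOG_FILE:-/var/log/app/app.log}"
PATTERN="${PATTERN:-ERROR}"

on_match() {
    # In the pod this would be the curl request, e.g.:
    #   curl -sS -X POST --data "$1" "$WEBHOOK_URL"
    printf 'matched: %s\n' "$1"
}

watch_log() {
    # Read log lines from stdin; invoke on_match on pattern hits.
    while IFS= read -r line; do
        case "$line" in
            *"$PATTERN"*) on_match "$line" ;;
        esac
    done
}

# In the sidecar you would follow the file (commented out here so the
# sketch does not block):
#   tail -n +1 -F "$LOG_FILE" | watch_log

# Demo on two canned lines:
printf 'ERROR: db down\nINFO: all good\n' | watch_log   # prints: matched: ERROR: db down
```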

Is this possible? What are some standard practices to implement this functionality?
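
One standard approach is the logging-sidecar pattern: the main container writes its output to a file on a shared `emptyDir` volume, and the sidecar tails that file. A minimal pod sketch under that assumption (all names, images, and paths here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  volumes:
  - name: logs
    emptyDir: {}          # shared scratch volume, lives as long as the pod
  containers:
  - name: main
    image: node:20
    # Shell-form redirection streams stdout/stderr into the shared volume.
    command: ["/bin/sh", "-c", "node index.js >> /var/log/app/app.log 2>&1"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: sidecar
    image: busybox:1.36
    # Follow the file and react to matching lines (replace the echo with
    # your curl call).
    command:
    - /bin/sh
    - -c
    - |
      tail -n +1 -F /var/log/app/app.log | while IFS= read -r line; do
        case "$line" in
          *ERROR*) echo "matched: $line" ;;
        esac
      done
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
```

Note that plain redirection hides the output from `kubectl logs main`; piping the main process through `tee /var/log/app/app.log` instead keeps the logs visible in both places.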

I tried dumping the logs of the container into a file, but I was unable to figure out how to do the same for the default Docker images of applications such as Postgres v15.3 and ZooKeeper v3.6.4.

For example, in an arbitrary Node.js program I can just put something like

    CMD node index.js > /var/log/temp/logs.txt

inside the Dockerfile to stream stdout into a file. Would it be possible to do the same for images like Postgres, ZooKeeper, Kafka, etc.?
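
For stock images you typically don't need to rebuild the image at all: you can override `command`/`args` in the pod spec and pipe the image's own entrypoint through `tee`. A sketch for the official postgres image, whose entrypoint is `docker-entrypoint.sh` with `postgres` as the default command (for ZooKeeper, Kafka, etc. check the respective image's Dockerfile for the equivalent entrypoint; the volume name and mount path below are assumptions):

```yaml
  containers:
  - name: postgres
    image: postgres:15.3
    command: ["/bin/sh", "-c"]
    # Run the image's normal entrypoint, but tee the combined output into
    # the shared volume so both `kubectl logs` and the sidecar can see it.
    args: ["docker-entrypoint.sh postgres 2>&1 | tee /var/log/app/postgres.log"]
    volumeMounts:
    - name: logs          # assumes an emptyDir volume named "logs" in the pod spec
      mountPath: /var/log/app
```

Alternatively, many of these applications can log to files through their own configuration (e.g. `logging_collector` in `postgresql.conf`, or ZooKeeper's log4j properties), which avoids overriding the entrypoint entirely.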

====================

Kubectl:

Client Version: v1.28.4
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.3

Minikube:
minikube version: v1.32.0
commit: 8220a6eb95f0a4d75f7f2d7b14cef975f050512d

Cluster information:

Kubernetes version:
Cloud being used: bare-metal
Host OS: Debian GNU/Linux 12 (bookworm)
