Kubernetes job workload container getting deleted after completion

Hi Team,

Cluster information:

Kubernetes version:  v1.28.5
Cloud being used: bare metal
Installation method: kubeadm
Host OS: Fedora
CNI and version: cilium 0.3.1
CRI and version: cri-o 1.27.2

I am trying to run a sample Job in my Kubernetes cluster. After the job execution completes, the container in the Job's pod is deleted some time later.

Error:

E0719 00:00:00.409361 3174 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="pod-uid" containerName="pi"

Job manifest:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4

Is there any way to ensure that cpu_manager does not remove the container after the job completes? Once the container is removed, I am unable to fetch its logs for further analysis. This is the error I see when I try to retrieve the container logs:

unable to retrieve container logs for docker://8efc123ae26dacebb8624eec7af8852f9205f42e2e2fcad586fd13eb4xxxxx

As per the Kubernetes docs, container logs should remain available after execution completes by default.

I also tried setting .spec.ttlSecondsAfterFinished so that the Job, its pod, and the container logs would stay available for the specified number of seconds, but it seems to have no effect.
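For reference, here is how I set it; the TTL value of 600 is just an arbitrary number I picked for testing:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  ttlSecondsAfterFinished: 600  # Job and its pod should remain for 600s after finishing
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4

Even with this in place, the container is removed well before the TTL expires, so I suspect the removal happens at the kubelet/CRI level (per the cpu_manager log above) rather than through Job cleanup.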

Please suggest how I can keep the container, or at least its logs, available after the job completes. Thanks!