Hosting a Jenkins Slave on a Different Kubernetes Cluster

Cluster information:

Kubernetes version: v1.28.11
Cloud being used: VMware
Installation method: kubeadm
Host OS: Debian 6.1.52-1 (2023-09-07) x86_64 GNU/Linux
CNI and version: flannelcni/flannel:v0.20.2
CRI and version: containerd containerd.io 1.7.18

I have two clusters: jenkp and jenkpba. On the jenkp cluster, my Jenkins (Master) application is up and running. I want to create a Jenkins build agent (slave) on the jenkpba cluster using Jenkins hosted on the jenkp cluster.

I created a service account on the jenkpba cluster, attached a separate secret to the service account, and generated a token. The service account is bound to the cluster-admin role, as specified in the following YAML configuration:
```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-jenkins-crb
subjects:
  - kind: ServiceAccount
    name: k8s-jenkins
    namespace: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
```
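For completeness, the service account and its token were set up on jenkpba along these lines (a sketch of the approach rather than my exact manifests; the secret name k8s-jenkins-token is just illustrative, and on v1.24+ the token secret has to be created explicitly):

```
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-jenkins
  namespace: jenkins
---
# Long-lived token bound to the service account
# (needed on v1.24+ because token secrets are no longer auto-created)
apiVersion: v1
kind: Secret
metadata:
  name: k8s-jenkins-token   # illustrative name
  namespace: jenkins
  annotations:
    kubernetes.io/service-account.name: k8s-jenkins
type: kubernetes.io/service-account-token
```

The token itself can then be read with `kubectl -n jenkins get secret k8s-jenkins-token -o jsonpath='{.data.token}' | base64 -d`.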

When I store this token in a variable inside a pod and use it in a curl request, the call succeeds:

jenkins@jenkins1-abcdefg-wrph7:/ curl -k -H "Authorization: Bearer $jenkpba" https://10.10.x.x:6443/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "10.10.x.x:6443"
    }
  ]
}

However, when I try to use this token to authenticate with Kubernetes through the Jenkins master, the authentication fails.

I would appreciate your support in troubleshooting the communication between Kubernetes and Jenkins.
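For reference, the Kubernetes cloud on the Jenkins master is configured roughly like this (shown in configuration-as-code form; the cloud name, credential ID, and URLs are placeholders for my setup, and the same fields exist in the cloud configuration UI):

```
jenkins:
  clouds:
    - kubernetes:
        name: "jenkpba"                        # placeholder cloud name
        serverUrl: "https://10.10.x.x:6443"    # jenkpba API server
        skipTlsVerify: true
        namespace: "jenkins"
        credentialsId: "jenkpba-sa-token"      # Secret text credential holding the token
        # Agent pods on jenkpba have to connect back to the master on jenkp,
        # so these addresses must be reachable from outside the jenkp cluster:
        jenkinsUrl: "http://<jenkins-address-reachable-from-jenkpba>:8080"
        jenkinsTunnel: "<jenkins-address-reachable-from-jenkpba>:50000"
```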

Assuming (correct me if I'm wrong) that this output is from the Jenkins master cluster and not from a Jenkins slave pod.

  1. What do the Jenkins master logs and the Kubernetes cluster (where the slave runs) show when you test the connection?
  2. Is there a proxy setting in the Jenkins configuration where you can add an exception for this specific connection?
  3. Try using the kubectl auth can-i command within the pod to verify whether the service account token has the necessary permissions; see the sketch after this list. This can help you isolate whether the issue lies with the token itself or with the way Jenkins is using it.
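For example, something along these lines (the server address, namespace, and verb are just illustrative):

```
# Check permissions using the token directly against the jenkpba API server
kubectl --server=https://10.10.x.x:6443 \
        --token="$jenkpba" \
        --insecure-skip-tls-verify=true \
        auth can-i create pods --namespace jenkins

# Or, from a kubeconfig with access to jenkpba, impersonate the service account
kubectl auth can-i create pods \
        --as=system:serviceaccount:jenkins:k8s-jenkins \
        --namespace jenkins
```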

Yes, the output shown in the screenshot is from the Jenkins master logs.

The slave machine needs to be created at runtime when we run a Jenkins pipeline job and specify the pod template in the pipeline.

Here are the logs from the Jenkins master pod:
```
java.io.IOException: Unable to tunnel through proxy. Proxy returns "HTTP/1.1 407 Proxy Authentication Required"
    at java.base/sun.net.www.protocol.http.HttpURLConnection.doTunneling0(Unknown Source)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.doTunneling(Unknown Source)
    at java.base/sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(Unknown Source)
    at …
```

The logs indicate an error:

IOException: Unable to tunnel through proxy. Proxy returns "HTTP/1.1 407 Proxy Authentication Required"

It seems Jenkins is unable to tunnel through the proxy due to missing proxy authentication settings. I’m unsure where to configure this setting.
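The 407 suggests the connection is routed through a proxy that the JVM or Jenkins picked up somewhere (JVM flags, the Jenkins proxy configuration, or HTTP(S)_PROXY environment variables). One option I'm considering is to keep the proxy but bypass it for the jenkpba API server. A sketch for the Jenkins master deployment, using the standard Java proxy properties (the proxy host/port values are placeholders, and whether the Kubernetes plugin also honours NO_PROXY is my assumption):

```
spec:
  containers:
    - name: jenkins
      env:
        # Keep the existing proxy, but exclude the jenkpba API server
        # (10.10.*) and in-cluster addresses from proxying.
        - name: JAVA_OPTS
          value: >-
            -Dhttp.proxyHost=<proxy-host> -Dhttp.proxyPort=<proxy-port>
            -Dhttps.proxyHost=<proxy-host> -Dhttps.proxyPort=<proxy-port>
            -Dhttp.nonProxyHosts=localhost|127.0.0.1|10.10.*|*.svc|*.cluster.local
        # If the proxy comes from environment variables instead:
        - name: NO_PROXY
          value: "localhost,127.0.0.1,<jenkpba-api-server-ip>,.svc,.cluster.local"
```

Alternatively, if the proxy is set in Jenkins itself, the API server address could be added to the "No Proxy Host" list in the HTTP Proxy Configuration (found under the Plugin Manager's Advanced page in many Jenkins versions).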

The service account has the necessary permissions, so that shouldn’t be the issue.