Kubernetes w/ OpenShift (4.6+) RHOCP : Renew certificates when cluster is offline

Hi All,

I have a Kubernetes w/ OpenShift cluster that failed some time back and, for various reasons, wasn’t started up again for a while. Now that I’m bringing the cluster back up, I’ve noticed all the certificates have expired, and the cluster refuses to start because of the expired certs.

I’ve tried to find a way to renew the certificates; however, there is no kubeadm command on these nodes, and the oc and kubectl commands print an error indicating there is no connectivity to the cluster.

How do I renew the certificates when the cluster is offline? There are a few dozen certificates, half of which are expired, under the /etc/kubernetes/static-pod-resources/ folder, and without working commands I cannot force a certificate renewal.

How could I renew the certificates in this state? Port 6443 is offline on the 3 masters and 3 workers; the kubelet fails to start the service on port 6443 since the certificates are expired.


Cluster information:

Kubernetes version: 1.20
Cloud being used: VMware
Installation method: UPI
Host OS: RH CoreOS 8
CNI and version: ?
CRI and version: ?


During the installation process, Red Hat recommends generating an SSH keypair to connect to CoreOS nodes in case of emergency.

If you have access to one of your master nodes, you may be able to get to the CA’s certificate. According to OCP’s documentation, the bootstrap CA is valid for 10 years and the kubelet-managed certificates are valid for 1 year.

If you are able to use the cluster’s CA, then you may be able to create a valid certificate to start the kube-apiserver and use it on the kubelet. At this point, your cluster (with 1 control plane node) may come up (assuming etcd has started), though it may be in read-only mode, as only one of the three control plane nodes is available. You should be able to repeat the process to join the rest of the control plane nodes and then manually reproduce the kubelet TLS bootstrapping process, creating, signing and distributing the certificates on the worker nodes.
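For reference, a minimal way to check whether that CA is actually still valid from a master node could be something like the following (the path is an assumption based on the usual RHCOS layout; your CA bundle may live elsewhere, and a bundle file may contain more certificates than the first one printed):

# print the subject and validity window of the first certificate in the kubelet CA bundle
openssl x509 -noout -subject -startdate -enddate -in /etc/kubernetes/kubelet-ca.crt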

Even if you managed to recreate all the certs, you mentioned that the cluster “failed some time back”, so maybe a better solution would be to just redeploy a new cluster.

I’ll go through the posts! Thanks very much Xavi. This is very helpful!

One of the things I did to get past the cert issue was to set the clock back, just to see if the certs were really the problem. After the clock was set back, the certificate error messages disappeared; however, I noticed that the kubelet still didn’t start because etcd was apparently unable to start, for reasons yet unknown.

[root@rhcpm01 etcd]# ls -altri
total 40
702546467 drwxr-xr-x. 2 root root   52 May 10 05:54 .
702546437 -rw-------. 1 root root 8405 Dec 27  2021 11253.log
702546441 -rw-------. 1 root root 8249 Jan  9  2022 2891.log
593494629 drwxr-xr-x. 7 root root 4096 Jan  9  2022 ..
702546439 -rw-------. 1 root root 8249 Jan  9  2022 3.log
[root@rhcpm01 etcd]# cat 3.log
2022-01-09T03:57:01.230232265+00:00 stderr F {"level":"warn","ts":"2022-01-09T03:57:01.229Z","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"endpoint://client-ad3b7ce4-62e5-4bda-9284-52290fcf103c/10.0.0.106:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial tcp 10.0.0.107:2379: connect: connection refused\""}
2022-01-09T03:57:01.230232265+00:00 stderr F Error: context deadline exceeded
2022-01-09T03:57:01.241144602+00:00 stderr F dataDir is present on rhcpm01.osc01.nix.mds.xyz
2022-01-09T03:57:03.242675979+00:00 stderr P failed to create etcd client, but the server is already initialized as member "rhcpm01.osc01.nix.mds.xyz" before, starting as etcd member: context deadline exceeded
2022-01-09T03:57:03.244996343+00:00 stdout P Waiting for ports 2379, 2380 and 9978 to be released.
2022-01-09T03:57:03.259047946+00:00 stdout F ETCD_INITIAL_CLUSTER_STATE=existing
2022-01-09T03:57:03.259047946+00:00 stdout F ETCD_HEARTBEAT_INTERVAL=100
2022-01-09T03:57:03.259047946+00:00 stdout F ETCD_INITIAL_CLUSTER=
2022-01-09T03:57:03.259047946+00:00 stdout F ETCD_ENABLE_PPROF=true
2022-01-09T03:57:03.259047946+00:00 stdout F ETCDCTL_CERT=/etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-rhcpm01.osc01.nix.mds.xyz.crt
2022-01-09T03:57:03.259047946+00:00 stdout F ETCD_DATA_DIR=/var/lib/etcd
2022-01-09T03:57:03.259047946+00:00 stdout F ETCD_IMAGE=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:967456e3fbb25822d51fa7f919f2ea06a0fe48f9a152bb43d3c4d9ccfa6863c8
2022-01-09T03:57:03.259047946+00:00 stdout F ALL_ETCD_ENDPOINTS=https://10.0.0.106:2379,https://10.0.0.108:2379,https://10.0.0.107:2379
2022-01-09T03:57:03.259047946+00:00 stdout F ETCDCTL_ENDPOINTS=https://10.0.0.106:2379,https://10.0.0.108:2379,https://10.0.0.107:2379
2022-01-09T03:57:03.259047946+00:00 stdout F ETCDCTL_API=3
2022-01-09T03:57:03.259047946+00:00 stdout F ETCD_CIPHER_SUITES=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
2022-01-09T03:57:03.259047946+00:00 stdout F ETCD_ELECTION_TIMEOUT=1000
2022-01-09T03:57:03.259047946+00:00 stdout F ETCDCTL_CACERT=/etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt
2022-01-09T03:57:03.259047946+00:00 stdout F ETCD_NAME=rhcpm01.osc01.nix.mds.xyz
2022-01-09T03:57:03.259047946+00:00 stdout F ETCD_QUOTA_BACKEND_BYTES=7516192768
2022-01-09T03:57:03.259047946+00:00 stdout F ETCDCTL_KEY=/etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-rhcpm01.osc01.nix.mds.xyz.key
2022-01-09T03:57:03.260056652+00:00 stderr F + exec ionice -c2 -n0 etcd --initial-advertise-peer-urls=https://10.0.0.106:2380 --cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-serving/etcd-serving-rhcpm01.osc01.nix.mds.xyz.crt --key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-serving/etcd-serving-rhcpm01.osc01.nix.mds.xyz.key --trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt --client-cert-auth=true --peer-cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-rhcpm01.osc01.nix.mds.xyz.crt --peer-key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-rhcpm01.osc01.nix.mds.xyz.key --peer-trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt --peer-client-cert-auth=true --advertise-client-urls=https://10.0.0.106:2379 --listen-client-urls=https://0.0.0.0:2379 --listen-peer-urls=https://0.0.0.0:2380 --listen-metrics-urls=https://0.0.0.0:9978
2022-01-09T03:57:03.308976757+00:00 stderr F 2022-01-09 03:57:03.308768 I | pkg/flags: recognized and used environment variable ETCD_CIPHER_SUITES=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
2022-01-09T03:57:03.309235485+00:00 stderr F 2022-01-09 03:57:03.309168 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/var/lib/etcd
2022-01-09T03:57:03.309603795+00:00 stderr F 2022-01-09 03:57:03.309534 I | pkg/flags: recognized and used environment variable ETCD_ELECTION_TIMEOUT=1000
2022-01-09T03:57:03.309798101+00:00 stderr F 2022-01-09 03:57:03.309735 I | pkg/flags: recognized and used environment variable ETCD_ENABLE_PPROF=true
2022-01-09T03:57:03.309980129+00:00 stderr F 2022-01-09 03:57:03.309923 I | pkg/flags: recognized and used environment variable ETCD_HEARTBEAT_INTERVAL=100
2022-01-09T03:57:03.310133572+00:00 stderr F 2022-01-09 03:57:03.310076 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=existing
2022-01-09T03:57:03.310302248+00:00 stderr F 2022-01-09 03:57:03.310258 I | pkg/flags: recognized and used environment variable ETCD_NAME=rhcpm01.osc01.nix.mds.xyz
2022-01-09T03:57:03.310544947+00:00 stderr F 2022-01-09 03:57:03.310477 I | pkg/flags: recognized and used environment variable ETCD_QUOTA_BACKEND_BYTES=7516192768
2022-01-09T03:57:03.310725105+00:00 stderr F 2022-01-09 03:57:03.310666 W | pkg/flags: unrecognized environment variable ETCD_INITIAL_CLUSTER=
2022-01-09T03:57:03.310872490+00:00 stderr F 2022-01-09 03:57:03.310814 W | pkg/flags: unrecognized environment variable ETCD_IMAGE=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:967456e3fbb25822d51fa7f919f2ea06a0fe48f9a152bb43d3c4d9ccfa6863c8
2022-01-09T03:57:03.311804900+00:00 stderr F [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2022-01-09T03:57:03.312447220+00:00 stderr F 2022-01-09 03:57:03.312186 I | etcdmain: etcd Version: 3.4.9
2022-01-09T03:57:03.312951167+00:00 stderr F 2022-01-09 03:57:03.312713 I | etcdmain: Git SHA: ac513b1
2022-01-09T03:57:03.313456394+00:00 stderr F 2022-01-09 03:57:03.313199 I | etcdmain: Go Version: go1.12.12
2022-01-09T03:57:03.313856188+00:00 stderr F 2022-01-09 03:57:03.313714 I | etcdmain: Go OS/Arch: linux/amd64
2022-01-09T03:57:03.314169682+00:00 stderr F 2022-01-09 03:57:03.314123 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8
2022-01-09T03:57:03.314465634+00:00 stderr F 2022-01-09 03:57:03.314419 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2022-01-09T03:57:03.314556318+00:00 stderr F [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2022-01-09T03:57:03.314692532+00:00 stderr F 2022-01-09 03:57:03.314653 I | embed: peerTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-rhcpm01.osc01.nix.mds.xyz.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-rhcpm01.osc01.nix.mds.xyz.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = true, crl-file =
2022-01-09T03:57:03.316397899+00:00 stderr F 2022-01-09 03:57:03.316346 I | embed: pprof is enabled under /debug/pprof
2022-01-09T03:57:03.316718290+00:00 stderr F 2022-01-09 03:57:03.316672 I | embed: name = rhcpm01.osc01.nix.mds.xyz
2022-01-09T03:57:03.316802311+00:00 stderr F 2022-01-09 03:57:03.316772 I | embed: data dir = /var/lib/etcd
2022-01-09T03:57:03.316870825+00:00 stderr F 2022-01-09 03:57:03.316843 I | embed: member dir = /var/lib/etcd/member
2022-01-09T03:57:03.316935712+00:00 stderr F 2022-01-09 03:57:03.316909 I | embed: heartbeat = 100ms
2022-01-09T03:57:03.317001989+00:00 stderr F 2022-01-09 03:57:03.316976 I | embed: election = 1000ms
2022-01-09T03:57:03.317067282+00:00 stderr F 2022-01-09 03:57:03.317041 I | embed: snapshot count = 100000
2022-01-09T03:57:03.317145004+00:00 stderr F 2022-01-09 03:57:03.317115 I | embed: advertise client URLs = https://10.0.0.106:2379
2022-01-09T03:57:03.317216895+00:00 stderr F 2022-01-09 03:57:03.317189 I | embed: initial advertise peer URLs = https://10.0.0.106:2380
2022-01-09T03:57:03.317376984+00:00 stderr F 2022-01-09 03:57:03.317277 I | embed: initial cluster =
2022-01-09T03:57:05.539300775+00:00 stderr F 2022-01-09 03:57:05.539167 C | etcdmain: walpb: crc mismatch
[root@rhcpm01 etcd]# pwd
/var/log/pods/openshift-etcd_etcd-rhcpm01.osc01.nix.mds.xyz_bfb6a49c-8a50-482e-bbea-d5c9d36a5703/etcd
[root@rhcpm01 etcd]#

I’m guessing (please correct me if I’m wrong) that once etcd is working, the kubelet will see the certificates are expired and recreate them, regardless of how far past their expiration date they are. So I’m leaning towards the etcd issue as the blocking problem.

I was contemplating recreating the cluster, and that’s a very good point. However, I’m more interested in figuring this out and bringing things back up: it’ll be easier to do next time knowing a solution, a few more things can be learned along the way, and there will be more opportunity to improve on what’s there based on what’s found. And there is time.

Likewise, I’m interested in how fragile Kubernetes w/ OpenShift is under events like these that can cause corruption: where and how things could break, and what fixes are possible to get things back up.

On a slightly separate note: if there are a number of applications on the cluster, some of which store data and settings, I’m guessing those will be lost unless I have some external DB or other external target where everything was kept. In other words, how would I get the data out if everything was fully contained within the Kubernetes cluster and now the cluster is down?

etcd has its own set of certificates for the members to communicate securely. According to the documentation, these expire after 3 years, so they are unlikely to have expired, but it could be worth checking them (just in case).
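A rough way to check them (and every other cert under the static-pod-resources tree) from a master node; the path matches the one you mentioned, so adjust it if your layout differs:

# list the expiry date of every certificate under the static pod resources folder
find /etc/kubernetes/static-pod-resources -name '*.crt' | while read -r crt; do
  echo "== $crt"
  openssl x509 -noout -enddate -in "$crt"
done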

Red Hat recommends configuring an NTP server to avoid time differences between the nodes’ clocks. If you were able to “turn back the clock”, you should check that your servers’ clocks are in sync; if the difference is above a certain threshold, it may cause issues trusting the certificates.
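Assuming chrony is handling time on your RHCOS nodes (it is by default), a quick way to verify the clocks on each node:

timedatectl                # is the system clock marked as synchronized?
chronyc tracking           # current offset against the selected NTP source
chronyc sources -v         # configured time sources and their reachability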

The apiserver and etcd are static pods and I believe they should be started by the kubelet: kubeadm - Implementation details - Constants and well-known values and paths. (Later on that page there is a list of the steps taken by the kubeadm preflight check, followed by the list of certificates it creates (and their locations); this may be useful when debugging your cluster.)
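As a quick sanity check, the static pod manifests the kubelet is expected to start should be visible directly on the master’s filesystem; the path below is the standard OCP 4 / RHCOS location, so adjust it if your layout differs:

# should list the etcd and kube-apiserver (plus controller-manager/scheduler) static pod manifests
ls -l /etc/kubernetes/manifests/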

The kubelet reads the manifests for the static pods from the filesystem, not from etcd, so failing to connect to the etcd database cannot be the cause of the kubelet not starting. I’ve been able to find this document on how the kubelet starts in the kops documentation: kubelet start

Kubelet starts up, starts (and restarts) all the containers in /etc/kubernetes/manifests.

It also tries to contact the API server (which the master kubelet will itself eventually start), register the node. Once a node is registered, kube-controller-manager will allocate it a PodCIDR, which is an allocation of the k8s-network IP range. kube-controller-manager updates the node, setting the PodCIDR field. Once kubelet sees this allocation, it will set up the local bridge with this CIDR, which allows docker to start. Before this happens, only pods that have hostNetwork will work - so all the “core” containers run with hostNetwork=true.

This is interesting, as it seems that if the node is not able to register, the kube-controller-manager will not allocate the PodCIDR, which I guess will make the pods unable to get an IP and communicate with each other.

It’s not clear what happens on a control plane node, but according to

[the kubelet] tries to contact the API server (which the master kubelet will itself eventually start)

it seems that the kubelet that starts the api-server on a “master” node should be able to register the “local node” with the apiserver it started itself.

If that’s not happening, something may be preventing the kubelet from communicating with the API Server… I think the kubelet authenticates to the apiserver using a certificate signed by the cluster’s CA; maybe when the cluster started (with the clock set in the present, i.e. 2022), it “flagged” the kubelet’s certificate as expired and kept a record of it being expired. If that’s the way it works, setting the clock in the past will not “un-expire” the certificate…

As I don’t know how the apiserver manages expired certificates, the best course of action would be to assume the certificate is expired and (try to) follow the official documentation: Troubleshooting - Kubelet client certificate rotation fails. You could check whether that’s the way to go by observing the logs of the kube-apiserver:

you might see errors such as x509: certificate has expired or is not yet valid in kube-apiserver logs.

The problem is that the process described in the documentation requires running commands on a “healthy” control plane node, and you don’t have one. But you should be able to manually create a certificate for the kubelet, sign it with the CA certificate, and start the kubelet using this “trusted” certificate…
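A rough openssl sketch of that manual step, assuming you have located the signing CA’s certificate and key on a master (ca.crt and ca.key below are placeholders for wherever they actually live; the node name is taken from your logs):

# 1. generate a key and a CSR using the usual node identity convention (group system:nodes)
openssl genrsa -out kubelet-client.key 2048
openssl req -new -key kubelet-client.key -out kubelet-client.csr \
  -subj "/O=system:nodes/CN=system:node:rhcpm01.osc01.nix.mds.xyz"

# 2. sign the CSR with the cluster CA
openssl x509 -req -in kubelet-client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out kubelet-client.crt -days 365

You would then have to point the kubelet’s configuration/kubeconfig at the new key pair, which is the part that will be specific to your setup.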

From the logs you provide, the etcd member seems to recognize that it’s part of an existing cluster and it tries to connect to the other members of the cluster:

failed to create etcd client, but the server is already initialized as member "rhcpm01.osc01.nix.mds.xyz"

…but connecting to the other etcd cluster members fails:

Error while dialing dial tcp 10.0.0.107:2379: connect: connection refused
...
ALL_ETCD_ENDPOINTS=https://10.0.0.106:2379,https://10.0.0.108:2379,https://10.0.0.107:2379

The message Waiting for ports 2379, 2380 and 9978 to be released comes from etcd/pod.yaml from the cluster-etcd-operator source code.

It is consistent with the pod network being down (although it’s not conclusive).

I would focus on checking the kubelet, making sure it starts (or at least tries to), and looking into the kubelet’s logs to see why it fails.
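For example, directly on the master:

systemctl status kubelet                         # is the unit active, failed, or crash-looping?
journalctl -u kubelet -b --no-pager | tail -200  # last kubelet log lines from the current boot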

At the end of the log you provided there’s a message about a crc mismatch from etcd:

etcdmain: walpb: crc mismatch

This may be BAD NEWS, according to Discussion: etcd: walpb: crc mismatch and ETCD data gets corrupted with error "read wal error (walpb: crc mismatch)"… There is a similar message in etcd fails with error “C | etcdserver: read wal error (wal: crc mismatch) and cannot be repaired” (requires access to the Red Hat Support site), which points in the same direction:

  • This issue might occur due to a CRC mismatch that can happen if there is bit rot on disk or a file-system corruption. From the etcd logs, it was seen that the WAL file is broken, which means that the WAL file has been corrupted by filesystem issues, etc.

You mentioned that the cluster you are trying to recover failed some time ago; do you remember if there was an outage or some other unclean shutdown that could affect the database?

Likewise, I’m interested in how fragile Kubernetes w/ OpenShift is under events like these that can cause corruption: where and how things could break, and what fixes are possible to get things back up.

Have you ever heard of this guy, Murphy :wink: ? That’s why backups and disaster recovery plans are not optional. Things fail; as they say, it’s not a matter of “if”, but “when”.

As you have access to the etcd data dir (ETCD_DATA_DIR=/var/lib/etcd), you may be able to copy the contents of the folder and try to recreate (or recover) the manifests of everything you had deployed on the cluster. Maybe you could use Backing up etcd data as a reference.
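Since etcd isn’t healthy enough for a proper snapshot, a simple cold copy of the data dir (with etcd stopped) at least preserves what is currently on disk; the destination path below is just an example:

# cold copy of the etcd data dir, preserving ownership and timestamps
mkdir -p /root/etcd-backup
cp -a /var/lib/etcd/member /root/etcd-backup/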

Applications running in your cluster store data in volumes. Depending on your storage backend, your data might already be safe somewhere outside the cluster. If your volumes were “local”, it will depend on how data is stored in your applications’ volumes. If you deployed a containerized NFS server pinned to a “storage node”, for example, your NFS-exported folders will be sitting as regular folders in that “storage node” filesystem.

With CoreOS this might be a little more complicated than that, but the point is that it MAY be possible to get data back just by copying folders; containers are just “regular” Linux processes isolated from other processes. If you had an application that needed a license.key file to run and the only place it was “saved” was in one of the containers deployed on the “failed” cluster, you may be able to get it back by deep-diving into your worker’s filesystem, “navigating” to the folder where the container’s files live and copying it to another Linux box.
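A rough sketch of how you might locate those files on a node with crictl (the container name is a placeholder; the overlay path is the usual CRI-O storage location, so adjust if yours differs):

# find the container (running or exited) that held the file
crictl ps -a | grep -i myapp                      # "myapp" is a placeholder

# the inspect output (JSON) includes the host paths of the container's volume mounts
crictl inspect <container-id> | grep -i -A3 mounts

# CRI-O normally keeps the container/image layers under:
ls /var/lib/containers/storage/overlay/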

To be safe, I would start backing up the etcd database and all your applications.

I would recommend using Velero for backing up your applications. It was developed by Joe Beda (one of the “creators” of Kubernetes) at Heptio (acquired by VMware), but it’s free and open source. Red Hat uses it to back up and migrate clusters as part of the Migration Toolkit for Containers, and it’s an easy-to-use and solid solution. Most tutorials and guides on how to configure Velero use AWS S3 buckets, although other options are available; if you want to try it, the easiest way would be to use MinIO to get S3 API-compatible on-prem “buckets”.
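As a sketch of what that could look like once the cluster is healthy again (the bucket name, MinIO URL, plugin version and namespace below are placeholders taken from the Velero MinIO examples, not anything specific to your cluster):

# install Velero pointing at an on-prem MinIO "S3" endpoint
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.5.0 \
  --bucket velero \
  --secret-file ./credentials-velero \
  --use-volume-snapshots=false \
  --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.example.com:9000

# back up a single application namespace and check its status
velero backup create myapp-backup --include-namespaces myapp
velero backup get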

Best regards,

Xavi

Hey Xavi, you know where I’ll be for the next few days. :slight_smile: Reading all the material thoroughly!

Once more, you’ve surpassed my expectations! This is awesome! Thanks very much for all this! Updates to follow!


This is a 3-master, 3-worker Kubernetes w/ OpenShift cluster. The storage is SAN, and that’s where the VMs sit. It’s a LAB cluster, so built on-the-cheap. At some point the SAN storage kernel panicked, causing the storage to just disappear from under the running VMs. So in all likelihood, while etcd was writing to disk, the write was cut off halfway through. This seems to be the Achilles’ heel of etcd: it’s used in other solutions as well, with the same disastrous consequences when storage fails, since etcd is very much storage dependent and requires near perfection from the storage it is using.

I’m noticing that if some of the nodes had been on DAS instead of SAN, the same outage would have been unlikely to cause corruption; caching would likely have ensured some consistency in the writes.

The challenge I’m facing is to find the etcd WAL files and storage locations without using oc, since oc is in turn dependent on etcd, which depends on the storage, making oc also storage dependent. :wink:
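Since the log above already shows ETCD_DATA_DIR=/var/lib/etcd and etcd keeps a fixed layout inside its data dir, I should be able to locate the files directly on the node, without oc or etcdctl; something along these lines:

ls -l /var/lib/etcd/member/wal/    # the write-ahead log files (*.wal) the crc mismatch refers to
ls -l /var/lib/etcd/member/snap/   # snapshots and the bbolt "db" file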

[root@rhcpm01 ~]# crictl ps -a|grep -Ei "container|etcd"
CONTAINER           IMAGE                                                              CREATED              STATE               NAME                                          ATTEMPT             POD ID
33750cde6aac3       91965ed632cd91f053532cefc7d8b853a2bb5b8b6d1768c8ad473fe404b9338d   4 minutes ago        Exited              etcd                                          25                  e39b70a1c5ee8
061f360006c4d       91965ed632cd91f053532cefc7d8b853a2bb5b8b6d1768c8ad473fe404b9338d   2 hours ago          Running             etcd-metrics                                  0                   e39b70a1c5ee8
2091c946709d8       91965ed632cd91f053532cefc7d8b853a2bb5b8b6d1768c8ad473fe404b9338d   2 hours ago          Running             etcdctl                                       0                   e39b70a1c5ee8
755376b423810       91965ed632cd91f053532cefc7d8b853a2bb5b8b6d1768c8ad473fe404b9338d   2 hours ago          Exited              etcd-resources-copy                           0                   e39b70a1c5ee8
f28be6be80fae       91965ed632cd91f053532cefc7d8b853a2bb5b8b6d1768c8ad473fe404b9338d   2 hours ago          Exited              etcd-ensure-env-vars                          0                   e39b70a1c5ee8
[root@rhcpm01 ~]#
[root@rhcpm01 ~]#
[root@rhcpm01 ~]# crictl exec -i -t 2091c946709d8 bash  ^C
[root@rhcpm01 ~]#
[root@rhcpm01 ~]# crictl logs 2091c946709d8
/bin/bash: line 1:     8 Terminated              sleep infinity
/bin/bash: line 1: TERM: command not found
/bin/bash: line 1:     7 Terminated              sleep infinity
[root@rhcpm01 ~]#
[root@rhcpm01 ~]#
[root@rhcpm01 ~]# etcdctl member list -w table
-bash: etcdctl: command not found
[root@rhcpm01 ~]#
[root@rhcpm01 ~]#
[root@rhcpm01 ~]# crictl exec -i -t 2091c946709d8 bash
[root@rhcpm01 /]# etcdctl member list -w table
{"level":"warn","ts":"2022-04-03T21:45:50.186Z","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"endpoint://client-a59d3c7f-5b55-4012-a6bb-f54ddb0bb3e1/10.0.0.106:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial tcp 10.0.0.107:2379: connect: connection refused\""}
Error: context deadline exceeded
[root@rhcpm01 /]#

I’ll be looking to access the etcd docker (er, crictl) container filesystem instead.

Cheers,