Additional questions about the cgroup driver and how to change it

Hi,

I have some questions about the cgroup driver.

  1. Can I use different cgroup drivers on the master and worker nodes? Does that have any side effects?
  2. I recently switched the worker nodes to the recommended systemd driver (Kubernetes 1.14.0.3) by draining and deleting each node, then joining it again afterwards.
  3. I have a single master. How can I change the cgroup driver to systemd on the master? I have tried the following:

/etc/docker/daemon.json:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "2"
  },
  "exec-opts": ["native.cgroupdriver=systemd"]
}
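To verify that Docker actually picked up the new driver after a restart, I assume it can be queried with a Go template (the CgroupDriver field name appears in the Docker Info line from the journal below):

```shell
# Restart Docker so daemon.json takes effect, then query the active cgroup driver.
systemctl restart docker
docker info --format '{{.CgroupDriver}}'
# expected to print "systemd" once the exec-opts change is active
```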

/etc/sysconfig/kubelet:

KUBELET_EXTRA_ARGS=--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice

After that I rebooted the whole master server, but the kubelet will not start and logs this error to the journal:

Sep 12 09:07:01 kubernetes01 kubelet[2342]: I0912 09:07:01.843809    2342 docker_service.go:258] Docker Info: &{ID:EHMO:OUX6:PWLK:WXFB:DDCH:AD7P:ZJAW:ERXP:I3WS:E3MI:CWZD:MJ5T Containers:40 ContainersRunning:0 ContainersPaused:0 ContainersStopped:40 Images:387 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:37 SystemTime:2019-09-12T09:07:01.829064652+02:00 LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:3.10.0-957.27.2.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0003eb180 NCPU:2 MemTotal:4142743552 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy:http://proxy.internal.dtpublic.de:8080/ NoProxy:localhost,127.0.0.1,qdetjt,qdetju,kubernetes01,kubernetes02,kubernetes03,docker-registry Name:kubernetes01 Labels:[] ExperimentalBuild:false ServerVersion:18.09.6 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bb71b10fd8f58240ca47fbb579b9d1028eea7c84 Expected:bb71b10fd8f58240ca47fbb579b9d1028eea7c84} RuncCommit:{ID:2b18fe1d885ee5083ef9f0838fee39b62d653e30 Expected:2b18fe1d885ee5083ef9f0838fee39b62d653e30} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default]}
Sep 12 09:07:01 kubernetes01 kubelet[2342]: F0912 09:07:01.843879    2342 server.go:265] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
Sep 12 09:07:01 kubernetes01 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Sep 12 09:07:01 kubernetes01 systemd[1]: Unit kubelet.service entered failed state.
Sep 12 09:07:01 kubernetes01 systemd[1]: kubelet.service failed.
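Reading the error, it looks to me like the kubelet itself is still configured for cgroupfs while Docker now reports systemd, so I assume the kubelet's own driver has to be changed as well. This is only a guess on my side; on a kubeadm-managed node the driver seems to live in /var/lib/kubelet/config.yaml:

```shell
# Untested guess: make the kubelet's cgroup driver match Docker's.
# (Alternatively, --cgroup-driver=systemd could be appended to KUBELET_EXTRA_ARGS.)
sed -i 's/^cgroupDriver: cgroupfs$/cgroupDriver: systemd/' /var/lib/kubelet/config.yaml
systemctl daemon-reload
systemctl restart kubelet
```

Would that be the right way to do it, or does kubeadm overwrite this file on upgrades?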

Thanks, Andreas