• Category: Q&A

• Topic: DevOps · Infrastructure

• Status: Unresolved

Hello, when I run the kubelet its cgroup driver comes up as cgroupfs

Posted 2020-06-29 10:09 · 4.77k views


failed to run Kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"

The official docs said to look for the file at /etc/systemd/system/kubelet.service, but it wasn't there. After searching around I found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and added

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"

but the kubelet cgroup driver still comes up as cgroupfs. What should I do?
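(One possible reason the edit has no effect: on kubeadm 1.18 the drop-in's ExecStart typically expands only $KUBELET_KUBECONFIG_ARGS, $KUBELET_CONFIG_ARGS, $KUBELET_KUBEADM_ARGS, and $KUBELET_EXTRA_ARGS, so a newly defined variable such as KUBELET_CGROUP_ARGS may never be read. A minimal sketch of the KUBELET_EXTRA_ARGS route instead, assuming the stock CentOS kubeadm packages; verify against your own drop-in:)

    # /etc/sysconfig/kubelet  (Debian-based systems use /etc/default/kubelet instead)
    KUBELET_EXTRA_ARGS=--cgroup-driver=systemd

    # reload unit files and restart the kubelet so the flag takes effect
    systemctl daemon-reload
    systemctl restart kubelet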

Here is the log from running the kubelet:

I0629 10:26:20.043880    1157 server.go:417] Version: v1.18.4

I0629 10:26:20.044141    1157 plugins.go:100] No cloud provider specified.

W0629 10:26:20.044171    1157 server.go:560] standalone mode, no API client

W0629 10:26:20.052017    1157 container_manager_linux.go:912] CPUAccounting not enabled for pid: 1157

W0629 10:26:20.052031    1157 container_manager_linux.go:915] MemoryAccounting not enabled for pid: 1157

W0629 10:26:20.144960    1157 server.go:474] No api server defined - no events will be sent to API server.

I0629 10:26:20.144996    1157 server.go:647] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /

I0629 10:26:20.145378    1157 container_manager_linux.go:266] container manager verified user specified cgroup-root exists: []

I0629 10:26:20.145394    1157 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}

I0629 10:26:20.145511    1157 topology_manager.go:126] [topologymanager] Creating topology manager with none policy

I0629 10:26:20.145520    1157 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none policy

I0629 10:26:20.145526    1157 container_manager_linux.go:306] Creating device plugin manager: true

I0629 10:26:20.145904    1157 client.go:75] Connecting to docker on unix:///var/run/docker.sock

I0629 10:26:20.145924    1157 client.go:92] Start docker client with request timeout=2m0s

W0629 10:26:20.153507    1157 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"

I0629 10:26:20.153534    1157 docker_service.go:238] Hairpin mode set to "hairpin-veth"

W0629 10:26:20.153647    1157 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d

I0629 10:26:20.159266    1157 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op

I0629 10:26:20.174650    1157 docker_service.go:258] Docker Info: &{ID:T2I3:EWJ7:VEJY:RXBX:DVXY:MVVL:GZNX:YXOU:PIPL:QAEP:EEHU:VNPV Containers:47 ContainersRunning:16 ContainersPaused:0 ContainersStopped:31 Images:56 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:109 OomKillDisable:true NGoroutines:104 SystemTime:2020-06-29T10:26:20.161085127+09:00 LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:3.10.0-1062.18.1.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0002d05b0 NCPU:4 MemTotal:8168255488 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:k8shost Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[]}

F0629 10:26:20.174747    1157 server.go:274] failed to run Kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"

5 Answers



우핫 (question author) · 2020.06.29

Thanks for the answer.

Running systemctl status kubelet gives the output below, and the 10-kubeadm.conf file was in the directory you mentioned after all.

Thank you. I think I'll wipe it and redo everything step by step.

Really appreciate the answer!

[root@k8shost kubelet.service.d]# systemctl status kubelet

* kubelet.service - kubelet: The Kubernetes Node Agent

   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)

  Drop-In: /etc/systemd/system/kubelet.service.d

           `-10-kubeadm.conf

   Active: active (running) since Mon 2020-06-29 18:17:48 KST; 5ms ago

     Docs: http://kubernetes.io/docs/

 Main PID: 14214 (kubelet)

    Tasks: 1

   Memory: 364.0K

   CGroup: /system.slice/kubelet.service

           `-14214 /var/lib/minikube/binaries/v1.18.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-...
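(To see exactly which file injects --cgroup-driver=cgroupfs here, one standard check is to print the unit together with all of its drop-ins and compare against the live process; a sketch using stock systemd/procps commands, not specific to this setup:)

    # kubelet.service plus every drop-in, in the order systemd applies them
    systemctl cat kubelet

    # the arguments the running kubelet was actually started with
    ps -o args= -p $(pgrep -x kubelet)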


A few things look odd here.

If you run systemctl status kubelet, isn't the 10-kubeadm.conf file under /usr/lib/systemd/system/kubelet.service.d?

(I checked the configuration of a fresh install of the latest version, and judging by your log you're on the latest 1.18.5, yet that file location differs from what I see.)

Still, if the path shown by systemctl status kubelet matches your file, that part looks fine.

----

Also, is there a reason you're running the minikube start --driver=none command? When installing Kubernetes this way, there's no need to use minikube at all.

----

I also wonder whether the settings on that server got tangled up from repeatedly installing and removing things.
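(If minikube start --driver=none does have to be used: minikube regenerates the kubelet flags on every start, which would explain the value reverting. A commonly cited override, assuming minikube's --extra-config option behaves as documented for this version:)

    minikube start --driver=none --extra-config=kubelet.cgroup-driver=systemd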


우핫 (question author) · 2020.06.29

Thanks for the answer! I followed the steps from your blog post and it worked fine at first.

The guide I used was the "My PC + VirtualBox (Network: Bridge)" one.

To try creating master and worker nodes, I ran kubeadm reset and then ran

kubeadm init --pod-network-cidr=20.96.0.0/12

again, which produced the log below. It looked like a kubelet problem, and checking showed the cgroup driver settings differed, but even after matching them and retrying I got the same error, which is why I asked.

After editing /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to systemd and then running minikube start --driver=none, the setting seems to revert to cgroupfs.
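(kubeadm init likewise rewrites the kubelet flag file on every run, as the log below shows, so a quick post-init check is to inspect what it wrote; a diagnostic sketch using the paths from that log:)

    # flags kubeadm passed to the kubelet on the last init
    cat /var/lib/kubelet/kubeadm-flags.env

    # the cgroup driver declared in the kubelet config file, if any
    grep cgroupDriver /var/lib/kubelet/config.yaml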

---------------------------------------------------------------------------------

[root@k8shost etcd]# kubeadm init --pod-network-cidr=20.96.0.0/12

W0629 14:31:27.403504   13298 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]

[init] Using Kubernetes version: v1.18.5

[preflight] Running pre-flight checks

[preflight] Pulling images required for setting up a Kubernetes cluster

[preflight] This might take a minute or two, depending on the speed of your internet connection

[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Starting the kubelet

[certs] Using certificateDir folder "/etc/kubernetes/pki"

[certs] Generating "ca" certificate and key

[certs] Generating "apiserver" certificate and key

[certs] apiserver serving cert is signed for DNS names [k8shost kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.53]

[certs] Generating "apiserver-kubelet-client" certificate and key

[certs] Generating "front-proxy-ca" certificate and key

[certs] Generating "front-proxy-client" certificate and key

[certs] Generating "etcd/ca" certificate and key

[certs] Generating "etcd/server" certificate and key

[certs] etcd/server serving cert is signed for DNS names [k8shost localhost] and IPs [192.168.0.53 127.0.0.1 ::1]

[certs] Generating "etcd/peer" certificate and key

[certs] etcd/peer serving cert is signed for DNS names [k8shost localhost] and IPs [192.168.0.53 127.0.0.1 ::1]

[certs] Generating "etcd/healthcheck-client" certificate and key

[certs] Generating "apiserver-etcd-client" certificate and key

[certs] Generating "sa" key and public key

[kubeconfig] Using kubeconfig folder "/etc/kubernetes"

[kubeconfig] Writing "admin.conf" kubeconfig file

[kubeconfig] Writing "kubelet.conf" kubeconfig file

[kubeconfig] Writing "controller-manager.conf" kubeconfig file

[kubeconfig] Writing "scheduler.conf" kubeconfig file

[control-plane] Using manifest folder "/etc/kubernetes/manifests"

[control-plane] Creating static Pod manifest for "kube-apiserver"

[control-plane] Creating static Pod manifest for "kube-controller-manager"

W0629 14:31:32.543009   13298 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"

[control-plane] Creating static Pod manifest for "kube-scheduler"

W0629 14:31:32.544957   13298 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"

[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

[kubelet-check] Initial timeout of 40s passed.

[kubelet-check] It seems like the kubelet isn't running or healthy.

[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.


Searching for that error turns up the solution below.

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/#kubeadm-blocks-waiting-for-control-plane-during-installation

kubeadm blocks waiting for control plane during installation

If you notice that kubeadm init hangs after printing the following line:

[apiclient] Created API client, waiting for the control plane to become ready

this may be caused by a number of problems. The most common are:

  • Network connection problems. Check that your machine has full network connectivity before continuing.

  • The default cgroup driver configuration for the kubelet differs from the one used by Docker. Check the system log file (e.g. /var/log/message) or examine the output from journalctl -u kubelet. If you see something like the following:

    error: failed to run Kubelet: failed to create kubelet:
    misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
    

    There are two common ways to fix the cgroup driver problem:

  1. Install Docker again, following the instructions at the linked page.

  2. Change the kubelet configuration to manually match the Docker cgroup driver; see "Configure cgroup driver used by kubelet on Master Node" (and the sketch after this list).

  • Control plane Docker containers are crashlooping or hanging. You can check this by running docker ps and investigating each container by running docker logs.
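(For this thread's case, where Docker already reports CgroupDriver:systemd, the simpler direction of option 2 is usually to keep Docker on systemd and point the kubelet at the same driver. A sketch, assuming the standard Docker and kubeadm file locations:)

    # /etc/docker/daemon.json -- keep Docker on the systemd driver
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }

    # /var/lib/kubelet/config.yaml -- make the kubelet match (add the line if absent)
    cgroupDriver: systemd

    # apply both changes
    systemctl daemon-reload
    systemctl restart docker kubelet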


Hello.

What were you doing when this error occurred?

If it happened during installation, which guide are you following?