qwedcxzas80779
@qwedcxzas80779
Reviews Written: 9
Average Rating: 5.0
Posts
Q&A
CRI error while building a Kubernetes cluster
I judged that the error was happening because even after I set the kubelet cgroup driver to systemd, it kept switching back to cgroupfs, so I edited the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file. For now I matched both Docker and Kubernetes to cgroupfs. Then, on a VM with a bridged network, I ran kubeadm init --pod-network-cidr=20.96.0.0/12 and got this:

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

When I run kubelet by itself from the CLI, the log shows:

Failed to get system container stats for "/user.slice/user-0.slice/session-3180.scope": failed to get cgroup stats for "/user.slice/user-0.slice/session-3180.scope": failed to get container info for "/user.slice/user-0.slice/session-3180.scope": unknown container "/user.slice/user-0.slice/session-3180.scope"

Could this also be part of the problem? Googling didn't get me anywhere, so I'm asking here.
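For reference, the usual fix for this kind of mismatch is to pin Docker and the kubelet to the same cgroup driver, and on a systemd-based distro like CentOS 7 the commonly recommended choice is systemd rather than cgroupfs. A minimal sketch, assuming Docker is the container runtime — `native.cgroupdriver` is Docker's documented exec option, and `/etc/sysconfig/kubelet` is the environment file the kubelet RPM package reads:

```
# /etc/docker/daemon.json — make Docker use the systemd cgroup driver
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

# /etc/sysconfig/kubelet — match the kubelet to the same driver
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd
```

After changing both files, a `systemctl daemon-reload` followed by `systemctl restart docker kubelet`, and then a fresh `kubeadm reset` / `kubeadm init`, are typically needed for the new driver to take effect.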
Q&A
CRI error while building a Kubernetes cluster
It's CentOS 7.8; the master and worker nodes are on the same versions.

           Master      Worker
os         centos7.8   centos7.8
docker     19.03.12    19.03.12
minikube   1.9.0       1.11
kubectl    1.18.4      1.18.4
kubelet    1.18.4      1.18.4

I'm not sure which of these you were asking about, so I'm posting them all for now. When I studied this before I used minikube — could that also be a possible cause? Even after editing the config file with vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, the kubelet settings seem to get overwritten again: I change cgroupfs to systemd, but it reverts to cgroupfs and the kubelet errors out.
Q&A
Hello — the kubelet cgroup driver option comes out as cgroupfs
Thank you for the answer. Running the systemctl status kubelet command gives the output below, and the 10-kubeadm.conf file was in the directory you mentioned. I think I'll have to start over and work through it step by step. Thanks again for the answer :)

[root@k8shost kubelet.service.d]# systemctl status kubelet
* kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           `-10-kubeadm.conf
   Active: active (running) since Mon 2020-06-29 18:17:48 KST; 5ms ago
     Docs: http://kubernetes.io/docs/
 Main PID: 14214 (kubelet)
    Tasks: 1
   Memory: 364.0K
   CGroup: /system.slice/kubelet.service
           `-14214 /var/lib/minikube/binaries/v1.18.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-...
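A detail worth noticing in the status output above: the running binary is /var/lib/minikube/binaries/v1.18.0/kubelet with --cgroup-driver=cgroupfs passed directly on the command line. That suggests minikube start --driver=none rewrote the kubelet service with its own flags, which would explain why hand edits keep reverting. A sketch of a systemd drop-in that sets the driver via the standard environment variable — the drop-in filename 20-cgroup-driver.conf is illustrative, not an existing file:

```
# /etc/systemd/system/kubelet.service.d/20-cgroup-driver.conf (hypothetical name)
[Service]
Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=systemd"
```

Caveat: if minikube has replaced the unit's ExecStart with its own hard-coded flags, a drop-in like this will not be picked up; cleaning up minikube's state first (e.g. with minikube delete) before running kubeadm is usually necessary, since mixing minikube --driver=none and kubeadm means both tools fight over the same kubelet unit.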
Q&A
Hello — the kubelet cgroup driver option comes out as cgroupfs
Thank you for the answer! I followed the steps you posted on your blog exactly, but it didn't work. I was following the "My PC + VirtualBox (Network: Bridge)" guide. To recreate the worker and master nodes I ran kubeadm reset and then the kubeadm init --pod-network-cidr=20.96.0.0/12 command, but the log below appeared. The kubelet looked like the problem, and when I checked, the cgroup driver setting was different. I matched it and tried again, but the same error came up, so I'm asking. After changing it to systemd in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and then running minikube start --driver=none, it seems to switch back to cgroupfs.
---------------------------------------------------------------------------------
[root@k8shost etcd]# kubeadm init --pod-network-cidr=20.96.0.0/12
W0629 14:31:27.403504   13298 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8shost kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.53]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8shost localhost] and IPs [192.168.0.53 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8shost localhost] and IPs [192.168.0.53 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0629 14:31:32.543009   13298 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0629 14:31:32.544957   13298 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
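One way to sidestep the flag tug-of-war entirely: instead of passing --cgroup-driver on the command line, kubeadm can bake the driver into the kubelet config.yaml it already writes during the [kubelet-start] phase shown in the log above. A sketch of such a kubeadm config, assuming Kubernetes v1.18 (kubeadm API version v1beta2) and the same pod CIDR used here — the filename kubeadm-config.yaml is arbitrary:

```yaml
# kubeadm-config.yaml — use with: kubeadm init --config kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: 20.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```

With this, kubeadm writes cgroupDriver: systemd into /var/lib/kubelet/config.yaml itself, so the setting survives restarts as long as nothing else (such as minikube) rewrites the kubelet service afterwards.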