• Category

    Q&A

  • Topic

    DevOps · Infrastructure

  • Status

    Unresolved

Error when running kubeadm init after kubeadm reset

Posted 2020.10.22 16:29 · 786 views

After a kubeadm reset, running init again produces the following:

[root@itsm-ci ~]# kubeadm init
W1022 16:20:59.529935    3584 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: missing optional cgroups: pids
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [itsm-ci kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.73]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [itsm-ci localhost] and IPs [10.0.0.73 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [itsm-ci localhost] and IPs [10.0.0.73 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

Looking through the logs didn't tell me much either, so I'm asking here.
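(For reference, a kubelet log like the one below can be pulled with the commands the init error itself suggests; the tail is just to keep the output readable:)

systemctl status kubelet                        # is the kubelet service running at all?
journalctl -xeu kubelet --no-pager | tail -100  # recent kubelet log entries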

I1022 16:12:24.517130   28853 server.go:411] Version: v1.19.3
I1022 16:12:24.517630   28853 server.go:831] Client rotation is on, will bootstrap in background
I1022 16:12:24.522119   28853 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
I1022 16:12:24.524175   28853 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
W1022 16:12:24.652251   28853 nvidia.go:61] NVIDIA GPU metrics will not be available: no NVIDIA devices found
I1022 16:12:24.683063   28853 server.go:640] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
I1022 16:12:24.683903   28853 container_manager_linux.go:276] container manager verified user specified cgroup-root exists: []
I1022 16:12:24.683963   28853 container_manager_linux.go:281] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:d
I1022 16:12:24.684185   28853 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
I1022 16:12:24.684204   28853 container_manager_linux.go:311] [topologymanager] Initializing Topology Manager with none policy
I1022 16:12:24.684216   28853 container_manager_linux.go:316] Creating device plugin manager: true
I1022 16:12:24.684341   28853 client.go:77] Connecting to docker on unix:///var/run/docker.sock
I1022 16:12:24.684363   28853 client.go:94] Start docker client with request timeout=2m0s
W1022 16:12:24.698575   28853 docker_service.go:565] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I1022 16:12:24.698631   28853 docker_service.go:241] Hairpin mode set to "hairpin-veth"
W1022 16:12:24.698810   28853 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
W1022 16:12:24.705122   28853 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
I1022 16:12:24.705222   28853 docker_service.go:256] Docker cri networking managed by cni
W1022 16:12:24.705328   28853 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
I1022 16:12:24.738463   28853 docker_service.go:261] Docker Info: &{ID:QG4V:H44H:7F2H:7W5N:DH3H:YTPU:VC7A:NBXY:HDOO:OZXV:NZYE:6SSC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersSto
ror: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fba4e9a7d01810a393d5d25a3621dc101981175 Expect
Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.]}
I1022 16:12:24.738619   28853 docker_service.go:274] Setting cgroupDriver to systemd
I1022 16:12:24.762240   28853 remote_runtime.go:59] parsed scheme: ""
I1022 16:12:24.762276   28853 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
I1022 16:12:24.762325   28853 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <nil> <nil>}
I1022 16:12:24.762351   28853 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1022 16:12:24.762445   28853 remote_image.go:50] parsed scheme: ""
I1022 16:12:24.762463   28853 remote_image.go:50] scheme "" not registered, fallback to default scheme
I1022 16:12:24.762483   28853 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <nil> <nil>}
I1022 16:12:24.762495   28853 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1022 16:12:24.762553   28853 kubelet.go:261] Adding pod path: /etc/kubernetes/manifests
I1022 16:12:24.762627   28853 kubelet.go:273] Watching apiserver
E1022 16:12:24.764326   28853 reflector.go:127] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.0.0.73:6443/api/v1/pods?fieldSelector=spec.nodeName%3Ditsm-ci&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused
E1022 16:12:24.765110   28853 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.73:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused
E1022 16:12:24.765661   28853 reflector.go:127] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Ditsm-ci&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused

It just keeps saying connection refused, and I can't figure out the cause.
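The one line in the kubelet log that stands out to me is "Setting cgroupDriver to systemd". I haven't ruled out a cgroup driver mismatch: if Docker itself is on cgroupfs while the kubelet expects systemd, the kubelet can fail to start in exactly this way. Checking:

docker info 2>/dev/null | grep -i "cgroup driver"   # expect: Cgroup Driver: systemd
# If it prints cgroupfs instead, a common fix is to switch Docker to systemd:
#   /etc/docker/daemon.json  ->  { "exec-opts": ["native.cgroupdriver=systemd"] }
#   systemctl daemon-reload && systemctl restart docker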

Environment: CentOS 7, SELinux set to permissive, the firewall completely off, and swap disabled (swapoff).
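(These can all be double-checked with standard CentOS 7 commands:)

getenforce                      # should print Permissive
systemctl is-active firewalld   # should print inactive
swapon -s                       # empty output means swap is off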

Even following along exactly as in the video, it just doesn't work.
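One more thing I'm wondering about: kubeadm reset itself warns that it does not clean up CNI configuration, iptables/IPVS rules, or the kubeconfig files. Would clearing those manually before the next init help? Roughly:

rm -rf /etc/cni/net.d     # leftover CNI config (my log shows none found here, though)
rm -rf $HOME/.kube        # stale kubeconfig from the previous cluster
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
systemctl restart docker kubelet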

2 Answers


kbh (question author) · 2020.10.25

I already installed it following the URL you linked; it's also shown in the lecture videos.

The installation method looks the same too, apart from the kubeadm, kubelet, and kubectl steps being split out per OS.

On a clean server with no traces of a previous install, it works fine for me as well.

The problem occurs when I attempt init after a kubeadm reset.

The firewall is completely down, and none of the ports show up with grep.

It probably follows from init failing, but the API service that uses 6443 doesn't seem to come up at all.
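Next I'll try the container-level checks the init output suggests, to see whether the apiserver container is even being created:

docker ps -a | grep kube | grep -v pause   # are any control-plane containers created at all?
docker logs CONTAINERID                    # then inspect the failing container's output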


gasbugs · 2020.10.25

Hello, this is the instructor, 최일선.

CentOS has a somewhat different installation procedure, so I recommend following the instructions at the link below carefully:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

As I recall, CentOS needs a few extra settings adjusted, but it worked fine when I installed it myself.
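From memory, the extra steps on that page for CentOS are the bridge-netfilter sysctls and SELinux; this is a sketch from recollection, so please verify against the linked docs:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config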

If you're new to Kubernetes, I'd suggest following along on Ubuntu as in the course, if possible.

It would also be worth using the netstat -antp command to check whether the ports are properly open!
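For example, once the apiserver comes up you should see it listening on 6443:

netstat -antp | grep 6443   # kube-apiserver should appear in LISTEN state
ss -lntp | grep 6443        # alternative if net-tools is not installed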

Thank you.