• Category: Q&A

• Subfield: DevOps · Infrastructure

• Status: Unresolved

Repeated restart errors after kubeadm init

Written 2022-12-19 15:33 · Edited 2022-12-19 15:49 · 1.17k views


Hello. Since I don't have enough local computing resources, I used AWS and ran kubeadm init. The init itself succeeded, but the control-plane pods in kube-system keep restarting.

Even the smallest hint toward resolving this error would be greatly appreciated!

From the logs so far, my guess is that kube-apiserver restarts first, and then every other control-plane component restarts as well because it can no longer reach the API server.

Environment

OS: Ubuntu Server 22.04 LTS (HVM), SSD Volume Type

CPU: 2 cores / Memory: 4 GB

Disk: root 10 GB / separate mount 20 GB (space used by Docker and Kubernetes)
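(For reference, these specs meet kubeadm's documented minimum of 2 CPUs and 2 GB of RAM. The commands below are a generic way to double-check the resources and the data mount on the instance; they are not part of the original post.)

nproc              # CPU cores
free -h            # memory
df -h /container   # the separate 20 GB data mount
lsblk              # block devices and mount points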

Docker package versions

docker-ce=5:20.10.20~3-0~ubuntu-jammy \
docker-ce-cli=5:20.10.20~3-0~ubuntu-jammy \
containerd.io=1.6.8-1 \
docker-compose-plugin=2.12.0~ubuntu-jammy

Kubernetes package versions

kubelet=1.26.0-00 \
kubeadm=1.26.0-00 \
kubectl=1.26.0-00
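These version pins read like the arguments of apt-get install commands. A minimal sketch of how they would typically be installed and held, assuming the Docker and Kubernetes apt repositories are already configured on the instance (the repository setup itself is not shown in this post):

sudo apt-get update
sudo apt-get install -y \
    docker-ce=5:20.10.20~3-0~ubuntu-jammy \
    docker-ce-cli=5:20.10.20~3-0~ubuntu-jammy \
    containerd.io=1.6.8-1 \
    docker-compose-plugin=2.12.0~ubuntu-jammy
sudo apt-get install -y kubelet=1.26.0-00 kubeadm=1.26.0-00 kubectl=1.26.0-00
sudo apt-mark hold kubelet kubeadm kubectl   # keep the pinned versions from being upgraded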

 

  1. Already mounted a separate 20 GB disk at /container, the directory Docker and Kubernetes will use for their data.

     

  2. Modified /etc/docker/daemon.json:

    {
      "data-root": "/container/docker",
      "exec-opts": ["native.cgroupdriver=systemd"]
    }

  3. A CRI-related error occurred during kubeadm init; after some searching, I resolved it by editing /etc/containerd/config.toml (vi) and commenting out the line below:

    # disabled_plugins = ["cri"]

  4. Disabled the firewall:

    sudo ufw disable

  5. Applied the iptables/sysctl settings by following the link below (a consolidated sketch of steps 5 and 6 is included right after this list):

    https://kubernetes.io/ko/docs/setup/production-environment/container-runtimes/#ipv4%EB%A5%BC-%ED%8F%AC%EC%9B%8C%EB%94%A9%ED%95%98%EC%97%AC-iptables%EA%B0%80-%EB%B8%8C%EB%A6%AC%EC%A7%80%EB%90%9C-%ED%8A%B8%EB%9E%98%ED%94%BD%EC%9D%84-%EB%B3%B4%EA%B2%8C-%ED%95%98%EA%B8%B0

  6. Changed the disk space the kubelet uses, following the link below:

    https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/#the-kubelet-drop-in-file-for-systemd

    Added the following line to /etc/default/kubelet (via vi):

    KUBELET_EXTRA_ARGS=--root-dir="/container/k8s"

  7. kubeadm init

  8. kubeadm init --skip-phases=addon/kube-proxy

    I originally ran plain kubeadm init, but during repeated reset-and-init cycles the kube-proxy addon step kept failing, so the error logs below come from a run using the command above, which skips the kube-proxy phase.
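For reference, here is a consolidated sketch of what steps 5 and 6 typically amount to, based on the linked Kubernetes documentation and the kubelet drop-in shown above (the author's exact files are not attached, so treat the details as assumptions):

# Step 5: forward IPv4 and let iptables see bridged traffic (from the linked docs)
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system

# Step 6: move the kubelet's data directory onto the separate 20 GB mount
echo 'KUBELET_EXTRA_ARGS=--root-dir="/container/k8s"' | sudo tee /etc/default/kubelet
sudo systemctl restart kubelet   # only needed if the kubelet is already running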

 

I couldn't attach the error logs as a file, so I'm pasting them below!


ubuntu@ip-10-0-15-82:~$ kubectl get node

NAME STATUS ROLES AGE VERSION

master0 NotReady control-plane 7m10s v1.26.0

ubuntu@ip-10-0-15-82:~$ kubectl describe node master0

Name: master0

Roles: control-plane

Labels: beta.kubernetes.io/arch=amd64

beta.kubernetes.io/os=linux

kubernetes.io/arch=amd64

kubernetes.io/hostname=master0

kubernetes.io/os=linux

node-role.kubernetes.io/control-plane=

node.kubernetes.io/exclude-from-external-load-balancers=

Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock

node.alpha.kubernetes.io/ttl: 0

volumes.kubernetes.io/controller-managed-attach-detach: true

CreationTimestamp: Mon, 19 Dec 2022 06:03:24 +0000

Taints: node-role.kubernetes.io/control-plane:NoSchedule

node.kubernetes.io/not-ready:NoSchedule

Unschedulable: false

Lease:

HolderIdentity: master0

AcquireTime: <unset>

RenewTime: Mon, 19 Dec 2022 06:13:57 +0000

Conditions:

Type Status LastHeartbeatTime LastTransitionTime Reason Message

---- ------ ----------------- ------------------ ------ -------

MemoryPressure False Mon, 19 Dec 2022 06:13:52 +0000 Mon, 19 Dec 2022 06:03:21 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available

DiskPressure False Mon, 19 Dec 2022 06:13:52 +0000 Mon, 19 Dec 2022 06:03:21 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure

PIDPressure False Mon, 19 Dec 2022 06:13:52 +0000 Mon, 19 Dec 2022 06:03:21 +0000 KubeletHasSufficientPID kubelet has sufficient PID available

Ready False Mon, 19 Dec 2022 06:13:52 +0000 Mon, 19 Dec 2022 06:03:21 +0000 KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized

Addresses:

InternalIP: 10.0.15.82

Hostname: master0

Capacity:

cpu: 2

ephemeral-storage: 20470Mi

hugepages-2Mi: 0

memory: 4015088Ki

pods: 110

Allocatable:

cpu: 2

ephemeral-storage: 19317915617

hugepages-2Mi: 0

memory: 3912688Ki

pods: 110

System Info:

Machine ID: f8b760a7c2274e0cb62621465dbcab92

System UUID: ec21d23a-a384-2b77-91df-2f108bd6b565

Boot ID: 12f267e0-d0f3-4193-b84a-d7dbcfd74b2b

Kernel Version: 5.15.0-1026-aws

OS Image: Ubuntu 22.04.1 LTS

Operating System: linux

Architecture: amd64

Container Runtime Version: containerd://1.6.8

Kubelet Version: v1.26.0

Kube-Proxy Version: v1.26.0

Non-terminated Pods: (4 in total)

Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age

--------- ---- ------------ ---------- --------------- ------------- ---

kube-system etcd-master0 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 9m12s

kube-system kube-apiserver-master0 250m (12%) 0 (0%) 0 (0%) 0 (0%) 10m

kube-system kube-controller-manager-master0 200m (10%) 0 (0%) 0 (0%) 0 (0%) 9m7s

kube-system kube-scheduler-master0 100m (5%) 0 (0%) 0 (0%) 0 (0%) 9m16s

Allocated resources:

(Total limits may be over 100 percent, i.e., overcommitted.)

Resource Requests Limits

-------- -------- ------

cpu 650m (32%) 0 (0%)

memory 100Mi (2%) 0 (0%)

ephemeral-storage 0 (0%) 0 (0%)

hugepages-2Mi 0 (0%) 0 (0%)

Events:

Type Reason Age From Message

---- ------ ---- ---- -------

Normal Starting 10m kubelet Starting kubelet.

Warning InvalidDiskCapacity 10m kubelet invalid capacity 0 on image filesystem

Normal NodeHasSufficientMemory 10m kubelet Node master0 status is now: NodeHasSufficientMemory

Normal NodeHasNoDiskPressure 10m kubelet Node master0 status is now: NodeHasNoDiskPressure

Normal NodeHasSufficientPID 10m kubelet Node master0 status is now: NodeHasSufficientPID

Normal NodeAllocatableEnforced 10m kubelet Updated Node Allocatable limit across pods

Normal RegisteredNode 9m37s node-controller Node master0 event: Registered Node master0 in Controller

Normal RegisteredNode 7m10s node-controller Node master0 event: Registered Node master0 in Controller

Normal RegisteredNode 4m57s node-controller Node master0 event: Registered Node master0 in Controller

Normal RegisteredNode 3m11s node-controller Node master0 event: Registered Node master0 in Controller

Normal RegisteredNode 25s node-controller Node master0 event: Registered Node master0 in Controller

 

 

 

ubuntu@ip-10-0-15-82:~$ kubectl get po -A

NAMESPACE NAME READY STATUS RESTARTS AGE

kube-system coredns-787d4945fb-bkhkm 0/1 Pending 0 6m20s

kube-system coredns-787d4945fb-d4t28 0/1 Pending 0 6m20s

kube-system etcd-master0 1/1 Running 20 (78s ago) 5m56s

kube-system kube-apiserver-master0 1/1 Running 21 (2m22s ago) 7m19s

kube-system kube-controller-manager-master0 0/1 Running 25 (66s ago) 5m51s

kube-system kube-scheduler-master0 0/1 CrashLoopBackOff 25 (62s ago) 6m

ubuntu@ip-10-0-15-82:~$ kubectl logs -f kube-apiserver-master0 -n kube-system

I1219 06:08:44.052941 1 server.go:555] external host was not specified, using 10.0.15.82

I1219 06:08:44.053880 1 server.go:163] Version: v1.26.0

I1219 06:08:44.053954 1 server.go:165] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""

I1219 06:08:44.561040 1 shared_informer.go:273] Waiting for caches to sync for node_authorizer

I1219 06:08:44.562267 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.

I1219 06:08:44.562350 1 plugins.go:161] Loaded 12 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.

W1219 06:08:44.613792 1 genericapiserver.go:660] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.

I1219 06:08:44.615115 1 instance.go:277] Using reconciler: lease

I1219 06:08:44.882566 1 instance.go:621] API group "internal.apiserver.k8s.io" is not enabled, skipping.

I1219 06:08:45.267941 1 instance.go:621] API group "resource.k8s.io" is not enabled, skipping.

W1219 06:08:45.370729 1 genericapiserver.go:660] Skipping API authentication.k8s.io/v1beta1 because it has no resources.

W1219 06:08:45.370756 1 genericapiserver.go:660] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.

W1219 06:08:45.372993 1 genericapiserver.go:660] Skipping API authorization.k8s.io/v1beta1 because it has no resources.

W1219 06:08:45.377856 1 genericapiserver.go:660] Skipping API autoscaling/v2beta1 because it has no resources.

W1219 06:08:45.377876 1 genericapiserver.go:660] Skipping API autoscaling/v2beta2 because it has no resources.

W1219 06:08:45.381127 1 genericapiserver.go:660] Skipping API batch/v1beta1 because it has no resources.

W1219 06:08:45.383665 1 genericapiserver.go:660] Skipping API certificates.k8s.io/v1beta1 because it has no resources.

W1219 06:08:45.385890 1 genericapiserver.go:660] Skipping API coordination.k8s.io/v1beta1 because it has no resources.

W1219 06:08:45.385952 1 genericapiserver.go:660] Skipping API discovery.k8s.io/v1beta1 because it has no resources.

W1219 06:08:45.391568 1 genericapiserver.go:660] Skipping API networking.k8s.io/v1beta1 because it has no resources.

W1219 06:08:45.391585 1 genericapiserver.go:660] Skipping API networking.k8s.io/v1alpha1 because it has no resources.

W1219 06:08:45.393562 1 genericapiserver.go:660] Skipping API node.k8s.io/v1beta1 because it has no resources.

W1219 06:08:45.393581 1 genericapiserver.go:660] Skipping API node.k8s.io/v1alpha1 because it has no resources.

W1219 06:08:45.393641 1 genericapiserver.go:660] Skipping API policy/v1beta1 because it has no resources.

W1219 06:08:45.399482 1 genericapiserver.go:660] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.

W1219 06:08:45.399502 1 genericapiserver.go:660] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.

W1219 06:08:45.401515 1 genericapiserver.go:660] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.

W1219 06:08:45.401537 1 genericapiserver.go:660] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.

W1219 06:08:45.407674 1 genericapiserver.go:660] Skipping API storage.k8s.io/v1alpha1 because it has no resources.

W1219 06:08:45.413355 1 genericapiserver.go:660] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.

W1219 06:08:45.413374 1 genericapiserver.go:660] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.

W1219 06:08:45.419343 1 genericapiserver.go:660] Skipping API apps/v1beta2 because it has no resources.

W1219 06:08:45.419362 1 genericapiserver.go:660] Skipping API apps/v1beta1 because it has no resources.

W1219 06:08:45.421932 1 genericapiserver.go:660] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.

W1219 06:08:45.421951 1 genericapiserver.go:660] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.

W1219 06:08:45.424241 1 genericapiserver.go:660] Skipping API events.k8s.io/v1beta1 because it has no resources.

W1219 06:08:45.479788 1 genericapiserver.go:660] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.

I1219 06:08:46.357006 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"

I1219 06:08:46.357217 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"

I1219 06:08:46.357675 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/pki/apiserver.crt::/etc/kubernetes/pki/apiserver.key"

I1219 06:08:46.358125 1 secure_serving.go:210] Serving securely on [::]:6443

I1219 06:08:46.358242 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"

I1219 06:08:46.363285 1 gc_controller.go:78] Starting apiserver lease garbage collector

I1219 06:08:46.363570 1 controller.go:80] Starting OpenAPI V3 AggregationController

I1219 06:08:46.363829 1 controller.go:121] Starting legacy_token_tracking_controller

I1219 06:08:46.363850 1 shared_informer.go:273] Waiting for caches to sync for configmaps

I1219 06:08:46.363877 1 apf_controller.go:361] Starting API Priority and Fairness config controller

I1219 06:08:46.363922 1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/etc/kubernetes/pki/front-proxy-client.crt::/etc/kubernetes/pki/front-proxy-client.key"

I1219 06:08:46.364009 1 available_controller.go:494] Starting AvailableConditionController

I1219 06:08:46.364019 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller

I1219 06:08:46.358328 1 autoregister_controller.go:141] Starting autoregister controller

I1219 06:08:46.364040 1 cache.go:32] Waiting for caches to sync for autoregister controller

I1219 06:08:46.366773 1 controller.go:83] Starting OpenAPI AggregationController

I1219 06:08:46.367148 1 customresource_discovery_controller.go:288] Starting DiscoveryController

I1219 06:08:46.367616 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller

I1219 06:08:46.367725 1 shared_informer.go:273] Waiting for caches to sync for cluster_authentication_trust_controller

I1219 06:08:46.367881 1 apiservice_controller.go:97] Starting APIServiceRegistrationController

I1219 06:08:46.367970 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller

I1219 06:08:46.368112 1 crdregistration_controller.go:111] Starting crd-autoregister controller

I1219 06:08:46.368191 1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister

I1219 06:08:46.383719 1 controller.go:85] Starting OpenAPI controller

I1219 06:08:46.383786 1 controller.go:85] Starting OpenAPI V3 controller

I1219 06:08:46.383812 1 naming_controller.go:291] Starting NamingConditionController

I1219 06:08:46.383830 1 establishing_controller.go:76] Starting EstablishingController

I1219 06:08:46.383852 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController

I1219 06:08:46.383871 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController

I1219 06:08:46.383893 1 crd_finalizer.go:266] Starting CRDFinalizer

I1219 06:08:46.383978 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"

I1219 06:08:46.384084 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"

I1219 06:08:46.463884 1 shared_informer.go:280] Caches are synced for configmaps

I1219 06:08:46.463927 1 apf_controller.go:366] Running API Priority and Fairness config worker

I1219 06:08:46.463935 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process

I1219 06:08:46.464063 1 cache.go:39] Caches are synced for autoregister controller

I1219 06:08:46.465684 1 cache.go:39] Caches are synced for AvailableConditionController controller

I1219 06:08:46.469795 1 shared_informer.go:280] Caches are synced for crd-autoregister

I1219 06:08:46.470150 1 shared_informer.go:280] Caches are synced for node_authorizer

I1219 06:08:46.470302 1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller

I1219 06:08:46.470438 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller

I1219 06:08:46.479224 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io

I1219 06:08:47.060404 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).

I1219 06:08:47.370998 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.

W1219 06:09:28.894719 1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {

"Addr": "127.0.0.1:2379",

"ServerName": "127.0.0.1",

"Attributes": null,

"BalancerAttributes": null,

"Type": 0,

"Metadata": null

}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"

W1219 06:09:28.895017 1 logging.go:59] [core] [Channel #13 SubChannel #14] grpc: addrConn.createTransport failed to connect to {

"Addr": "127.0.0.1:2379",

"ServerName": "127.0.0.1",

"Attributes": null,

"BalancerAttributes": null,

"Type": 0,

"Metadata": null

}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"

=== (snipped) ===

W1219 06:12:22.066087 1 logging.go:59] [core] [Channel #16 SubChannel #17] grpc: addrConn.createTransport failed to connect to {

"Addr": "127.0.0.1:2379",

"ServerName": "127.0.0.1",

"Attributes": null,

"BalancerAttributes": null,

"Type": 0,

"Metadata": null

}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"

{"level":"warn","ts":"2022-12-19T06:12:22.345Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}

{"level":"warn","ts":"2022-12-19T06:12:24.346Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b41c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}

{"level":"warn","ts":"2022-12-19T06:12:26.352Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}

{"level":"warn","ts":"2022-12-19T06:12:27.457Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00354fc00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}

E1219 06:12:27.458799 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout

E1219 06:12:27.458820 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout

E1219 06:12:27.458843 1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 6.269µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>

E1219 06:12:27.460034 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout

I1219 06:12:27.461932 1 trace.go:219] Trace[630402872]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:9448a7a5-4c6b-490f-9aff-cd8384091228,client:10.0.15.82,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master0,user-agent:kubelet/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:PUT (19-Dec-2022 06:12:17.458) (total time: 10003ms):

Trace[630402872]: ["GuaranteedUpdate etcd3" audit-id:9448a7a5-4c6b-490f-9aff-cd8384091228,key:/leases/kube-node-lease/master0,type:*coordination.Lease,resource:leases.coordination.k8s.io 10003ms (06:12:17.458)

Trace[630402872]: ---"Txn call failed" err:context deadline exceeded 9998ms (06:12:27.458)]

Trace[630402872]: [10.003519094s] [10.003519094s] END

E1219 06:12:27.462368 1 timeout.go:142] post-timeout activity - time-elapsed: 3.532362ms, PUT "/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master0" result: <nil>

{"level":"warn","ts":"2022-12-19T06:12:28.352Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b41c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}

{"level":"warn","ts":"2022-12-19T06:12:30.242Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}

{"level":"warn","ts":"2022-12-19T06:12:30.359Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b41c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}

{"level":"warn","ts":"2022-12-19T06:12:32.365Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}

{"level":"warn","ts":"2022-12-19T06:12:34.366Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b41c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}

{"level":"warn","ts":"2022-12-19T06:12:34.905Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001c45180/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}

E1219 06:12:34.905188 1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded

E1219 06:12:34.905331 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout

E1219 06:12:34.906483 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout

E1219 06:12:34.907611 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout

I1219 06:12:34.909171 1 trace.go:219] Trace[1232755934]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:efcbbe67-217b-4534-8361-f0ca8603169e,client:10.0.15.82,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/etcd-master0,user-agent:kubelet/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:11:34.904) (total time: 60004ms):

Trace[1232755934]: [1m0.004852843s] [1m0.004852843s] END

E1219 06:12:34.909377 1 timeout.go:142] post-timeout activity - time-elapsed: 3.983518ms, GET "/api/v1/namespaces/kube-system/pods/etcd-master0" result: <nil>

{"level":"warn","ts":"2022-12-19T06:12:36.372Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}

{"level":"warn","ts":"2022-12-19T06:12:37.458Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00354fc00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}

E1219 06:12:37.459896 1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded

E1219 06:12:37.460058 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout

E1219 06:12:37.461117 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout

E1219 06:12:37.462667 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout

I1219 06:12:37.464323 1 trace.go:219] Trace[688853594]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:c49906de-6377-43e9-86c6-8f053f5ea689,client:10.0.15.82,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master0,user-agent:kubelet/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:12:27.458) (total time: 10005ms):

Trace[688853594]: [10.005594573s] [10.005594573s] END

E1219 06:12:37.464689 1 timeout.go:142] post-timeout activity - time-elapsed: 5.065927ms, GET "/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master0" result: <nil>

{"level":"warn","ts":"2022-12-19T06:12:37.984Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001ba8000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}

E1219 06:12:37.984376 1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded

E1219 06:12:37.984522 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout

E1219 06:12:37.985741 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout

I1219 06:12:37.987578 1 controller.go:615] quota admission added evaluator for: namespaces

E1219 06:12:37.988356 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout

I1219 06:12:37.990053 1 trace.go:219] Trace[931836157]: "Get" accept:application/vnd.kubernetes.protobuf, /,audit-id:90475625-91a7-4e3d-b74c-4c8971819dd4,client:::1,protocol:HTTP/2.0,resource:namespaces,scope:resource,url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:11:37.983) (total time: 60006ms):

Trace[931836157]: [1m0.006350485s] [1m0.006350485s] END

E1219 06:12:37.990484 1 timeout.go:142] post-timeout activity - time-elapsed: 4.870058ms, GET "/api/v1/namespaces/default" result: <nil>

{"level":"warn","ts":"2022-12-19T06:12:38.373Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b41c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}

I1219 06:12:39.988922 1 trace.go:219] Trace[448655361]: "List" accept:application/vnd.kubernetes.protobuf, /,audit-id:8b16c9b2-4f85-4e5d-918a-2d28acd753bb,client:::1,protocol:HTTP/2.0,resource:services,scope:cluster,url:/api/v1/services,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:LIST (19-Dec-2022 06:12:37.659) (total time: 2329ms):

Trace[448655361]: ["List(recursive=true) etcd3" audit-id:8b16c9b2-4f85-4e5d-918a-2d28acd753bb,key:/services/specs,resourceVersion:,resourceVersionMatch:,limit:0,continue: 2329ms (06:12:37.659)]

Trace[448655361]: [2.329166967s] [2.329166967s] END

{"level":"warn","ts":"2022-12-19T06:12:40.242Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}

I1219 06:12:40.249474 1 trace.go:219] Trace[239754632]: "List" accept:application/vnd.kubernetes.protobuf, /,audit-id:30b9e937-c36a-4398-9054-4a1cb1bd5edf,client:::1,protocol:HTTP/2.0,resource:resourcequotas,scope:namespace,url:/api/v1/namespaces/default/resourcequotas,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:LIST (19-Dec-2022 06:12:37.988) (total time: 2261ms):

Trace[239754632]: ["List(recursive=true) etcd3" audit-id:30b9e937-c36a-4398-9054-4a1cb1bd5edf,key:/resourcequotas/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: 2261ms (06:12:37.988)]

Trace[239754632]: [2.261402138s] [2.261402138s] END

{"level":"warn","ts":"2022-12-19T06:12:40.380Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b41c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}

{"level":"warn","ts":"2022-12-19T06:12:42.386Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}

I1219 06:12:42.442272 1 trace.go:219] Trace[1256675541]: "Get" accept:application/vnd.kubernetes.protobuf, /,audit-id:0176b32b-9911-4efd-a652-a65e9b8e5358,client:::1,protocol:HTTP/2.0,resource:namespaces,scope:resource,url:/api/v1/namespaces/kube-system,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:11:57.961) (total time: 44480ms):

Trace[1256675541]: ---"About to write a response" 44480ms (06:12:42.442)

Trace[1256675541]: [44.480780934s] [44.480780934s] END

I1219 06:12:42.446847 1 trace.go:219] Trace[1993246150]: "Create" accept:application/vnd.kubernetes.protobuf, /,audit-id:037244a1-0427-4f7b-a27f-a28053080851,client:::1,protocol:HTTP/2.0,resource:namespaces,scope:resource,url:/api/v1/namespaces,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:POST (19-Dec-2022 06:12:37.987) (total time: 4459ms):

Trace[1993246150]: ["Create etcd3" audit-id:037244a1-0427-4f7b-a27f-a28053080851,key:/namespaces/default,type:*core.Namespace,resource:namespaces 2195ms (06:12:40.251)

Trace[1993246150]: ---"Txn call succeeded" 2194ms (06:12:42.445)]

Trace[1993246150]: [4.459769012s] [4.459769012s] END

I1219 06:12:42.674053 1 trace.go:219] Trace[1794029875]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:e430c809-7f0f-466e-b055-2b6b9141ff8c,client:10.0.15.82,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-controller-manager-master0,user-agent:kubelet/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:12:34.909) (total time: 7765ms):

Trace[1794029875]: ---"About to write a response" 7764ms (06:12:42.673)

Trace[1794029875]: [7.765007745s] [7.765007745s] END

{"level":"warn","ts":"2022-12-19T06:12:44.393Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}

{"level":"warn","ts":"2022-12-19T06:12:44.971Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00354fc00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}

I1219 06:12:44.971449 1 trace.go:219] Trace[994491080]: "Update" accept:application/vnd.kubernetes.protobuf, /,audit-id:409a08f2-ec78-4882-9bfa-9ce30a084b98,client:::1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-apiserver-sbw72mnicesx7ail7r675e52gy,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:PUT (19-Dec-2022 06:12:10.970) (total time: 34001ms):

Trace[994491080]: ["GuaranteedUpdate etcd3" audit-id:409a08f2-ec78-4882-9bfa-9ce30a084b98,key:/leases/kube-system/kube-apiserver-sbw72mnicesx7ail7r675e52gy,type:*coordination.Lease,resource:leases.coordination.k8s.io 34000ms (06:12:10.970)

Trace[994491080]: ---"Txn call failed" err:context deadline exceeded 34000ms (06:12:44.971)]

Trace[994491080]: [34.001140432s] [34.001140432s] END

E1219 06:12:44.971767 1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 10.899µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>

E1219 06:12:44.972574 1 controller.go:189] failed to update lease, error: Timeout: request did not complete within requested timeout - context deadline exceeded

I1219 06:12:46.648431 1 trace.go:219] Trace[1569528607]: "Update" accept:application/vnd.kubernetes.protobuf, /,audit-id:66c304d4-07a7-4651-a080-b0a6fe1514d1,client:::1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-apiserver-sbw72mnicesx7ail7r675e52gy,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:PUT (19-Dec-2022 06:12:44.973) (total time: 1675ms):

Trace[1569528607]: ["GuaranteedUpdate etcd3" audit-id:66c304d4-07a7-4651-a080-b0a6fe1514d1,key:/leases/kube-system/kube-apiserver-sbw72mnicesx7ail7r675e52gy,type:*coordination.Lease,resource:leases.coordination.k8s.io 1675ms (06:12:44.973)

Trace[1569528607]: ---"Txn call completed" 1674ms (06:12:46.648)]

Trace[1569528607]: [1.675226852s] [1.675226852s] END

I1219 06:12:46.649989 1 trace.go:219] Trace[424403]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:7ce2d09e-ef67-46b0-9359-d7bb18552cd1,client:10.0.15.82,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master0,user-agent:kubelet/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:12:37.660) (total time: 8989ms):

Trace[424403]: ---"About to write a response" 8989ms (06:12:46.649)

Trace[424403]: [8.989433007s] [8.989433007s] END

I1219 06:12:49.083394 1 trace.go:219] Trace[50133606]: "Get" accept:application/vnd.kubernetes.protobuf, /,audit-id:790109d6-02cb-46d3-b31f-b1823eea9276,client:::1,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:12:42.453) (total time: 6630ms):

Trace[50133606]: ---"About to write a response" 6630ms (06:12:49.083)

Trace[50133606]: [6.630185906s] [6.630185906s] END

 

1 Answer




gasbugs

2022.12.19

Hello, this is the instructor, 최일선.

A few months ago I found that, on Ubuntu 22.04, an unexplained issue occurs and the installation does not work properly.

If possible, please proceed with 20.04. One more thing I confirmed today: it only worked when containerd was installed rather than docker.io, so the recent updates seem to have introduced a compatibility problem. Please try again on 20.04, and it would be great if you could also take a look at the link below (a rough sketch of the containerd setup follows after the link).

https://www.inflearn.com/questions/715940/kube-init-%EC%98%A4%EB%A5%98
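For anyone following this advice, a minimal sketch of what "installing containerd rather than docker.io" might look like on Ubuntu 20.04, assuming the containerd.io package from Docker's apt repository. The answer above does not spell out these commands, and the SystemdCgroup change is a commonly needed extra step (an assumption on my part, not something stated in the answer):

sudo apt-get update
sudo apt-get install -y containerd.io            # instead of docker.io / docker-ce
containerd config default | sudo tee /etc/containerd/config.toml
# switch runc to the systemd cgroup driver expected by recent kubelets (assumed tweak)
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd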