- [Unresolved] 데브옵스(DevOps)를 위한 쿠버네티스 마스터
Request to update the course materials
Hello. The PDF downloaded from the course-materials screen differs from the current lecture content. Please update the materials.
- [Resolved] 데브옵스(DevOps)를 위한 쿠버네티스 마스터
Worker node STATUS NotReady issue after reconnecting
Hello, I'm really enjoying the course. Since the lectures are long, I SSH into my Google Cloud VM instances, disconnect, and reconnect later. When I reconnect, worker-1 and worker-2 often end up in the NotReady state, so I'm asking about it (Figure 1). When I connect to an instance acting as a worker node, I get the error message in Figure 2. Whether the problem clears up depends on whether I reset the instance:

    reset instance -> retry without Cloud Identity-Aware Proxy -> ssh -> Ready
    no instance reset -> retry without Cloud Identity-Aware Proxy -> error dialog as in Figure 3

Since this happens every time, I'd like to know whether there is a solution. (Figures 1-3 attached.)
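For reference, a minimal first diagnosis pass for a node that goes NotReady after its VM comes back (standard kubectl/systemctl commands, not taken from the post; node names as above):

    kubectl describe node worker-1     # check Conditions and recent Events for the stated reason
    # then, on the worker instance itself:
    systemctl status kubelet           # did the kubelet come back up after the VM restart?
    sudo systemctl restart kubelet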
- [Unresolved] 데브옵스(DevOps)를 위한 쿠버네티스 마스터
Error after creating static-pod.yaml
I'm running the environment on virtual machines in GCP. After creating static-pod.yaml, kubectl get pods returns an error. Do I need to open a port in the firewall? Everything worked without errors up through the previous lesson. Please take a look.
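For context, a hedged sketch of how static pods are normally wired up on a kubeadm cluster (the file name follows the post; the path is the kubeadm default):

    # The kubelet watches its staticPodPath (default /etc/kubernetes/manifests) and runs
    # whatever manifests it finds there; no kubectl create is involved.
    sudo cp static-pod.yaml /etc/kubernetes/manifests/
    kubectl get pods -A    # a static pod appears as <pod-name>-<node-name>

If kubectl get pods itself errors, that usually points at the API server being unreachable (for example, a control-plane pod knocked over by a bad manifest) rather than a firewall rule.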
- [Unresolved] 데브옵스(DevOps)를 위한 쿠버네티스 마스터
Choosing an image when installing Tomcat with Docker
Hello. After installing Docker, when I go to install Tomcat I can't find an image with version 7.0.57. Which of the images in the screenshot attached above should I install? (Also asking whether it even needs to be exactly the same version, haha.)
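For reference, old point-release tags do get removed from Docker Hub over time, so the exact 7.0.57 tag may simply no longer be published; for this exercise any available Tomcat tag should behave the same (the tag below is illustrative, not from the lecture):

    docker pull tomcat:9.0
    docker run -d -p 8080:8080 tomcat:9.0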
- [Unresolved] 데브옵스(DevOps)를 위한 쿠버네티스 마스터
Network policy practice problem
Hello. A question came up while solving the network policy exercise. It should work as shown above (in the attached image), but for me everything is blocked right now. I used exactly the same settings as the instructor.

    # network-policy-an.yaml
    # ingress-v1
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: http-go-v1-ingress
      namespace: default
    spec:
      podSelector:
        matchLabels:
          app: http-go-v1
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector:
            matchLabels:
              app: http-go-v2
    ---
    # ingress-v2
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: http-go-v2-ingress
      namespace: default
    spec:
      podSelector:
        matchLabels:
          app: http-go-v2
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector:
            matchLabels:
              app: http-go-v3
    ---
    # ingress-v3
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: http-go-v3-ingress
      namespace: default
    spec:
      podSelector:
        matchLabels:
          app: http-go-v3
      policyTypes:
      - Ingress

The policies themselves were created fine. But as shown, even traffic going v:3 → v:2 or v:3 → v:1 is blocked. Searching for the code 130 error turned up nothing. I'd appreciate guidance on what to look at and what else to configure so the traffic flows.
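For context, two hedged notes that may help narrow this down (pod names below are placeholders, not from the post):

    # Exit code 130 is 128 + SIGINT, i.e. the probe command was interrupted or hung and
    # got killed; it is not a NetworkPolicy-specific error code. Probe with a timeout:
    kubectl exec -it <v3-pod> -- wget -qO- --timeout=3 http://<v2-pod-ip>:8080
    # Two things worth checking against the manifests above:
    # 1) per these policies, v3 -> v1 being blocked is expected (v1 admits only app=http-go-v2);
    # 2) NetworkPolicy is only enforced by a CNI that supports it (e.g. Calico); with a
    #    non-enforcing CNI the objects are accepted but silently ignored.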
- [Unresolved] 데브옵스(DevOps)를 위한 쿠버네티스 마스터
kubeadm init error
Hello. I'm writing because of the following problem: the error below occurs and I cannot proceed. Docker is installed correctly, and the cgroups driver is set to systemd. I'd appreciate an answer.

    [kubelet-check] It seems like the kubelet isn't running or healthy.
    [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
    [the two kubelet-check lines above repeat several more times]

    Unfortunately, an error has occurred:
            timed out waiting for the condition

    This error is likely caused by:
            - The kubelet is not running
            - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
            - 'systemctl status kubelet'
            - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.
    Here is one example how you may list all running Kubernetes containers by using crictl:
            - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
            Once you have found the failing container, you can inspect its logs with:
            - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
    error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
    To see the stack trace of this error execute with --v=5 or higher
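For reference, a minimal next step when this check loops even though Docker's cgroup driver is already systemd (standard commands, not from the post): the kubelet's own log almost always states exactly why it keeps exiting.

    journalctl -xeu kubelet | tail -50    # look for the concrete error (cgroup driver,
                                          # swap enabled, port in use, ...)
    # One hedged example of a frequent culprit, swap still being enabled:
    sudo swapoff -a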
- [Unresolved] 데브옵스(DevOps)를 위한 쿠버네티스 마스터
Cannot access Notion
Hello. While watching the lectures I suddenly started getting an access-denied error on the Notion page. Could you tell me why this is happening?
- [Unresolved] 데브옵스(DevOps)를 위한 쿠버네티스 마스터
VMware lab environment question
When I run the vmx file in VMware Workstation, it cannot find the image under the ics03~ path and something like a "stat0:1 connection error" occurs. How can I fix this?
- [Unresolved] 데브옵스(DevOps)를 위한 쿠버네티스 마스터
Question about NodePort and Tomcat
I can't get Tomcat reachable through NodePort 30002. Combining the service with the np and lb in one YAML kept producing errors, so I split them into svc.yaml / np-lb.yaml and created them separately.

tomcat-svc.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      creationTimestamp: null
      labels:
        app: tomcat
      name: tomcat
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: tomcat
      strategy: {}
      template:
        metadata:
          creationTimestamp: null
          labels:
            app: tomcat
        spec:
          containers:
          - image: tomcat
            name: tomcat
            ports:
            - containerPort: 8080
            resources: {}
    status: {}

create succeeded.

tomcat-np-lb.yaml:

    # tomcat-np-lb.yaml
    # nodeport
    apiVersion: v1
    kind: Service
    metadata:
      name: tomcat-np
    spec:
      type: NodePort
      selector:
        app: tomcat
        app.kubernetes.io/name: tomcat
      ports:
      - port: 80
        targetPort: 8080
        nodePort: 30002
    ---
    # LB
    apiVersion: v1
    kind: Service
    metadata:
      name: tomcat-lb
    spec:
      type: LoadBalancer
      selector:
        app: tomcat
        app.kubernetes.io/name: tomcat
      ports:
      - protocol: TCP
        port: 80
        targetPort: 8080

create succeeded.

Since both succeeded, I checked with -o wide and then tried connecting to the LB IP and the node port (30002); port 30228 is the LB port. Tomcat itself takes a while to start, so I waited, but it still doesn't work. Changing the image version to console/tomcat-7.0 makes no difference, and I keep getting connection refused. I also confirmed in the Google console that the deploy and everything else were created, but the Tomcat site still doesn't come up. I can't figure out what's wrong.
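For context, a quick check worth running here (standard kubectl, not from the post): if a Service's selector matches no pods, its endpoints list stays empty and every connection is refused.

    kubectl get endpoints tomcat-np tomcat-lb
    # Note (hedged): a Service selector ANDs all listed labels. The Deployment's pods
    # above only carry app=tomcat, so a selector that additionally requires
    # app.kubernetes.io/name=tomcat would match nothing.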
- [Unresolved] 데브옵스(DevOps)를 위한 쿠버네티스 마스터
docker: unauthorized: authentication required
While setting up the Jupyter LAB environment, it fails to run with the message "docker: unauthorized: authentication required." Is there a way to fix this?
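For reference, a hedged sketch of the usual remedy: this error typically means the image pull needs (valid) Docker Hub credentials, for example stale login state or a private image. Re-authenticating often clears it:

    docker logout
    docker login              # a valid Docker Hub account
    docker pull <image-name>  # <image-name> is a placeholder for the lecture's image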
- [Unresolved] 데브옵스(DevOps)를 위한 쿠버네티스 마스터
Repeated restarts after kubeadm init
Hello. Lacking local compute resources, I used AWS. kubeadm init succeeds, but the control-plane pods in kube-system keep restarting. I'd be grateful for even the smallest hint toward resolving this! From the logs so far, my guess is that kube-apiserver restarts and then all the other control-plane components restart because they can no longer reach it.

Environment:
- OS: Ubuntu Server 22.04 LTS (HVM), SSD Volume Type
- CPU: 2 cores / memory: 4 GB
- Disk: root 10 GB / separate mount 20 GB (space used by docker and k8s)

Docker versions:

    docker-ce=5:20.10.20~3-0~ubuntu-jammy \
    docker-ce-cli=5:20.10.20~3-0~ubuntu-jammy \
    containerd.io=1.6.8-1 \
    docker-compose-plugin=2.12.0~ubuntu-jammy

Kubernetes versions:

    kubelet=1.26.0-00 \
    kubeadm=1.26.0-00 \
    kubelet=1.26.0-00

What I have done so far:
- Already mounted a separate 20 GB disk at /container, the directory docker and k8s will use.
- Changed /etc/docker/daemon.json:

    { "data-root": "/container/docker", "exec-opts": ["native.cgroupdriver=systemd"] }

- A CRI-related error came up during kubeadm init; after searching, I resolved it by commenting out the line below in /etc/containerd/config.toml:

    # disabled_plugins = ["cri"]

- Disabled the firewall: sudo ufw disable
- iptables settings: followed the steps at
  https://kubernetes.io/ko/docs/setup/production-environment/container-runtimes/#ipv4%EB%A5%BC-%ED%8F%AC%EC%9B%8C%EB%94%A9%ED%95%98%EC%97%AC-iptables%EA%B0%80-%EB%B8%8C%EB%A6%AC%EC%A7%80%EB%90%9C-%ED%8A%B8%EB%9E%98%ED%94%BD%EC%9D%84-%EB%B3%B4%EA%B2%8C-%ED%95%98%EA%B8%B0
- Changed the disk space Kubernetes uses, referring to
  https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/#the-kubelet-drop-in-file-for-systemd
  and adding the line below to /etc/default/kubelet:

    KUBELET_EXTRA_ARGS=--root-dir="/container/k8s"

- Ran kubeadm init. Plain kubeadm init kept failing on kube-proxy across repeated reset/init cycles, so the error logs below are from a run that skips the kube-proxy phase:

    kubeadm init --skip-phases=addon/kube-proxy

Attaching the error logs as files isn't possible, so I'm pasting them below!

    ubuntu@ip-10-0-15-82:~$ kubectl get node
    NAME      STATUS     ROLES           AGE     VERSION
    master0   NotReady   control-plane   7m10s   v1.26.0

    ubuntu@ip-10-0-15-82:~$ kubectl describe node master0
    Name:               master0
    Roles:              control-plane
    Labels:             beta.kubernetes.io/arch=amd64
                        beta.kubernetes.io/os=linux
                        kubernetes.io/arch=amd64
                        kubernetes.io/hostname=master0
                        kubernetes.io/os=linux
                        node-role.kubernetes.io/control-plane=
                        node.kubernetes.io/exclude-from-external-load-balancers=
    Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
                        node.alpha.kubernetes.io/ttl: 0
                        volumes.kubernetes.io/controller-managed-attach-detach: true
    CreationTimestamp:  Mon, 19 Dec 2022 06:03:24 +0000
    Taints:             node-role.kubernetes.io/control-plane:NoSchedule
                        node.kubernetes.io/not-ready:NoSchedule
    Unschedulable:      false
    Lease:
      HolderIdentity:  master0
      AcquireTime:     <unset>
      RenewTime:       Mon, 19 Dec 2022 06:13:57 +0000
    Conditions:
      Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
      ----             ------  -----------------                 ------------------                ------                       -------
      MemoryPressure   False   Mon, 19 Dec 2022 06:13:52 +0000   Mon, 19 Dec 2022 06:03:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
      DiskPressure     False   Mon, 19 Dec 2022 06:13:52 +0000   Mon, 19 Dec 2022 06:03:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
      PIDPressure      False   Mon, 19 Dec 2022 06:13:52 +0000   Mon, 19 Dec 2022 06:03:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
      Ready            False   Mon, 19 Dec 2022 06:13:52 +0000   Mon, 19 Dec 2022 06:03:21 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
    Addresses:
      InternalIP:  10.0.15.82
      Hostname:    master0
    Capacity:
      cpu:                2
      ephemeral-storage:  20470Mi
      hugepages-2Mi:      0
      memory:             4015088Ki
      pods:               110
    Allocatable:
      cpu:                2
      ephemeral-storage:  19317915617
      hugepages-2Mi:      0
      memory:             3912688Ki
      pods:               110
    System Info:
      Machine ID:                 f8b760a7c2274e0cb62621465dbcab92
      System UUID:                ec21d23a-a384-2b77-91df-2f108bd6b565
      Boot ID:                    12f267e0-d0f3-4193-b84a-d7dbcfd74b2b
      Kernel Version:             5.15.0-1026-aws
      OS Image:                   Ubuntu 22.04.1 LTS
      Operating System:           linux
      Architecture:               amd64
      Container Runtime Version:  containerd://1.6.8
      Kubelet Version:            v1.26.0
      Kube-Proxy Version:         v1.26.0
    Non-terminated Pods:          (4 in total)
      Namespace    Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
      ---------    ----                              ------------  ----------  ---------------  -------------  ---
      kube-system  etcd-master0                      100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         9m12s
      kube-system  kube-apiserver-master0            250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
      kube-system  kube-controller-manager-master0   200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m7s
      kube-system  kube-scheduler-master0            100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m16s
    Allocated resources:
      (Total limits may be over 100 percent, i.e., overcommitted.)
      Resource           Requests    Limits
      --------           --------    ------
      cpu                650m (32%)  0 (0%)
      memory             100Mi (2%)  0 (0%)
      ephemeral-storage  0 (0%)      0 (0%)
      hugepages-2Mi      0 (0%)      0 (0%)
    Events:
      Type     Reason                   Age    From             Message
      ----     ------                   ----   ----             -------
      Normal   Starting                 10m    kubelet          Starting kubelet.
      Warning  InvalidDiskCapacity      10m    kubelet          invalid capacity 0 on image filesystem
      Normal   NodeHasSufficientMemory  10m    kubelet          Node master0 status is now: NodeHasSufficientMemory
      Normal   NodeHasNoDiskPressure    10m    kubelet          Node master0 status is now: NodeHasNoDiskPressure
      Normal   NodeHasSufficientPID     10m    kubelet          Node master0 status is now: NodeHasSufficientPID
      Normal   NodeAllocatableEnforced  10m    kubelet          Updated Node Allocatable limit across pods
      Normal   RegisteredNode           9m37s  node-controller  Node master0 event: Registered Node master0 in Controller
      Normal   RegisteredNode           7m10s  node-controller  Node master0 event: Registered Node master0 in Controller
      Normal   RegisteredNode           4m57s  node-controller  Node master0 event: Registered Node master0 in Controller
      Normal   RegisteredNode           3m11s  node-controller  Node master0 event: Registered Node master0 in Controller
      Normal   RegisteredNode           25s    node-controller  Node master0 event: Registered Node master0 in Controller

    ubuntu@ip-10-0-15-82:~$ kubectl get po -A
    NAMESPACE     NAME                              READY   STATUS             RESTARTS         AGE
    kube-system   coredns-787d4945fb-bkhkm          0/1     Pending            0                6m20s
    kube-system   coredns-787d4945fb-d4t28          0/1     Pending            0                6m20s
    kube-system   etcd-master0                      1/1     Running            20 (78s ago)     5m56s
    kube-system   kube-apiserver-master0            1/1     Running            21 (2m22s ago)   7m19s
    kube-system   kube-controller-manager-master0   0/1     Running            25 (66s ago)     5m51s
    kube-system   kube-scheduler-master0            0/1     CrashLoopBackOff   25 (62s ago)     6m

    ubuntu@ip-10-0-15-82:~$ kubectl logs -f kube-apiserver-master0 -n kube-system
    I1219 06:08:44.052941 1 server.go:555] external host was not specified, using 10.0.15.82
    I1219 06:08:44.053880 1 server.go:163] Version: v1.26.0
    I1219 06:08:44.053954 1 server.go:165] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
    I1219 06:08:44.561040 1 shared_informer.go:273] Waiting for caches to sync for node_authorizer
    I1219 06:08:44.562267 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
    I1219 06:08:44.562350 1 plugins.go:161] Loaded 12 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
    W1219 06:08:44.613792 1 genericapiserver.go:660] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
    I1219 06:08:44.615115 1 instance.go:277] Using reconciler: lease
    I1219 06:08:44.882566 1 instance.go:621] API group "internal.apiserver.k8s.io" is not enabled, skipping.
    I1219 06:08:45.267941 1 instance.go:621] API group "resource.k8s.io" is not enabled, skipping.
    W1219 06:08:45.370729 1 genericapiserver.go:660] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
    W1219 06:08:45.370756 1 genericapiserver.go:660] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
    W1219 06:08:45.372993 1 genericapiserver.go:660] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
    W1219 06:08:45.377856 1 genericapiserver.go:660] Skipping API autoscaling/v2beta1 because it has no resources.
    W1219 06:08:45.377876 1 genericapiserver.go:660] Skipping API autoscaling/v2beta2 because it has no resources.
    W1219 06:08:45.381127 1 genericapiserver.go:660] Skipping API batch/v1beta1 because it has no resources.
    W1219 06:08:45.383665 1 genericapiserver.go:660] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
    W1219 06:08:45.385890 1 genericapiserver.go:660] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
    W1219 06:08:45.385952 1 genericapiserver.go:660] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
    W1219 06:08:45.391568 1 genericapiserver.go:660] Skipping API networking.k8s.io/v1beta1 because it has no resources.
    W1219 06:08:45.391585 1 genericapiserver.go:660] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
    W1219 06:08:45.393562 1 genericapiserver.go:660] Skipping API node.k8s.io/v1beta1 because it has no resources.
    W1219 06:08:45.393581 1 genericapiserver.go:660] Skipping API node.k8s.io/v1alpha1 because it has no resources.
    W1219 06:08:45.393641 1 genericapiserver.go:660] Skipping API policy/v1beta1 because it has no resources.
    W1219 06:08:45.399482 1 genericapiserver.go:660] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
    W1219 06:08:45.399502 1 genericapiserver.go:660] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
    W1219 06:08:45.401515 1 genericapiserver.go:660] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
    W1219 06:08:45.401537 1 genericapiserver.go:660] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
    W1219 06:08:45.407674 1 genericapiserver.go:660] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
    W1219 06:08:45.413355 1 genericapiserver.go:660] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
    W1219 06:08:45.413374 1 genericapiserver.go:660] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
    W1219 06:08:45.419343 1 genericapiserver.go:660] Skipping API apps/v1beta2 because it has no resources.
    W1219 06:08:45.419362 1 genericapiserver.go:660] Skipping API apps/v1beta1 because it has no resources.
    W1219 06:08:45.421932 1 genericapiserver.go:660] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
    W1219 06:08:45.421951 1 genericapiserver.go:660] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
    W1219 06:08:45.424241 1 genericapiserver.go:660] Skipping API events.k8s.io/v1beta1 because it has no resources.
    W1219 06:08:45.479788 1 genericapiserver.go:660] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
    I1219 06:08:46.357006 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
    I1219 06:08:46.357217 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
    I1219 06:08:46.357675 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/pki/apiserver.crt::/etc/kubernetes/pki/apiserver.key"
    I1219 06:08:46.358125 1 secure_serving.go:210] Serving securely on [::]:6443
    I1219 06:08:46.358242 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
    I1219 06:08:46.363285 1 gc_controller.go:78] Starting apiserver lease garbage collector
    I1219 06:08:46.363570 1 controller.go:80] Starting OpenAPI V3 AggregationController
    I1219 06:08:46.363829 1 controller.go:121] Starting legacy_token_tracking_controller
    I1219 06:08:46.363850 1 shared_informer.go:273] Waiting for caches to sync for configmaps
    I1219 06:08:46.363877 1 apf_controller.go:361] Starting API Priority and Fairness config controller
    I1219 06:08:46.363922 1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/etc/kubernetes/pki/front-proxy-client.crt::/etc/kubernetes/pki/front-proxy-client.key"
    I1219 06:08:46.364009 1 available_controller.go:494] Starting AvailableConditionController
    I1219 06:08:46.364019 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
    I1219 06:08:46.358328 1 autoregister_controller.go:141] Starting autoregister controller
    I1219 06:08:46.364040 1 cache.go:32] Waiting for caches to sync for autoregister controller
    I1219 06:08:46.366773 1 controller.go:83] Starting OpenAPI AggregationController
    I1219 06:08:46.367148 1 customresource_discovery_controller.go:288] Starting DiscoveryController
    I1219 06:08:46.367616 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
    I1219 06:08:46.367725 1 shared_informer.go:273] Waiting for caches to sync for cluster_authentication_trust_controller
    I1219 06:08:46.367881 1 apiservice_controller.go:97] Starting APIServiceRegistrationController
    I1219 06:08:46.367970 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
    I1219 06:08:46.368112 1 crdregistration_controller.go:111] Starting crd-autoregister controller
    I1219 06:08:46.368191 1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
    I1219 06:08:46.383719 1 controller.go:85] Starting OpenAPI controller
    I1219 06:08:46.383786 1 controller.go:85] Starting OpenAPI V3 controller
    I1219 06:08:46.383812 1 naming_controller.go:291] Starting NamingConditionController
    I1219 06:08:46.383830 1 establishing_controller.go:76] Starting EstablishingController
    I1219 06:08:46.383852 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
    I1219 06:08:46.383871 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
    I1219 06:08:46.383893 1 crd_finalizer.go:266] Starting CRDFinalizer
    I1219 06:08:46.383978 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
    I1219 06:08:46.384084 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
    I1219 06:08:46.463884 1 shared_informer.go:280] Caches are synced for configmaps
    I1219 06:08:46.463927 1 apf_controller.go:366] Running API Priority and Fairness config worker
    I1219 06:08:46.463935 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
    I1219 06:08:46.464063 1 cache.go:39] Caches are synced for autoregister controller
    I1219 06:08:46.465684 1 cache.go:39] Caches are synced for AvailableConditionController controller
    I1219 06:08:46.469795 1 shared_informer.go:280] Caches are synced for crd-autoregister
    I1219 06:08:46.470150 1 shared_informer.go:280] Caches are synced for node_authorizer
    I1219 06:08:46.470302 1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
    I1219 06:08:46.470438 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
    I1219 06:08:46.479224 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
    I1219 06:08:47.060404 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
    I1219 06:08:47.370998 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
    W1219 06:09:28.894719 1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {"Addr": "127.0.0.1:2379","ServerName": "127.0.0.1","Attributes": null,"BalancerAttributes": null,"Type": 0,"Metadata": null}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
    W1219 06:09:28.895017 1 logging.go:59] [core] [Channel #13 SubChannel #14] grpc: addrConn.createTransport failed to connect to {"Addr": "127.0.0.1:2379","ServerName": "127.0.0.1","Attributes": null,"BalancerAttributes": null,"Type": 0,"Metadata": null}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
    === truncated ===
    W1219 06:12:22.066087 1 logging.go:59] [core] [Channel #16 SubChannel #17] grpc: addrConn.createTransport failed to connect to {"Addr": "127.0.0.1:2379","ServerName": "127.0.0.1","Attributes": null,"BalancerAttributes": null,"Type": 0,"Metadata": null}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
    {"level":"warn","ts":"2022-12-19T06:12:22.345Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
    {"level":"warn","ts":"2022-12-19T06:12:24.346Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b41c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
    {"level":"warn","ts":"2022-12-19T06:12:26.352Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
    {"level":"warn","ts":"2022-12-19T06:12:27.457Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00354fc00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
    E1219 06:12:27.458799 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
    E1219 06:12:27.458820 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
    E1219 06:12:27.458843 1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 6.269µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>
    E1219 06:12:27.460034 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
    I1219 06:12:27.461932 1 trace.go:219] Trace[630402872]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:9448a7a5-4c6b-490f-9aff-cd8384091228,client:10.0.15.82,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master0,user-agent:kubelet/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:PUT (19-Dec-2022 06:12:17.458) (total time: 10003ms):
    Trace[630402872]: ["GuaranteedUpdate etcd3" audit-id:9448a7a5-4c6b-490f-9aff-cd8384091228,key:/leases/kube-node-lease/master0,type:*coordination.Lease,resource:leases.coordination.k8s.io 10003ms (06:12:17.458)
    Trace[630402872]: ---"Txn call failed" err:context deadline exceeded 9998ms (06:12:27.458)]
    Trace[630402872]: [10.003519094s] [10.003519094s] END
    E1219 06:12:27.462368 1 timeout.go:142] post-timeout activity - time-elapsed: 3.532362ms, PUT "/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master0" result: <nil>
    {"level":"warn","ts":"2022-12-19T06:12:28.352Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b41c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
    {"level":"warn","ts":"2022-12-19T06:12:30.242Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
    {"level":"warn","ts":"2022-12-19T06:12:30.359Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b41c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
    {"level":"warn","ts":"2022-12-19T06:12:32.365Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
    {"level":"warn","ts":"2022-12-19T06:12:34.366Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b41c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
    {"level":"warn","ts":"2022-12-19T06:12:34.905Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001c45180/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
    E1219 06:12:34.905188 1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
    E1219 06:12:34.905331 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
    E1219 06:12:34.906483 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
    E1219 06:12:34.907611 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
    I1219 06:12:34.909171 1 trace.go:219] Trace[1232755934]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:efcbbe67-217b-4534-8361-f0ca8603169e,client:10.0.15.82,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/etcd-master0,user-agent:kubelet/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:11:34.904) (total time: 60004ms):
    Trace[1232755934]: [1m0.004852843s] [1m0.004852843s] END
    E1219 06:12:34.909377 1 timeout.go:142] post-timeout activity - time-elapsed: 3.983518ms, GET "/api/v1/namespaces/kube-system/pods/etcd-master0" result: <nil>
    {"level":"warn","ts":"2022-12-19T06:12:36.372Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
    {"level":"warn","ts":"2022-12-19T06:12:37.458Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00354fc00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
    E1219 06:12:37.459896 1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
    E1219 06:12:37.460058 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
    E1219 06:12:37.461117 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
    E1219 06:12:37.462667 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
    I1219 06:12:37.464323 1 trace.go:219] Trace[688853594]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:c49906de-6377-43e9-86c6-8f053f5ea689,client:10.0.15.82,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master0,user-agent:kubelet/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:12:27.458) (total time: 10005ms):
    Trace[688853594]: [10.005594573s] [10.005594573s] END
    E1219 06:12:37.464689 1 timeout.go:142] post-timeout activity - time-elapsed: 5.065927ms, GET "/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master0" result: <nil>
    {"level":"warn","ts":"2022-12-19T06:12:37.984Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001ba8000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
    E1219 06:12:37.984376 1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
    E1219 06:12:37.984522 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
    E1219 06:12:37.985741 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
    I1219 06:12:37.987578 1 controller.go:615] quota admission added evaluator for: namespaces
    E1219 06:12:37.988356 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
    I1219 06:12:37.990053 1 trace.go:219] Trace[931836157]: "Get" accept:application/vnd.kubernetes.protobuf, /,audit-id:90475625-91a7-4e3d-b74c-4c8971819dd4,client:::1,protocol:HTTP/2.0,resource:namespaces,scope:resource,url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:11:37.983) (total time: 60006ms):
    Trace[931836157]: [1m0.006350485s] [1m0.006350485s] END
    E1219 06:12:37.990484 1 timeout.go:142] post-timeout activity - time-elapsed: 4.870058ms, GET "/api/v1/namespaces/default" result: <nil>
    {"level":"warn","ts":"2022-12-19T06:12:38.373Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b41c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
    I1219 06:12:39.988922 1 trace.go:219] Trace[448655361]: "List" accept:application/vnd.kubernetes.protobuf, /,audit-id:8b16c9b2-4f85-4e5d-918a-2d28acd753bb,client:::1,protocol:HTTP/2.0,resource:services,scope:cluster,url:/api/v1/services,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:LIST (19-Dec-2022 06:12:37.659) (total time: 2329ms):
    Trace[448655361]: ["List(recursive=true) etcd3" audit-id:8b16c9b2-4f85-4e5d-918a-2d28acd753bb,key:/services/specs,resourceVersion:,resourceVersionMatch:,limit:0,continue: 2329ms (06:12:37.659)]
    Trace[448655361]: [2.329166967s] [2.329166967s] END
    {"level":"warn","ts":"2022-12-19T06:12:40.242Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
    I1219 06:12:40.249474 1 trace.go:219] Trace[239754632]: "List" accept:application/vnd.kubernetes.protobuf, /,audit-id:30b9e937-c36a-4398-9054-4a1cb1bd5edf,client:::1,protocol:HTTP/2.0,resource:resourcequotas,scope:namespace,url:/api/v1/namespaces/default/resourcequotas,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:LIST (19-Dec-2022 06:12:37.988) (total time: 2261ms):
    Trace[239754632]: ["List(recursive=true) etcd3" audit-id:30b9e937-c36a-4398-9054-4a1cb1bd5edf,key:/resourcequotas/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: 2261ms (06:12:37.988)]
    Trace[239754632]: [2.261402138s] [2.261402138s] END
    {"level":"warn","ts":"2022-12-19T06:12:40.380Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b41c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
    {"level":"warn","ts":"2022-12-19T06:12:42.386Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
    I1219 06:12:42.442272 1 trace.go:219] Trace[1256675541]: "Get" accept:application/vnd.kubernetes.protobuf, /,audit-id:0176b32b-9911-4efd-a652-a65e9b8e5358,client:::1,protocol:HTTP/2.0,resource:namespaces,scope:resource,url:/api/v1/namespaces/kube-system,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:11:57.961) (total time: 44480ms):
    Trace[1256675541]: ---"About to write a response" 44480ms (06:12:42.442)
    Trace[1256675541]: [44.480780934s] [44.480780934s] END
    I1219 06:12:42.446847 1 trace.go:219] Trace[1993246150]: "Create" accept:application/vnd.kubernetes.protobuf, /,audit-id:037244a1-0427-4f7b-a27f-a28053080851,client:::1,protocol:HTTP/2.0,resource:namespaces,scope:resource,url:/api/v1/namespaces,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:POST (19-Dec-2022 06:12:37.987) (total time: 4459ms):
    Trace[1993246150]: ["Create etcd3" audit-id:037244a1-0427-4f7b-a27f-a28053080851,key:/namespaces/default,type:*core.Namespace,resource:namespaces 2195ms (06:12:40.251)
    Trace[1993246150]: ---"Txn call succeeded" 2194ms (06:12:42.445)]
    Trace[1993246150]: [4.459769012s] [4.459769012s] END
    I1219 06:12:42.674053 1 trace.go:219] Trace[1794029875]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:e430c809-7f0f-466e-b055-2b6b9141ff8c,client:10.0.15.82,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-controller-manager-master0,user-agent:kubelet/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:12:34.909) (total time: 7765ms):
    Trace[1794029875]: ---"About to write a response" 7764ms (06:12:42.673)
    Trace[1794029875]: [7.765007745s] [7.765007745s] END
    {"level":"warn","ts":"2022-12-19T06:12:44.393Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
    {"level":"warn","ts":"2022-12-19T06:12:44.971Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00354fc00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
    I1219 06:12:44.971449 1 trace.go:219] Trace[994491080]: "Update" accept:application/vnd.kubernetes.protobuf, /,audit-id:409a08f2-ec78-4882-9bfa-9ce30a084b98,client:::1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-apiserver-sbw72mnicesx7ail7r675e52gy,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:PUT (19-Dec-2022 06:12:10.970) (total time: 34001ms):
    Trace[994491080]: ["GuaranteedUpdate etcd3" audit-id:409a08f2-ec78-4882-9bfa-9ce30a084b98,key:/leases/kube-system/kube-apiserver-sbw72mnicesx7ail7r675e52gy,type:*coordination.Lease,resource:leases.coordination.k8s.io 34000ms (06:12:10.970)
    Trace[994491080]: ---"Txn call failed" err:context deadline exceeded 34000ms (06:12:44.971)]
    Trace[994491080]: [34.001140432s] [34.001140432s] END
    E1219 06:12:44.971767 1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 10.899µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>
    E1219 06:12:44.972574 1 controller.go:189] failed to update lease, error: Timeout: request did not complete within requested timeout - context deadline exceeded
    I1219 06:12:46.648431 1 trace.go:219] Trace[1569528607]: "Update" accept:application/vnd.kubernetes.protobuf, /,audit-id:66c304d4-07a7-4651-a080-b0a6fe1514d1,client:::1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-apiserver-sbw72mnicesx7ail7r675e52gy,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:PUT (19-Dec-2022 06:12:44.973) (total time: 1675ms):
    Trace[1569528607]: ["GuaranteedUpdate etcd3" audit-id:66c304d4-07a7-4651-a080-b0a6fe1514d1,key:/leases/kube-system/kube-apiserver-sbw72mnicesx7ail7r675e52gy,type:*coordination.Lease,resource:leases.coordination.k8s.io 1675ms (06:12:44.973)
    Trace[1569528607]: ---"Txn call completed" 1674ms (06:12:46.648)]
    Trace[1569528607]: [1.675226852s] [1.675226852s] END
    I1219 06:12:46.649989 1 trace.go:219] Trace[424403]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:7ce2d09e-ef67-46b0-9359-d7bb18552cd1,client:10.0.15.82,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master0,user-agent:kubelet/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:12:37.660) (total time: 8989ms):
    Trace[424403]: ---"About to write a response" 8989ms (06:12:46.649)
    Trace[424403]: [8.989433007s] [8.989433007s] END
    I1219 06:12:49.083394 1 trace.go:219] Trace[50133606]: "Get" accept:application/vnd.kubernetes.protobuf, /,audit-id:790109d6-02cb-46d3-b31f-b1823eea9276,client:::1,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:12:42.453) (total time: 6630ms):
    Trace[50133606]: ---"About to write a response" 6630ms (06:12:49.083)
    Trace[50133606]: [6.630185906s] [6.630185906s] END
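For reference, a frequently reported cause of exactly this pattern (apiserver/etcd on containerd 1.6 with Kubernetes 1.26 restarting every couple of minutes) is containerd still using the cgroupfs cgroup driver while the kubelet uses systemd: the daemon.json exec-opt above only configures Docker, not the CRI containerd path that kubeadm 1.26 actually uses, and commenting out disabled_plugins leaves the rest of the stock config (with SystemdCgroup = false) in place. A hedged sketch of the usual remedy, not confirmed for this specific cluster:

    # Regenerate a complete default config and switch runc to the systemd cgroup driver.
    containerd config default | sudo tee /etc/containerd/config.toml
    sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
    sudo systemctl restart containerd
    sudo kubeadm reset -f
    sudo kubeadm init    # retry; the restart loop should stop if this was the cause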
- [Unresolved] 데브옵스(DevOps)를 위한 쿠버네티스 마스터
kube init error
I read other students' posts and searched Google, but nothing solved it, so I'm posting. I tried it exactly the way it's shown in the lecture and also tried the solutions other students posted, but it still doesn't work.
- [Unresolved] 데브옵스(DevOps)를 위한 쿠버네티스 마스터
How to change the port of an existing Docker container
I'm running a container started with:

    docker run -d -p 8080:8080 --name nx nginx

I'd like to change the nx container's port from the current 8080 to 8082. How should I do this? I want to keep the existing nx container rather than deleting it and bringing it up again (e.g., stop the container and change its config). I'm also curious whether there's any way to change the port while nx is still running. For reference, I stopped nx, changed 8080 to 8082 in the hostconfig.json and config.v2.json files, and started nx again, but I confirmed the port mapping had reverted to the original 8080.
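For reference, a hedged sketch of why the edit reverted and the usual workaround: the Docker daemon keeps container configuration in memory and rewrites those JSON files, so edits only survive if dockerd itself is stopped first (paths are the Docker defaults; the container ID is a placeholder):

    docker stop nx
    sudo systemctl stop docker        # stop the daemon so it cannot overwrite the files
    sudo vi /var/lib/docker/containers/<container-id>/hostconfig.json    # "8080" -> "8082"
    sudo systemctl start docker
    docker start nx

While a container is running, a published port cannot be changed at all: the host-side mapping (iptables rules / docker-proxy) is wired up when the container starts.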
- [Unresolved] 데브옵스(DevOps)를 위한 쿠버네티스 마스터
Master node connection error
Hello. I'm currently watching the pod descriptor section and doing the labs in a GCP project.

Problem 1. When I create go-http-pod.yaml, I get:

    The connection to the server 10.128.0.6:6443 was refused - did you specify the right host or port?

- I created VMs in GCP and set up the master and worker nodes (confirmed they can communicate).
- kubectl get pod / kubectl get nodes fail with the same port-refused error.
- Judging it to be a firewall problem, I added a rule allowing port 6443, but that didn't resolve it.
- After a kubelet stop/restart, kubectl get pod works, but within five minutes the port-refused problem comes back.

    root@master:~/yaml# kubectl get nodes
    The connection to the server 10.128.0.6:6443 was refused - did you specify the right host or port?
    root@master:~/yaml# systemctl restart kubelet
    root@master:~/yaml# kubectl get nodes
    NAME        STATUS     ROLES           AGE   VERSION
    master      Ready      control-plane   17h   v1.25.3
    work-node   NotReady   <none>          17h   v1.25.3

It seems like a problem caused by the kubelet; is it fine to remove it and redo the lab?

Problem 2. I created the http-go pod, but it is stuck in ContainerCreating.

    root@master:~/yaml# kubectl get pod
    NAME      READY   STATUS              RESTARTS   AGE
    http-go   0/1     ContainerCreating   0          17h

Attaching my command history for problem 1:

    64  systemctl stop kubelet
    65  systemctl start kubelet
    66  kubectl get pod
    67  kubectl get node
    68  kubectl get nodes
    69  strace -eopenat kubectl version
    70  kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    71  podsecuritypolicy.policy/psp.flannel.unprivileged created
    72  clusterrole.rbac.authorization.k8s.io/flannel created
    73  clusterrolebinding.rbac.authorization.k8s.io/flannel created
    74  serviceaccount/flannel created
    75  configmap/kube-flannel-cfg created
    76  daemonset.apps/kube-flannel-ds created
    77  kubectl get pod
    78  ls -al .kube/
    79  cat .kube/config
    80  cat .kube/cache/
    81  systemctl restart kubelet
    82  kubectl get pod
    83  export KUBECONFIG=/home/$(whoami)/.kube/config
    84  netstat -tulpn | grep -i 6443
    85  /usr/local/bin/kube-apiserver \\
    86  --advertise-address=${INTERNAL_IP} \\
    87  --allow-privileged=true \\
    88  --apiserver-count=3 \\
    89  --audit-log-maxage=30 \\
    90  --audit-log-maxbackup=3 \\
    91  --audit-log-maxsize=100 \\
    92  --audit-log-path=/var/log/audit.log \\
    93  --authorization-mode=Node,RBAC \\
    94  --bind-address=0.0.0.0 \\
    95  --client-ca-file=/var/lib/kubernetes/ca.crt \\
    96  --enable-admission-plugins=NodeRestriction,ServiceAccount \\
    97  --enable-swagger-ui=true \\
    98  --enable-bootstrap-token-auth=true \\
    99  --etcd-cafile=/var/lib/kubernetes/ca.crt \\
    100  --etcd-certfile=/var/lib/kubernetes/etcd-server.crt \\
    101  --etcd-keyfile=/var/lib/kubernetes/etcd-server.key \\
    102  --etcd-servers=https://192.168.5.11:2379,https://192.168.5.12:2379 \\
    103  --event-ttl=1h \\
    104  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
    105  --kubelet-certificate-authority=/var/lib/kubernetes/ca.crt \\
    106  --kubelet-client-certificate=/var/lib/kubernetes/kube-apiserver.crt \\
    107  --kubelet-client-key=/var/lib/kubernetes/kube-apiserver.key \\
    108  --kubelet-https=true \\
    109  --runtime-config=api/all \\
    110  --service-account-key-file=/var/lib/kubernetes/service-account.crt \\
    111  --service-cluster-ip-range=10.96.0.0/24 \\
    112  --service-node-port-range=30000-32767 \\
    113  --tls-cert-file=/var/lib/kubernetes/kube-apiserver.crt \\
    114  --tls-private-key-file=/var/lib/kubernetes/kube-apiserver.key \\
    115  --v=2
    116  kubectl get pod

In addition: after several reinstalls it started working again, but the command I then tried in order to move the NotReady master/worker nodes to Ready,

    kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

seems to have been the problem:

    root@master-node:~# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
    WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
    Unable to connect to the server: dial tcp: lookup cloud.weave.works on 169.254.169.254:53: no such host
    root@master-node:~# kubectl get nodes
    Get "https://10.128.0.4:6443/api/v1/nodes?limit=500": dial tcp 10.128.0.4:6443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=1, ErrCode=NO_ERROR, debug=""
    root@master-node:~# kubectl get nodes
    The connection to the server 10.128.0.4:6443 was refused - did you specify the right host or port?
    root@master-node:~#

This is what I get after running the command. If there is another approach, I'd appreciate hearing it.
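For reference, a hedged diagnostic pass for an API server on :6443 that dies a few minutes after each kubelet restart (standard crictl/journalctl commands, not taken from the post):

    sudo crictl ps -a | grep -e kube-apiserver -e etcd   # is the apiserver container crash-looping?
    sudo journalctl -xeu kubelet | tail -50              # why the kubelet keeps restarting control-plane pods
    # An API server that comes up briefly and then refuses connections usually points at
    # a crashing control-plane container (memory pressure, cgroup-driver mismatch, bad
    # CNI state) rather than a VPC firewall rule, which would block it consistently.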
- [Unresolved] 데브옵스(DevOps)를 위한 쿠버네티스 마스터
Docker registry comes up as not found

    root@server1-VirtualBox:~# sudo docker search tomcat
    Error response from daemon: Get https://index.docker.io/v1/search?q=tomcat&n=25: x509: certificate has expired or is not yet valid
    root@server1-VirtualBox:~#

I get the output above. What should I do?
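For reference, a hedged sketch of the two usual causes of "certificate has expired or is not yet valid" on a lab VM, neither confirmed here: a wrong guest clock (common after a VirtualBox snapshot or suspend), or an outdated CA bundle (for example the Let's Encrypt root certificate that expired in September 2021 on older Ubuntu images):

    date                                                             # 1) is the VM's clock correct?
    sudo apt-get update && sudo apt-get install -y ca-certificates   # 2) refresh the CA roots
    sudo systemctl restart docker
    sudo docker search tomcat                                        # retry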
- [Unresolved] 데브옵스(DevOps)를 위한 쿠버네티스 마스터
Do certificates also need separate management on public-cloud Kubernetes services?
Hello. I'm wondering whether certificate management is also something we need to do ourselves on the managed Kubernetes services offered by public clouds, such as GKE, EKS, and AKS.
- [Unresolved] 데브옵스(DevOps)를 위한 쿠버네티스 마스터
Communication from inside a pod
Hello. A question came up while following the lab.

    # Get naver.com's IP addresses via nslookup
    mijung_ko_11st@cloudshell:~/yaml (crypto-snow-361311)$ nslookup naver.com
    Server:         169.254.169.254
    Address:        169.254.169.254#53
    Non-authoritative answer:
    Name:   naver.com
    Address: 223.130.200.107
    Name:   naver.com
    Address: 223.130.195.95
    Name:   naver.com
    Address: 223.130.195.200
    Name:   naver.com
    Address: 223.130.200.104

    # Create the yaml file
    mijung_ko_11st@cloudshell:~/yaml (crypto-snow-361311)$ cat endpoint.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: external-service
    spec:
      ports:
      - protocol: TCP
        port: 80
        targetPort: 80
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: external-service
    subsets:
    - addresses:
      - ip: 223.130.200.107
      - ip: 223.130.195.95
      - ip: 223.130.195.200
      - ip: 223.130.200.104
      ports:
      - port: 80

    mijung_ko_11st@cloudshell:~/yaml (crypto-snow-361311)$ kubectl create -f endpoint.yaml
    service/external-service created
    endpoints/external-service created
    mijung_ko_11st@cloudshell:~/yaml (crypto-snow-361311)$ kubectl get svc
    NAME               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
    external-service   ClusterIP   10.8.10.232   <none>        80/TCP    71s
    kubernetes         ClusterIP   10.8.0.1      <none>        443/TCP   73s

    # Create a pod running an app to test the connectivity
    mijung_ko_11st@cloudshell:~/yaml (crypto-snow-361311)$ kubectl create deploy --image=gasbugs/http-go http-go2
    deployment.apps/http-go2 created

    # Going inside the pod and making a request succeeds (it returns a 302, but I only care about connectivity)
    mijung_ko_11st@cloudshell:~/yaml (crypto-snow-361311)$ kubectl get pod
    NAME                        READY   STATUS    RESTARTS   AGE
    http-go2-7f5469bc46-lc5bb   1/1     Running   0          34s
    mijung_ko_11st@cloudshell:~/yaml (crypto-snow-361311)$ kubectl exec -it http-go2-7f5469bc46-lc5bb -- bash
    root@http-go2-7f5469bc46-lc5bb:/usr/src/app# curl external-service
    <html>
    <head><title>302 Found</title></head>
    <body>
    <center><h1>302 Found</h1></center>
    <hr><center> NWS </center>
    </body>
    </html>

I followed along up to here. However, when I curl other sites from inside the pod, those also come back 200 OK. Why does this work?

    mijung_ko_11st@cloudshell:~/yaml (crypto-snow-361311)$ kubectl exec -it http-go2-7f5469bc46-lc5bb -- bash
    root@http-go2-7f5469bc46-lc5bb:/usr/src/app# curl -v www.11st.co.kr
    ...
    * Trying 113.217.247.90...
    * TCP_NODELAY set
    * Expire in 200 ms for 4 (transfer 0x5603f970bdd0)
    * Connected to www.11st.co.kr (113.217.247.90) port 80 (#0)
    > GET / HTTP/1.1
    > Host: www.11st.co.kr
    > User-Agent: curl/7.64.0
    > Accept: */*
    >
    < HTTP/1.1 200
    < Date: Sat, 03 Sep 2022 15:26:11 GMT
    < Server: Apache
    < X-Content-Type-Options: nosniff
    < X-XSS-Protection: 1; mode=block
    < Cache-Control: no-cache, no-store, max-age=0, must-revalidate
    < Pragma: no-cache
    < Expires: 0
    < X-Frame-Options: DENY
    < Content-Type: text/html;charset=UTF-8
    < Content-Language: ko-KR
    < Set-Cookie: WMONID=DUZRgT4PfJn; Expires=Mon, 04-Sep-2023 00:26:11 GMT; Path=/
    < Vary: Accept-Encoding,User-Agent
    < Access-Control-Allow-Credentials: true
    < Transfer-Encoding: chunked
    < Via: STON Edge Server/22.06.1
    ...

My understanding was that a pod needs an Endpoints resource in order to communicate with the outside world, which would mean this kind of traffic should be impossible. Is this the correct behavior? If not, why is it happening?
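For context, a quick way to see what the manual Endpoints object maps to, plus a hedged note on why the direct curl also works (standard kubectl; not from the post):

    kubectl get endpoints external-service -o wide
    # Hedged note: a Service + Endpoints pair only gives external IPs a stable in-cluster
    # DNS name; it does not gate outbound traffic. By default pods have direct egress to
    # the internet (via the node's NAT), so curl to any site succeeds unless a
    # NetworkPolicy or firewall blocks it.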
- [Unresolved] 데브옵스(DevOps)를 위한 쿠버네티스 마스터
Timeout error during a rolling update
Hello. Every time I change the http-go image there is a brief interruption. Given the rolling-update concept, I'd expect no interruption, so what could the reason be?

    Welcome! v2
    Welcome! v1
    Welcome! v2
    wget: can't connect to remote host (10.8.1.107): Connection timed out
    Welcome! v2
    Welcome! v2
    Welcome! v2
    ...
    Welcome! v2
    Welcome! v3
    Welcome! v3
    Welcome! v3
    wget: can't connect to remote host (10.8.1.107): Connection timed out
    Welcome! v3
    Welcome! v3
    Welcome! v3
    ...

Both times, exactly one interruption occurred before all traffic was being sent to the pods running the newly deployed application version!
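For reference, a hedged sketch of the usual cause and fix: without a readiness probe, a new pod starts receiving traffic as soon as its container starts, before the app inside can accept connections. The snippet below is illustrative only; the probe port and path for the http-go image are assumptions, not confirmed by the post:

    spec:
      strategy:
        rollingUpdate:
          maxSurge: 1
          maxUnavailable: 0      # never remove a serving pod before its replacement is Ready
      template:
        spec:
          containers:
          - name: http-go
            image: gasbugs/http-go:v3
            readinessProbe:      # traffic is routed only once this starts succeeding
              httpGet:
                path: /
                port: 8080
              periodSeconds: 1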
- [Unresolved] 데브옵스(DevOps)를 위한 쿠버네티스 마스터
Volume mount isn't working properly

    # volume mount :: docker run -v {host path}:{container path}:{permission: ro read-only, rw read-write}
    root@ip-10-192-147-31:/var/www># docker run -d -p 8000:80 -v /var/www:/usr/share/nginx/html:ro nginx
    a028accb69bf2cfc604a36f76e3aa85ee98d2247869c909519026bae7a5a1863

    # 403 error because there is nothing yet at the mounted /var/www path
    root@ip-10-192-147-31:/var/www># curl localhost:8000
    <html>
    <head><title>403 Forbidden</title></head>
    <body>
    <center><h1>403 Forbidden</h1></center>
    <hr><center>nginx/1.23.1</center>
    </body>
    </html>
    * Connection #0 to host localhost left intact

    # Create index.html
    root@ip-10-192-147-31:/var/www># echo test1234 > index.html
    root@ip-10-192-147-31:/root># ls -al /var/www/
    total 4
    drwxr-x---  2 root root  24 Sep  2 18:04 .
    drwxr-xr-x 21 root root 293 Sep  2 17:47 ..
    -rwxrwxrwx  1 root root   9 Sep  2 18:04 index.html

Hello! A question came up while practicing mounting a host volume into a container. After the steps above, hitting localhost:8000 should return test1234, but:

    # ...? Why isn't it showing?
    root@ip-10-192-147-31:/var/www># curl -v localhost:8000
    * Trying 127.0.0.1:8000...
    * Connected to localhost (127.0.0.1) port 8000 (#0)
    > GET / HTTP/1.1
    > Host: localhost:8000
    > User-Agent: curl/7.79.1
    > Accept: */*
    >
    * Mark bundle as not supporting multiuse
    < HTTP/1.1 403 Forbidden
    < Server: nginx/1.23.1
    < Date: Fri, 02 Sep 2022 09:04:36 GMT
    < Content-Type: text/html
    < Content-Length: 153
    < Connection: keep-alive
    <
    <html>
    <head><title>403 Forbidden</title></head>
    <body>
    <center><h1>403 Forbidden</h1></center>
    <hr><center>nginx/1.23.1</center>
    </body>
    </html>
    * Connection #0 to host localhost left intact

    # Inside the container, the volume mount itself is fine
    root@b7819c1de150:/usr/share/nginx/html# cat index.html
    test1234

it doesn't show up. What could the cause be?
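For reference, a hedged reading of the ls output above: /var/www itself is mode 750 (drwxr-x--- root root), and nginx's worker process runs as a non-root user inside the container, so it cannot traverse the directory; hence the 403 even though index.html itself is world-readable. A minimal fix sketch:

    chmod o+rx /var/www      # let others traverse and read the directory
    curl localhost:8000      # should now return test1234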