2romade
@2romade7324
Course reviews written
1
Average rating
5.0
Posts
Q&A
Question about deployment
Hello~ First of all, since you said not to set up NAT on eth1, it was left unconfigured. I shut down all the VMs, and when I turned them back on about two days later, to my surprise everything is now Running normally ^^; I honestly can't explain it. I had been planning to spend some time creating the VMs myself and installing everything by hand to get a better grip on the basics anyway, so thanks to your materials I should be able to try a manual setup. Thank you.
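(For anyone who hits the same thing: a quick, generic way to confirm the cluster really recovered after a full VM restart. These are standard kubectl/crictl commands, not steps taken from the course materials.)

# Hedged sketch: basic post-reboot health checks on the control-plane node.
kubectl get nodes -o wide              # every node should report Ready
kubectl get pods -A                    # etcd, kube-apiserver and the CNI pods should be Running
crictl ps | grep -E 'etcd|apiserver'   # confirm the static-pod containers came back up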
Q&A
Question about kubectl get nodes
Yes, thank you for taking the time to look into this with me even though you are busy. I left the VM running for about 30 minutes and then tried again, but the error below just repeats endlessly.

E0105 20:45:36.621775 7057 memcache.go:265] couldn't get current server API group list: Get "https://192.168.1.10:6443/api?timeout=32s": dial tcp 192.168.1.10:6443: i/o timeout

Also, my earlier "it downloads fine / it doesn't download" comment was worded confusingly on my part. What I meant was that when I tried to build the cluster without the ova, I installed the packages and install files as guided but it did not work properly. So I am now trying with the ova instead, and that is not going well either. I hope the disk performance you mentioned is not the cause. Lastly, I am posting the log you asked for below. If this cannot be analyzed any further, I will try installing again by following the guide, and if that still fails I will continue the course focusing on the theory. I started this knowing nothing about Kubernetes, hoping to learn and practice step by step before a company project, so it is frustrating that it is not working. I will treat the hands-on part as something to learn in real operations; since I bought four courses including this one, I have nothing to lose, and I believe the course will still be worthwhile even if I focus mostly on the theory.

root@cp-k8s:~# crictl logs $(crictl ps -a | awk '/etcd/ {print $1}')
{"level":"warn","ts":"2026-01-05T11:13:59.45457Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
{"level":"info","ts":"2026-01-05T11:13:59.456334Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.1.10:2379","--cert-file=/etc/kubernetes/pki/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.1.10:2380","--initial-cluster=cp-k8s=https://192.168.1.10:2380","--key-file=/etc/kubernetes/pki/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.1.10:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.1.10:2380","--name=cp-k8s","--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/etc/kubernetes/pki/etcd/peer.key","--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt"]}
{"level":"info","ts":"2026-01-05T11:13:59.456933Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/etcd","dir-type":"member"}
{"level":"warn","ts":"2026-01-05T11:13:59.457375Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
{"level":"info","ts":"2026-01-05T11:13:59.457432Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.1.10:2380"]}
{"level":"info","ts":"2026-01-05T11:13:59.457591Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"error","ts":"2026-01-05T11:13:59.458781Z","caller":"embed/etcd.go:536","msg":"creating peer listener failed","error":"listen tcp 192.168.1.10:2380: bind: cannot assign requested address","stacktrace":"go.etcd.io/etcd/server/v3/embed.configurePeerListeners\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:536\ngo.etcd.io/etcd/server/v3/embed.StartEtcd\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:131\ngo.etcd.io/etcd/server/v3/etcdmain.startEtcd\n\tgo.etcd.io/etcd/server/v3/etcdmain/etcd.go:228\ngo.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2\n\tgo.etcd.io/etcd/server/v3/etcdmain/etcd.go:123\ngo.etcd.io/etcd/server/v3/etcdmain.Main\n\tgo.etcd.io/etcd/server/v3/etcdmain/main.go:40\nmain.main\n\tgo.etcd.io/etcd/server/v3/main.go:31\nruntime.main\n\truntime/proc.go:250"}
{"level":"info","ts":"2026-01-05T11:13:59.459635Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"cp-k8s","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://192.168.1.10:2380"],"advertise-client-urls":["https://192.168.1.10:2379"]}
{"level":"info","ts":"2026-01-05T11:13:59.459659Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"cp-k8s","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://192.168.1.10:2380"],"advertise-client-urls":["https://192.168.1.10:2379"]}
{"level":"fatal","ts":"2026-01-05T11:13:59.459723Z","caller":"etcdmain/etcd.go:204","msg":"discovery failed","error":"listen tcp 192.168.1.10:2380: bind: cannot assign requested address","stacktrace":"go.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2\n\tgo.etcd.io/etcd/server/v3/etcdmain/etcd.go:204\ngo.etcd.io/etcd/server/v3/etcdmain.Main\n\tgo.etcd.io/etcd/server/v3/etcdmain/main.go:40\nmain.main\n\tgo.etcd.io/etcd/server/v3/main.go:31\nruntime.main\n\truntime/proc.go:250"}
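For reference, the fatal entry above ("listen tcp 192.168.1.10:2380: bind: cannot assign requested address") typically means that no interface on the VM currently holds 192.168.1.10, so etcd cannot bind its peer port there. A minimal check along these lines can confirm it; this is a hedged sketch using standard iproute2/crictl commands and the course's 192.168.1.10 control-plane address, so adjust for your own setup.

# Does any interface actually own 192.168.1.10?
ip -o -4 addr show | grep '192.168.1.10/' || echo "no interface has 192.168.1.10"

# If it is missing, inspect the VM's network adapters and the netplan config.
ls /etc/netplan/
ip route

# Once the address exists, the kubelet will keep restarting the etcd static pod,
# so re-check whether it stays up:
crictl ps -a | grep etcd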
Q&A
Question about kubectl get nodes
Hello. I downloaded and tested the new ova file you mentioned, and it seems like half of the problem is resolved and half is the same as before. The result of kubectl get nodes still looks the same, but I did confirm that the API server process and its port now come up.

root@cp-k8s:~# kubectl get nodes
E0104 18:03:37.957653 742 memcache.go:265] couldn't get current server API group list: Get "https://192.168.1.10:6443/api?timeout=32s": dial tcp 192.168.1.10:6443: i/o timeout
^C
root@cp-k8s:~# ps -ef | grep kube-apiserver
root 983 778 43 18:03 ? 00:00:02 kube-apiserver --advertise-address=192.168.1.10 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
root 1092 733 0 18:03 pts/0 00:00:00 grep --color=auto kube-apiserver
root@cp-k8s:~# netstat -ntlp | grep 6443
tcp 0 0 0.0.0.0:6443 0.0.0.0:* LISTEN 983/kube-apiserver
root@cp-k8s:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /usr/lib/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Sun 2026-01-04 18:02:50 KST; 1min 26s ago
       Docs: https://kubernetes.io/docs/
   Main PID: 575 (kubelet)
      Tasks: 12 (limit: 2314)
     Memory: 95.8M
        CPU: 4.779s
     CGroup: /system.slice/kubelet.service
             └─575 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kube>
Jan 04 18:03:59 cp-k8s kubelet[575]: E0104 18:03:59.920260 575 event.go:368] "Unable >
Jan 04 18:04:00 cp-k8s kubelet[575]: E0104 18:04:00.185835 575 controller.go:145] "Fa>
Jan 04 18:04:04 cp-k8s kubelet[575]: E0104 18:04:04.167992 575 eviction_manager.go:28>
Jan 04 18:04:04 cp-k8s kubelet[575]: I0104 18:04:04.196400 575 scope.go:117] "RemoveC>
Jan 04 18:04:04 cp-k8s kubelet[575]: E0104 18:04:04.197618 575 pod_workers.go:1298] ">
Jan 04 18:04:05 cp-k8s kubelet[575]: I0104 18:04:05.217925 575 scope.go:117] "RemoveC>
Jan 04 18:04:05 cp-k8s kubelet[575]: E0104 18:04:05.219342 575 pod_workers.go:1298] ">
Jan 04 18:04:14 cp-k8s kubelet[575]: E0104 18:04:14.186167 575 eviction_manager.go:28>
Jan 04 18:04:16 cp-k8s kubelet[575]: E0104 18:04:16.778839 575 controller.go:145] "Fa>
Jan 04 18:04:17 cp-k8s kubelet[575]: I0104 18:04:17.037646 575 scope.go:117] "RemoveC>
lines 1-23/23 (END)
^C
root@cp-k8s:~# journalctl -u kubelet -xe
Jan 04 18:03:56 cp-k8s kubelet[575]: I0104 18:03:56.998400 575 scope.go:117] "RemoveC>
Jan 04 18:03:57 cp-k8s kubelet[575]: E0104 18:03:57.009098 575 pod_workers.go:1298] ">
Jan 04 18:03:57 cp-k8s kubelet[575]: I0104 18:03:57.999163 575 scope.go:117] "RemoveC>
Jan 04 18:03:58 cp-k8s kubelet[575]: E0104 18:03:57.999941 575 pod_workers.go:1298] ">
Jan 04 18:03:59 cp-k8s kubelet[575]: E0104 18:03:59.920260 575 event.go:368] "Unable >
Jan 04 18:04:00 cp-k8s kubelet[575]: E0104 18:04:00.185835 575 controller.go:145] "Fa>
Jan 04 18:04:04 cp-k8s kubelet[575]: E0104 18:04:04.167992 575 eviction_manager.go:28>
Jan 04 18:04:04 cp-k8s kubelet[575]: I0104 18:04:04.196400 575 scope.go:117] "RemoveC>
Jan 04 18:04:04 cp-k8s kubelet[575]: E0104 18:04:04.197618 575 pod_workers.go:1298] ">
Jan 04 18:04:05 cp-k8s kubelet[575]: I0104 18:04:05.217925 575 scope.go:117] "RemoveC>
Jan 04 18:04:05 cp-k8s kubelet[575]: E0104 18:04:05.219342 575 pod_workers.go:1298] ">
Jan 04 18:04:14 cp-k8s kubelet[575]: E0104 18:04:14.186167 575 eviction_manager.go:28>
Jan 04 18:04:16 cp-k8s kubelet[575]: E0104 18:04:16.778839 575 controller.go:145] "Fa>
Jan 04 18:04:17 cp-k8s kubelet[575]: I0104 18:04:17.037646 575 scope.go:117] "RemoveC>
Jan 04 18:04:17 cp-k8s kubelet[575]: I0104 18:04:17.857694 575 scope.go:117] "RemoveC>
Jan 04 18:04:17 cp-k8s kubelet[575]: I0104 18:04:17.859037 575 scope.go:117] "RemoveC>
Jan 04 18:04:17 cp-k8s kubelet[575]: E0104 18:04:17.859755 575 pod_workers.go:1298] ">
Jan 04 18:04:20 cp-k8s kubelet[575]: I0104 18:04:20.028284 575 scope.go:117] "RemoveC>
Jan 04 18:04:20 cp-k8s kubelet[575]: I0104 18:04:20.030469 575 scope.go:117] "RemoveC>
Jan 04 18:04:20 cp-k8s kubelet[575]: E0104 18:04:20.035034 575 pod_workers.go:1298] ">
Jan 04 18:04:24 cp-k8s kubelet[575]: E0104 18:04:24.191691 575 eviction_manager.go:28>
Jan 04 18:04:24 cp-k8s kubelet[575]: I0104 18:04:24.206879 575 scope.go:117] "RemoveC>
Jan 04 18:04:24 cp-k8s kubelet[575]: E0104 18:04:24.210842 575 pod_workers.go:1298] ">
Jan 04 18:04:24 cp-k8s kubelet[575]: I0104 18:04:24.212493 575 scope.go:117] "RemoveC>
Jan 04 18:04:24 cp-k8s kubelet[575]: E0104 18:04:24.213833 575 pod_workers.go:1298] ">
Jan 04 18:04:24 cp-k8s kubelet[575]: E0104 18:04:24.524448 575 kubelet_node_status.go>
Jan 04 18:04:25 cp-k8s kubelet[575]: I0104 18:04:25.339877 575 kubelet_node_status.go>
Jan 04 18:04:26 cp-k8s kubelet[575]: W0104 18:04:26.834142 575 reflector.go:547] k8s.>
Jan 04 18:04:26 cp-k8s kubelet[575]: I0104 18:04:26.835588 575 trace.go:236] Trace[13>
Jan 04 18:04:26 cp-k8s kubelet[575]: Trace[1326111383]: ---"Objects listed" error:Get "ht>
Jan 04 18:04:26 cp-k8s kubelet[575]: Trace[1326111383]: [30.21465485s] [30.21465485s] END
Jan 04 18:04:26 cp-k8s kubelet[575]: E0104 18:04:26.836778 575 reflector.go:150] k8s.>
Jan 04 18:04:26 cp-k8s kubelet[575]: W0104 18:04:26.856201 575 reflector.go:547] k8s.>
Jan 04 18:04:26 cp-k8s kubelet[575]: I0104 18:04:26.857476 575 trace.go:236] Trace[80>
Jan 04 18:04:26 cp-k8s kubelet[575]: Trace[801660039]: ---"Objects listed" error:Get "htt>
Jan 04 18:04:26 cp-k8s kubelet[575]: Trace[801660039]: [30.003630427s] [30.003630427s] END
Jan 04 18:04:26 cp-k8s kubelet[575]: E0104 18:04:26.857502 575 reflector.go:150] k8s.>
Jan 04 18:04:27 cp-k8s kubelet[575]: W0104 18:04:27.260236 575 reflector.go:547] k8s.>
Jan 04 18:04:27 cp-k8s kubelet[575]: I0104 18:04:27.261021 575 trace.go:236] Trace[10>
Jan 04 18:04:27 cp-k8s kubelet[575]: Trace[1055366097]: ---"Objects listed" error:Get "ht>
Jan 04 18:04:27 cp-k8s kubelet[575]: Trace[1055366097]: [30.146430352s] [30.146430352s] E>
Jan 04 18:04:27 cp-k8s kubelet[575]: E0104 18:04:27.261058 575 reflector.go:150] k8s.>

After that, running kubeadm certs check-expiration shows the following, including "Error reading configuration from the Cluster. Falling back to default configuration":

root@cp-k8s:~# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Dec 28, 2035 05:49 UTC   9y              ca                      no
apiserver                  Dec 28, 2035 05:49 UTC   9y              ca                      no
apiserver-etcd-client      Dec 28, 2035 05:49 UTC   9y              etcd-ca                 no
apiserver-kubelet-client   Dec 28, 2035 05:49 UTC   9y              ca                      no
controller-manager.conf    Dec 28, 2035 05:49 UTC   9y              ca                      no
etcd-healthcheck-client    Dec 28, 2035 05:49 UTC   9y              etcd-ca                 no
etcd-peer                  Dec 28, 2035 05:49 UTC   9y              etcd-ca                 no
etcd-server                Dec 28, 2035 05:49 UTC   9y              etcd-ca                 no
front-proxy-client         Dec 28, 2035 05:49 UTC   9y              front-proxy-ca          no
scheduler.conf             Dec 28, 2035 05:49 UTC   9y              ca                      no
super-admin.conf           Dec 28, 2035 05:49 UTC   9y              ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Dec 28, 2035 05:48 UTC   9y              no
etcd-ca                 Dec 28, 2035 05:48 UTC   9y              no
front-proxy-ca          Dec 28, 2035 05:48 UTC   9y              no
root@cp-k8s:~#

Is there anything else I am still missing here?
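One pattern worth noting in the output above: kube-apiserver is listening on 0.0.0.0:6443, yet kubectl times out against https://192.168.1.10:6443, so the next useful step is usually to separate "the apiserver is down" from "the advertised IP is unreachable". Below is a minimal sketch of that check using plain curl/kubectl/iproute2 commands and the 192.168.1.10 address from this setup; these are generic checks rather than the instructor's steps, and even a 401/403 response from /healthz still proves the network path works.

# 1. Loopback first: if this answers but the node IP does not, the apiserver is fine
#    and the problem is the 192.168.1.10 address or route.
curl -k --max-time 5 https://127.0.0.1:6443/healthz; echo

# 2. Same request against the advertised address kubectl uses.
curl -k --max-time 5 https://192.168.1.10:6443/healthz; echo

# 3. Which server endpoint is kubectl actually configured to use?
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'; echo

# 4. Does the node own the advertised IP at all?
ip -o -4 addr show | grep '192.168.1.10/' || echo "no interface has 192.168.1.10"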
Q&A
Question about kubectl get nodes
Hello, thank you for the kind reply. I'm not sure what you mean by an ova I had downloaded previously; I downloaded the ova file for the first time the day before yesterday. The link in the lecture notes leads to an ova file on OneDrive last modified on June 5, 2024, and that is the one I downloaded. I am also sharing the command output:

root@cp-k8s:~# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jun 01, 2034 00:40 UTC   8y              ca                      no
apiserver                  Jun 01, 2034 00:40 UTC   8y              ca                      no
apiserver-etcd-client      Jun 01, 2034 00:40 UTC   8y              etcd-ca                 no
apiserver-kubelet-client   Jun 01, 2034 00:40 UTC   8y              ca                      no
controller-manager.conf    Jun 01, 2034 00:40 UTC   8y              ca                      no
etcd-healthcheck-client    Jun 01, 2034 00:40 UTC   8y              etcd-ca                 no
etcd-peer                  Jun 01, 2034 00:40 UTC   8y              etcd-ca                 no
etcd-server                Jun 01, 2034 00:40 UTC   8y              etcd-ca                 no
front-proxy-client         Jun 01, 2034 00:40 UTC   8y              front-proxy-ca          no
scheduler.conf             Jun 01, 2034 00:40 UTC   8y              ca                      no
super-admin.conf           Jun 01, 2034 00:40 UTC   8y              ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jun 01, 2034 00:40 UTC   8y              no
etcd-ca                 Jun 01, 2034 00:40 UTC   8y              no
front-proxy-ca          Jun 01, 2034 00:40 UTC   8y              no

Thank you.
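As general context for that output (not taken from the instructor's answer): the certificates themselves are valid for years, and "Error reading configuration from the Cluster. Falling back to default configuration" appears whenever kubeadm cannot fetch the kubeadm-config ConfigMap from the API server, so the message points back at the unreachable API endpoint rather than at the certs. A minimal sketch of how to confirm that, using the standard kubeadm file paths:

# The endpoint admin.conf points at; it should match the VM's control-plane IP.
grep 'server:' /etc/kubernetes/admin.conf

# Trying to read the ConfigMap kubeadm needs reproduces the fallback when the API is unreachable.
kubectl --kubeconfig /etc/kubernetes/admin.conf -n kube-system get cm kubeadm-config -o yaml | head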
Q&A
Question about kubectl get nodes
Hello. If I have understood correctly:
1. I downloaded the OVA linked in the lecture notes. OVA: 2.3 (v1.30.0)
2. I will check this after work and let you know.
Thank you.




