Curious about the questions other students ask most often?
- Unresolved · 대세는 쿠버네티스 [초급~중급]
Error during the DaemonSet - HostPort hands-on
Hello. I'm working through the DaemonSet - HostPort hands-on, running it from the master node against a node IP, but an error occurs and I'd like to know why it isn't working. Is a Service not needed here? Or is it failing because the Service part is missing? I'm also wondering whether I need to think about ports or the firewall.
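For reference, the DaemonSet I'm applying looks roughly like this (the image name and port numbers are reconstructed from memory of the lecture, so treat them as placeholders):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-2
spec:
  selector:
    matchLabels:
      type: app
  template:
    metadata:
      labels:
        type: app
    spec:
      containers:
      - name: container
        image: kubetm/app
        ports:
        - containerPort: 8080
          hostPort: 18080    # hostPort binds the port directly on each node

If hostPort works the way I understand it, curl <node-ip>:18080 from the master should reach the pod running on that node even without a Service, which is what I'm trying to confirm.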
- Resolved · 대세는 쿠버네티스 [초급~중급]
Error when creating a pod with Longhorn and a PVC/PV
Hello. I'm really enjoying the course. I'm currently on the StatefulSet lecture. Creating the PVC works fine (the PV is auto-created as well), but when a Pod is created through a ReplicaSet, or created directly, and it binds to the PVC/PV and connects to the Longhorn volume, the volume fails to be created and I get an error (it stays in the attaching state). As far as I remember, in the earlier intermediate Volume lecture there was no problem creating a Pod backed by Longhorn. Just in case, I created one more PVC as below and then a pod, and the pod is not created (to be precise, the problem occurs when the volume is connected to Longhorn).

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-fast1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1G
  storageClassName: "fast"

Just in case, I also created a new StorageClass named fast2, created a PVC with it, created a Pod through a ReplicaSet, and let it attach to a Longhorn volume (the attachment itself is automatic), but the problem was not resolved. Could this have been caused by deleting the dashboard and recreating it as 2.0? To describe the symptom in more detail: the pod keeps trying to start, fails to be created, and then a new pod is created and tries again, over and over. [Longhorn system status] [Longhorn-side error]
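The pod I used for the test looks roughly like this (the container name and image are placeholders; the only part that matters is the persistentVolumeClaim mount):

apiVersion: v1
kind: Pod
metadata:
  name: pod-fast1
spec:
  containers:
  - name: container
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-fast1   # the PVC above; Longhorn should attach the backing volume once the pod is scheduled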
- Unresolved · 대세는 쿠버네티스 [초급~중급]
curl command not found inside the container (image name in the materials differs)
The script in the hands-on materials lists the image name as kubetm. Inside the container, curl was reported as a command that does not exist, and when I watched the lecture closely, the image used in the video is tmkube. Please check.
- Unresolved · 대세는 쿠버네티스 [초급~중급]
Question about externalTrafficPolicy
apiVersion: v1
kind: Service
metadata:
  name: svc-2
spec:
  selector:
    app: pod
  ports:
  - port: 9000
    targetPort: 8080
    nodePort: 30001
  type: NodePort
  externalTrafficPolicy: Local

kind: Service
apiVersion: v1
metadata:
  name: svc-2
  namespace: default
  uid: fb123857-fa60-42d3-ab9c-f03a1a7b6348
  resourceVersion: '814181'
  creationTimestamp: '2023-11-27T12:38:12Z'
  managedFields:
  - manager: dashboard
    operation: Update
    apiVersion: v1
    time: '2023-11-27T12:38:12Z'
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:externalTrafficPolicy: {}
        f:internalTrafficPolicy: {}
        f:ports:
          .: {}
          k:{"port":9000,"protocol":"TCP"}:
            .: {}
            f:nodePort: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector: {}
        f:sessionAffinity: {}
        f:type: {}
spec:
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 8080
    nodePort: 30001
  selector:
    app: pod
  clusterIP: 10.100.174.243
  clusterIPs:
  - 10.100.174.243
  type: NodePort
  sessionAffinity: None
  externalTrafficPolicy: Local
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  internalTrafficPolicy: Cluster
status:
  loadBalancer: {}

I tested repeatedly against worker node 2's address, and the responses alternate between the two worker nodes; I'd like to confirm whether that is normal. On worker node 1's address only pod-1 is exposed. Shouldn't worker node 2's address likewise only get responses from pod-2?
- Unresolved · 대세는 쿠버네티스 [초급~중급]
404 error when running vagrant up
The URL https://vagrantcloud.com/rockylinux/boxes/8/versions/8.0.0/providers/virtualbox/unknown/vagrant.box returns a 404 error, so I cannot proceed.
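The box configuration in my Vagrantfile is roughly the following (a minimal sketch; no box_version is pinned, so Vagrant resolves version 8.0.0 on its own, and that is the download URL that 404s):

Vagrant.configure("2") do |config|
  config.vm.box = "rockylinux/8"
  # no config.vm.box_version is set, so Vagrant picks 8.0.0 and its virtualbox download URL returns 404
end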
- Unresolved · 대세는 쿠버네티스 [초급~중급]
Hello, about the images used in the kubetm/p8000 hands-on
Could you make them usable on arm64 as well? I'm getting CrashLoopBackOff, and I suspect it's because the CPU architecture doesn't match.
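The way I was checking whether the architecture is actually the problem is roughly this (assuming Docker is available on the machine; the pod name is a placeholder):

docker manifest inspect kubetm/p8000 | grep -i architecture   # lists the platforms the image was published for
kubectl logs <pod-name>                                        # an "exec format error" here usually means an architecture mismatch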
- Unresolved · 대세는 쿠버네티스 [초급~중급]
Where is the link to the hands-on files?
Hello. There used to be a link under the video where the course materials could be downloaded, but it no longer appears, perhaps after an update. Where can I download the materials used in the hands-on?
- Unresolved · 대세는 쿠버네티스 [초급~중급]
Are there any plans for a guide on setting up Kubernetes locally on Apple Silicon (M-series) Macs?
Are there any plans for a guide on setting up Kubernetes locally on Apple Silicon (M-series) Macs? I'm taking the course, but the hands-on is not easy on this machine.
- Unresolved · 대세는 쿠버네티스 [초급~중급]
kubernetes timezone
Hello. Your k8s course has been a huge help, as always. This time I have a question about the Kubernetes timezone. I found how to set the timezone per container, but even after setting the container's timezone to Asia/Seoul, when I look at the pod creation YAML the creationTimestamp is still a UTC value. It looks like the Kubernetes cluster itself uses UTC. Is there a command to check Kubernetes' own timezone, or is it possible to change the timezone from UTC to something else? Thank you as always, and have a good day.
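For context, the per-container timezone setting I mentioned is roughly the following (a minimal sketch using the TZ environment variable; the image is just an example):

apiVersion: v1
kind: Pod
metadata:
  name: pod-tz
spec:
  containers:
  - name: container
    image: nginx
    env:
    - name: TZ
      value: Asia/Seoul   # changes the local time inside this container only, not the UTC timestamps the API server records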
- Resolved · 대세는 쿠버네티스 [초급~중급]
1.27 Volume Longhorn installation error
Hello. For the Volume lecture hands-on I ran the command below on the master node, but an error occurs, so I'm asking about it. On node1 and node2 the installation completes normally. yum install -y iscsi-initiator-utils
- Unresolved · 대세는 쿠버네티스 [초급~중급]
Installing curl
Hello, and thank you for the great course! I'm currently on the Namespace lecture, testing curl from a pod in one namespace to the pod IP of another namespace. I get an error saying the curl command cannot be found, so I tried to install curl myself, starting with apt update, but I get 404 Not Found as below. Is this not the right way to install curl?

root@pod-1:/# apt update
Ign:1 http://deb.debian.org/debian stretch InRelease
Ign:2 http://deb.debian.org/debian stretch-updates InRelease
Err:3 http://deb.debian.org/debian stretch Release 404 Not Found
Err:4 http://deb.debian.org/debian stretch-updates Release 404 Not Found
Ign:5 http://security.debian.org/debian-security stretch/updates InRelease
Err:6 http://security.debian.org/debian-security stretch/updates Release 404 Not Found [IP: 151.101.2.132 80]
Reading package lists... Done
E: The repository 'http://deb.debian.org/debian stretch Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository 'http://deb.debian.org/debian stretch-updates Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository 'http://security.debian.org/debian-security stretch/updates Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

(Related lectures: Namespace, ResourceQuota, LimitRange / Namespace, ResourceQuota, LimitRange - Hands-on)
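One workaround I was considering (my own guess, not something from the course) is pointing apt at the Debian archive, since the image appears to be based on Debian stretch, which is past end of life; roughly:

sed -i 's|deb.debian.org/debian|archive.debian.org/debian|g' /etc/apt/sources.list
sed -i 's|security.debian.org/debian-security|archive.debian.org/debian-security|g' /etc/apt/sources.list
sed -i '/stretch-updates/d' /etc/apt/sources.list              # stretch-updates is not mirrored on the archive
apt-get -o Acquire::Check-Valid-Until=false update
apt-get install -y curl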
- Resolved · 대세는 쿠버네티스 [초급~중급]
Master node cannot be shut down with vagrant halt
C:\Users\inchangson\k8s>vagrant halt
==> k8s-node2: Attempting graceful shutdown of VM...
==> k8s-node1: Attempting graceful shutdown of VM...
==> k8s-master: Attempting graceful shutdown of VM...
    k8s-master: Guest communication could not be established! This is usually because
    k8s-master: SSH is not running, the authentication information was changed,
    k8s-master: or some other networking issue. Vagrant will force halt, if
    k8s-master: capable.
==> k8s-master: Forcing shutdown of VM...

Whenever I shut down with vagrant, only the master node fails to shut down properly; how should I go about resolving this? If I wait and then press Ctrl+C, the messages below are left behind, and when I run vagrant halt again afterwards, no log is produced at all, as if it had already been removed.

^C
C:\Users\inchangson\k8s>==> k8s-master: Waiting for cleanup before exiting...
C:\Users\inchangson\k8s>==> k8s-master: Exiting immediately, without cleanup!
Traceback (most recent call last): 55: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/bin/vagrant:231:in `<main>' 54: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/environment.rb:290:in `cli' 53: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/cli.rb:67:in `execute' 52: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/plugins/commands/halt/command.rb:30:in `execute' 51: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/plugin/v2/command.rb:232:in `with_target_vms' 50: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/plugin/v2/command.rb:232:in `each' 49: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/plugin/v2/command.rb:243:in `block in with_target_vms' 48: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/plugins/commands/halt/command.rb:31:in `block in execute' 47: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/machine.rb:201:in `action' 46: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/machine.rb:201:in `call' 45: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/environment.rb:614:in `lock' 44: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/machine.rb:215:in `block in action' 43: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/machine.rb:246:in `action_raw' 42: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/runner.rb:89:in `run' 41: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/util/busy.rb:19:in `busy' 40: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/runner.rb:89:in `block in run' 39: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/builder.rb:149:in `call' 38: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/warden.rb:48:in `call' 37: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/plugins/providers/virtualbox/action/check_virtualbox.rb:26:in `call' 36: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/warden.rb:48:in `call' 35: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/builtin/call.rb:53:in `call' 34: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/runner.rb:89:in `run' 33: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/util/busy.rb:19:in `busy' 32: from 
C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/runner.rb:89:in `block in run' 31: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/builder.rb:149:in `call' 30: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/warden.rb:48:in `call' 29: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/warden.rb:127:in `block in finalize_action' 28: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/warden.rb:48:in `call' 27: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/plugins/providers/virtualbox/action/check_accessible.rb:18:in `call' 26: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/warden.rb:48:in `call' 25: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/plugins/providers/virtualbox/action/discard_state.rb:15:in `call' 24: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/warden.rb:48:in `call' 23: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/builtin/call.rb:53:in `call' 22: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/runner.rb:89:in `run' 21: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/util/busy.rb:19:in `busy' 20: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/runner.rb:89:in `block in run' 19: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/builder.rb:149:in `call' 18: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/warden.rb:48:in `call' 17: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/warden.rb:127:in `block in finalize_action' 16: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/warden.rb:48:in `call' 15: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/warden.rb:127:in `block in finalize_action' 14: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/warden.rb:48:in `call' 13: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/builtin/call.rb:53:in `call' 12: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/runner.rb:89:in `run' 11: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/util/busy.rb:19:in `busy' 10: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/runner.rb:89:in `block in run' 9: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/builder.rb:149:in `call' 8: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/warden.rb:48:in `call' 7: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/warden.rb:127:in `block in finalize_action' 6: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/action/warden.rb:48:in `call' 5: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/plugins/providers/virtualbox/action/forced_halt.rb:13:in `call' 4: from C:/HashiCorp/Vagrant/embedded/mingw64/lib/ruby/2.7.0/forwardable.rb:235:in `halt' 3: from 
C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/plugins/providers/virtualbox/driver/version_5_0.rb:416:in `halt' 2: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/plugins/providers/virtualbox/driver/base.rb:398:in `execute' 1: from C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/lib/vagrant/util/retryable.rb:17:in `retryable' C:/HashiCorp/Vagrant/embedded/gems/2.2.18/gems/vagrant-2.2.18/plugins/providers/virtualbox/driver/base.rb:440:in `block in execute': There was an error while executing `VBoxManage`, a CLI used by Vagrant (Vagrant::Errors::VBoxManageError) for controlling VirtualBox. The command and stderr is shown below. Command: ["controlvm", "91ca85fb-bd5d-4570-b664-3be7fcc7aceb", "poweroff"] Stderr: 0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%\r
- Unresolved · 대세는 쿠버네티스 [초급~중급]
Question about Ingress
Hello, and first of all thank you for the great course. I ran into something I couldn't get past while following the Ingress hands-on lecture. I'm working in a cloud environment, and my instances are as follows.

MS-worker-02 CentOS 7.9 10.2.0.67
MS-worker-01 CentOS 7.9 10.2.0.102
MS-master CentOS 7.9 10.2.0.72

The Kubernetes services are as follows.

[centos@ms-master ~]$ kubectl get svc
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP    20h
svc-customer   ClusterIP   10.97.122.176    <none>        8080/TCP   5h27m
svc-order      ClusterIP   10.96.220.47     <none>        8080/TCP   5h27m
svc-shopping   ClusterIP   10.106.190.227   <none>        8080/TCP   5h27m

[centos@ms-master ~]$ kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx                        ClusterIP   10.105.252.136   10.2.0.72     80/TCP,443/TCP               43m
ingress-nginx-controller             NodePort    10.99.180.38     <none>        80:31167/TCP,443:31190/TCP   5h42m
ingress-nginx-controller-admission   ClusterIP   10.105.89.17     <none>        443/TCP

How do I reach the services through the Ingress from outside the cluster? My thinking was that, since the ingress controller's node port is 31167, I could create a load balancer whose listener port and instance port are (80, 31167) and then access <load balancer floating IP>/order, and that should return a result, but it doesn't. When I instead make a service NodePort type and attach the load balancer to the service itself, it works fine. From inside an instance, curl <ingress address>:<ingress-controller node port>/order also returns the right result, so the Ingress itself seems to be configured correctly; how can I expose it to the outside? The ingress controller is ultimately just a Service as well, so I don't understand why it doesn't work. Or perhaps the right approach is to make the ingress controller itself a LoadBalancer type, but although I'm in a cloud environment I built the nodes myself, so an external IP is not assigned automatically. Is it possible to set the IP of an externally created load balancer directly in the YAML of a LoadBalancer-type ingress controller service? Or, even in a cloud environment, if you build Kubernetes yourself, should the load balancer be MetalLB, the same as on-premises?
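For what it's worth, the direction I was considering (my own guess, not something from the lecture) was to put the externally created load balancer's address on the controller Service myself, roughly like this, where 10.2.0.200 is a hypothetical floating IP:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer       # or leave it as NodePort and rely on externalIPs only
  externalIPs:
  - 10.2.0.200             # hypothetical address of the load balancer created outside the cluster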
- Resolved · 대세는 쿠버네티스 [초급~중급]
VM automatically deleted partway through vagrant up
vagrant up
Bringing machine 'k8s-master' up with 'virtualbox' provider...
Bringing machine 'k8s-node1' up with 'virtualbox' provider...
Bringing machine 'k8s-node2' up with 'virtualbox' provider...
==> k8s-master: Importing base box 'centos/7'...
==> k8s-master: Matching MAC address for NAT networking...
==> k8s-master: Checking if box 'centos/7' version '2004.01' is up to date...
==> k8s-master: Setting the name of the VM: k8s_k8s-master_1698817614061_95680
==> k8s-master: Clearing any previously set network interfaces...
==> k8s-master: Preparing network interfaces based on configuration...
    k8s-master: Adapter 1: nat
    k8s-master: Adapter 2: hostonly
==> k8s-master: Forwarding ports...
    k8s-master: 22 (guest) => 2222 (host) (adapter 1)
==> k8s-master: Running 'pre-boot' VM customizations...
==> k8s-master: Booting VM...
==> k8s-master: Waiting for machine to boot. This may take a few minutes...
    k8s-master: SSH address: 127.0.0.1:2222
    k8s-master: SSH username: vagrant
    k8s-master: SSH auth method: private key
    k8s-master: Vagrant insecure key detected. Vagrant will automatically replace
    k8s-master: this with a newly generated keypair for better security.
    k8s-master: Inserting generated public key within guest...
    k8s-master: Removing insecure key from the guest if it's present...
    k8s-master: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-master: Machine booted and ready!
==> k8s-master: Checking for guest additions in VM...
    k8s-master: No guest additions were detected on the base box for this VM! Guest
    k8s-master: additions are required for forwarded ports, shared folders, host only
    k8s-master: networking, and more. If SSH fails on this machine, please install
    k8s-master: the guest additions and repackage the box to continue.
    k8s-master: This is not an error message; everything may continue to work properly,
    k8s-master: in which case you may ignore this message.
==> k8s-master: Attempting graceful shutdown of VM...
==> k8s-master: Destroying VM and associated drives...
C:/Program Files (x86)/Vagrant/embedded/gems/gems/i18n-1.14.1/lib/i18n.rb:210:in `translate': wrong number of arguments (given 2, expected 0..1) (ArgumentError)

It proceeds like this: the VM gets created and is then destroyed. What is going wrong? I would appreciate an answer.
- Resolved · 대세는 쿠버네티스 [초급~중급]
vagrant up stuck at the image pull step
When running vagrant up, it hangs for about an hour at the step "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'", so I'm asking about it. The server log is at the link below. The VirtualBox, Vagrant, and host OS versions are the same as in the course material. With vagrant-vbguest, a mount error occurred midway when using version 3.0, so I forced the version to 0.21. https://ballistic-uncle-12b.notion.site/console-log-243c674068b84f6baf1eededae1cb987?pvs=4
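For reference, this is how I pinned the vbguest plugin version, and what I was planning to try next to see which image is hanging (the VM name assumes the Vagrantfile from the course):

vagrant plugin install vagrant-vbguest --plugin-version 0.21   # pin vbguest to 0.21 to avoid the mount error
vagrant ssh k8s-master
sudo kubeadm config images pull                                # run the pull by hand to see which image stalls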
- Unresolved · 대세는 쿠버네티스 [초급~중급]
No response from curl after creating pod-1
Environment: AWS EC2 / CentOS 7.9 / v1.22.0. Following the video, pod-1 appears to have been created, but when I run curl there is no response.

[root@k8s-master ~]# kubectl get pods -A -o wide
NAMESPACE              NAME                                         READY   STATUS    RESTARTS        AGE    IP              NODE         NOMINATED NODE   READINESS GATES
calico-system          calico-kube-controllers-8fdfc695-69ch4       1/1     Running   1 (4m11s ago)   100m   20.108.82.200   k8s-master   <none>           <none>
calico-system          calico-node-84fcs                            1/1     Running   1 (4m11s ago)   100m   192.168.56.30   k8s-master   <none>           <none>
calico-system          calico-node-pqfwm                            1/1     Running   0               98m    192.168.56.31   k8s-node1    <none>           <none>
calico-system          calico-node-tt922                            1/1     Running   0               98m    192.168.56.32   k8s-node2    <none>           <none>
calico-system          calico-typha-c477bcd79-2fv8g                 1/1     Running   1 (4m4s ago)    100m   192.168.56.30   k8s-master   <none>           <none>
calico-system          calico-typha-c477bcd79-6xgkn                 1/1     Running   0               98m    192.168.56.31   k8s-node1    <none>           <none>
calico-system          csi-node-driver-4lght                        2/2     Running   0               98m    20.111.156.65   k8s-node1    <none>           <none>
calico-system          csi-node-driver-pj7zm                        2/2     Running   0               97m    20.109.131.1    k8s-node2    <none>           <none>
calico-system          csi-node-driver-vzfdn                        2/2     Running   2 (4m11s ago)   100m   20.108.82.203   k8s-master   <none>           <none>
default                pod-1                                        2/2     Running   0               41m    20.109.131.2    k8s-node2    <none>           <none>
kube-system            coredns-78fcd69978-ncl62                     1/1     Running   26 (5m19s ago)  110m   20.108.82.199   k8s-master   <none>           <none>
kube-system            coredns-78fcd69978-zdvl2                     1/1     Running   1 (4m6s ago)    110m   20.108.82.201   k8s-master   <none>           <none>
kube-system            etcd-k8s-master                              1/1     Running   1 (4m11s ago)   111m   192.168.56.30   k8s-master   <none>           <none>
kube-system            kube-apiserver-k8s-master                    1/1     Running   1 (4m1s ago)    111m   192.168.56.30   k8s-master   <none>           <none>
kube-system            kube-controller-manager-k8s-master           1/1     Running   1 (4m11s ago)   111m   192.168.56.30   k8s-master   <none>           <none>
kube-system            kube-proxy-m82wz                             1/1     Running   1 (4m11s ago)   110m   192.168.56.30   k8s-master   <none>           <none>
kube-system            kube-proxy-qmhcv                             1/1     Running   0               98m    192.168.56.31   k8s-node1    <none>           <none>
kube-system            kube-proxy-vsh5v                             1/1     Running   0               98m    192.168.56.32   k8s-node2    <none>           <none>
kube-system            kube-scheduler-k8s-master                    1/1     Running   1 (4m11s ago)   111m   192.168.56.30   k8s-master   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-856586f554-qwmzq   1/1     Running   1 (4m11s ago)   99m    20.108.82.198   k8s-master   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-5949b5c856-ql8vx        1/1     Running   1 (4m11s ago)   99m    20.108.82.202   k8s-master   <none>           <none>
tigera-operator        tigera-operator-cffd8458f-8z85v              1/1     Running   1 (4m11s ago)   100m   192.168.56.30   k8s-master   <none>           <none>

[root@k8s-master ~]# curl 20.109.131.2:8000
^C
[root@k8s-master ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask           Flags Metric Ref   Use Iface
0.0.0.0         192.168.56.1    0.0.0.0           UG    0      0       0 eth0
20.108.82.192   0.0.0.0         255.255.255.192   U     0      0       0 *
20.108.82.198   0.0.0.0         255.255.255.255   UH    0      0       0 cali6c5795e996a
20.108.82.199   0.0.0.0         255.255.255.255   UH    0      0       0 cali054099bed35
20.108.82.200   0.0.0.0         255.255.255.255   UH    0      0       0 cali7b0ee01ff08
20.108.82.201   0.0.0.0         255.255.255.255   UH    0      0       0 cali2d518c9126c
20.108.82.202   0.0.0.0         255.255.255.255   UH    0      0       0 calib55986b3261
20.108.82.203   0.0.0.0         255.255.255.255   UH    0      0       0 cali1dae5a2bc74
20.109.131.0    192.168.56.32   255.255.255.192   UG    0      0       0 eth0
20.111.156.64   192.168.56.31   255.255.255.192   UG    0      0       0 eth0
172.17.0.0      0.0.0.0         255.255.0.0       U     0      0       0 docker0
192.168.56.0    0.0.0.0         255.255.255.0     U     0      0       0 eth0
[root@k8s-master ~]#

In the AWS security group I allowed all TCP/UDP traffic between the EC2 instances, and I also allowed all TCP/UDP for the 20.0.0.0/8 range. What else should I check?
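What I was planning to check next, in case it helps narrow things down (the container name below is just a placeholder):

curl 20.109.131.2:8000                      # run this on k8s-node2, where pod-1 is scheduled, to rule out cross-node routing
kubectl logs pod-1 -c <container-name>      # confirm the app inside the pod actually started and is listening on 8000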
- Unresolved · 대세는 쿠버네티스 [초급~중급]
Question about Kubernetes Cluster installation flow [v1.15] (lecture hands-on version) (no audio)
Thank you for your work. While following the "Kubernetes Cluster installation flow [v1.15] (lecture hands-on version) (no audio)" hands-on on a Mac, I set up the VMs in VirtualBox 7.x, repeated the same setup three times, and everything completed, but from Termius on the Mac I simply cannot connect via SSH to the VM built at 192.168.0.30 inside VirtualBox. If I open the VM directly in VirtualBox, I can log in by typing the root password, but from the Mac terminal it just will not connect. For reference, I'm on an Intel chip; why would this be? With "Kubernetes Cluster installation [v1.22] (latest, easy-install version)", Termius on the Mac connects to the VirtualBox VMs without any problem. I followed every setting exactly as shown... It really does not work, so I'm asking. Is there some Mac setting I'm not aware of?
- Unresolved · 대세는 쿠버네티스 [초급~중급]
Connecting to a pod Service in a different namespace
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: test-ing-wan
  namespace: test-wan
spec:
  ingressClassName: user-ingress-class
  rules:
  - host: wan.test.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-svc-wan
            port:
              number: 8080
-----------------------------------------------
kind: Service
apiVersion: v1
metadata:
  name: test-svc-wan
  namespace: test-wan
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  type: ExternalName
  sessionAffinity: None
  externalName: test-deploy-lan.test-lan.svc.cluster.local
  internalTrafficPolicy: Cluster
status:
  loadBalancer: {}

kind: Service
apiVersion: v1
metadata:
  name: test-deploy-lan
  namespace: test-lan
spec:
  ports:
  - name: http-port
    protocol: TCP
    port: 8080
    targetPort: 8080
    nodePort: 31141
  selector:
    app: test-deploy-lan
  clusterIP: 10.96.138.89
  clusterIPs:
  - 10.96.138.89
  type: NodePort
  sessionAffinity: None
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  internalTrafficPolicy: Cluster
status:
  loadBalancer: {}

I'm trying to have traffic come in through the Ingress in the external-network namespace (test-wan) and be routed to the Service in the internal-network namespace (test-lan), but it fails with a 502 Bad Gateway as above and never connects. If I connect directly to the internal namespace's NodePort, the service page works, but connecting via the domain does not (I added the IP and domain to the hosts file). Could you please take a look at what might be wrong?
- Resolved · 대세는 쿠버네티스 [초급~중급]
Question about a vagrant up error
Hello. I've been trying again and again to set up the hands-on environment, and it has not been resolved for several days.

Environment: Windows 10, VirtualBox 6.1.26, Vagrant 2.2.18

I'm installing by following this link: https://kubetm.github.io/k8s/02-beginner/cluster-install-case6/. No matter how many times I delete everything and reinstall, I cannot get it to succeed. I would appreciate your help.

\k8s>vagrant up
Bringing machine 'k8s-master' up with 'virtualbox' provider...
Bringing machine 'k8s-node1' up with 'virtualbox' provider...
Bringing machine 'k8s-node2' up with 'virtualbox' provider...
==> k8s-master: Importing base box 'centos/7'...
==> k8s-master: Matching MAC address for NAT networking...
==> k8s-master: Checking if box 'centos/7' version '2004.01' is up to date...
==> k8s-master: Setting the name of the VM: k8s_k8s-master_1697506308375_93844
==> k8s-master: Clearing any previously set network interfaces...
==> k8s-master: Preparing network interfaces based on configuration...
    k8s-master: Adapter 1: nat
    k8s-master: Adapter 2: hostonly
==> k8s-master: Forwarding ports...
    k8s-master: 22 (guest) => 2222 (host) (adapter 1)
==> k8s-master: Running 'pre-boot' VM customizations...
==> k8s-master: Booting VM...
==> k8s-master: Waiting for machine to boot. This may take a few minutes...
    k8s-master: SSH address: 127.0.0.1:2222
    k8s-master: SSH username: vagrant
    k8s-master: SSH auth method: private key
    k8s-master: Vagrant insecure key detected. Vagrant will automatically replace
    k8s-master: this with a newly generated keypair for better security.
    k8s-master: Inserting generated public key within guest...
    k8s-master: Removing insecure key from the guest if it's present...
    k8s-master: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-master: Machine booted and ready!
[k8s-master] No Virtualbox Guest Additions installation found.
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.cat.net
 * extras: ftp.riken.jp
 * updates: ftp.riken.jp
Resolving Dependencies
--> Running transaction check
---> Package centos-release.x86_64 0:7-8.2003.0.el7.centos will be updated
---> Package centos-release.x86_64 0:7-9.2009.1.el7.centos will be an update
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
 Package            Arch      Version                   Repository   Size
================================================================================
Updating:
 centos-release     x86_64    7-9.2009.1.el7.centos     updates      27 k
Transaction Summary
================================================================================
Upgrade  1 Package
Total download size: 27 k
Downloading packages:
No Presto metadata available for updates
warning: /var/cache/yum/x86_64/7/updates/packages/centos-release-7-9.2009.1.el7.centos.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Public key for centos-release-7-9.2009.1.el7.centos.x86_64.rpm is not installed
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Importing GPG key 0xF4A80EB5:
 Userid     : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"
 Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
 Package    : centos-release-7-8.2003.0.el7.centos.x86_64 (@anaconda)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : centos-release-7-9.2009.1.el7.centos.x86_64   1/2
  Cleanup    : centos-release-7-8.2003.0.el7.centos.x86_64   2/2
  Verifying  : centos-release-7-9.2009.1.el7.centos.x86_64   1/2
  Verifying  : centos-release-7-8.2003.0.el7.centos.x86_64   2/2
Updated:
  centos-release.x86_64 0:7-9.2009.1.el7.centos
Complete!
Loaded
plugins: fastestmirrorLoading mirror speeds from cached hostfile * base: mirrors.cat.net * extras: ftp.riken.jp * updates: ftp.riken.jphttp://vault.centos.org/7.0.1406/os/x86_64/repodata/repomd.xml: [Errno 14] HTTPS Error 301 - Moved PermanentlyTrying other mirror. One of the configured repositories failed (CentOS-7.0.1406 - Base), and yum doesn't have enough cached data to continue. At this point the only safe thing yum can do is fail. There are a few ways to work "fix" this: 1. Contact the upstream for the repository and get them to fix the problem. 2. Reconfigure the baseurl/etc. for the repository, to point to a working upstream. This is most often useful if you are using a newer distribution release than is supported by the repository (and the packages for the previous distribution release still work). 3. Run the command with the repository temporarily disabled yum --disablerepo=C7.0.1406-base ... 4. Disable the repository permanently, so yum won't use it by default. Yum will then just ignore the repository until you permanently enable it again or use --enablerepo for temporary usage: yum-config-manager --disable C7.0.1406-base or subscription-manager repos --disable=C7.0.1406-base 5. Configure the failing repository to be skipped, if it is unavailable. Note that yum will try to contact the repo. when it runs most commands, so will have to try and fail each time (and thus. yum will be be much slower). If it is a very temporary problem though, this is often a nice compromise: yum-config-manager --save --setopt=C7.0.1406-base.skip_if_unavailable=truefailure: repodata/repomd.xml from C7.0.1406-base: [Errno 256] No more mirrors to try.http://vault.centos.org/7.0.1406/os/x86_64/repodata/repomd.xml: [Errno 14] HTTPS Error 301 - Moved Permanently==> k8s-master: Checking for guest additions in VM... k8s-master: No guest additions were detected on the base box for this VM! Guest k8s-master: additions are required for forwarded ports, shared folders, host only k8s-master: networking, and more. If SSH fails on this machine, please install k8s-master: the guest additions and repackage the box to continue. k8s-master: k8s-master: This is not an error message; everything may continue to work properly, k8s-master: in which case you may ignore this message.The following SSH command responded with a non-zero exit status.Vagrant assumes that this means the command failed!yum install -y kernel-devel-`uname -r` --enablerepo=C*-base --enablerepo=C*-updatesStdout from the command:Loaded plugins: fastestmirrorLoading mirror speeds from cached hostfile * base: mirrors.cat.net * extras: ftp.riken.jp * updates: ftp.riken.jpStderr from the command:http://vault.centos.org/7.0.1406/os/x86_64/repodata/repomd.xml: [Errno 14] HTTPS Error 301 - Moved PermanentlyTrying other mirror. One of the configured repositories failed (CentOS-7.0.1406 - Base), and yum doesn't have enough cached data to continue. At this point the only safe thing yum can do is fail. There are a few ways to work "fix" this: 1. Contact the upstream for the repository and get them to fix the problem. 2. Reconfigure the baseurl/etc. for the repository, to point to a working upstream. This is most often useful if you are using a newer distribution release than is supported by the repository (and the packages for the previous distribution release still work). 3. Run the command with the repository temporarily disabled yum --disablerepo=C7.0.1406-base ... 4. Disable the repository permanently, so yum won't use it by default. 
Yum will then just ignore the repository until you permanently enable it again or use --enablerepo for temporary usage: yum-config-manager --disable C7.0.1406-base or subscription-manager repos --disable=C7.0.1406-base 5. Configure the failing repository to be skipped, if it is unavailable. Note that yum will try to contact the repo. when it runs most commands, so will have to try and fail each time (and thus. yum will be be much slower). If it is a very temporary problem though, this is often a nice compromise: yum-config-manager --save --setopt=C7.0.1406-base.skip_if_unavailable=truefailure: repodata/repomd.xml from C7.0.1406-base: [Errno 256] No more mirrors to try.http://vault.centos.org/7.0.1406/os/x86_64/repodata/repomd.xml: [Errno 14] HTTPS Error 301 - Moved PermanentlyC:\Users\SungwookCho\k8s>
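One thing I was going to try (my own guess, not from the guide) is turning off the vbguest auto-install in the Vagrantfile, since it is this guest-additions step that runs yum against the retired CentOS 7.0.1406 repository:

# in the Vagrantfile; requires the vagrant-vbguest plugin
config.vbguest.auto_update = false   # skip installing/upgrading guest additions when the VM boots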
- Unresolved · 대세는 쿠버네티스 [초급~중급]
3-3) Pod hands-on
In the 3-3) Pod hands-on, mountPath was set to /mount3, and in the theory and hands-on you explained that the corresponding path has to have been created beforehand. In the hands-on you said this pod is created successfully because the path was created during the hostPath hands-on, but as far as I remember there was no step in the hostPath hands-on that created /mount3. I'm curious why the pod still starts normally.
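For reference, the part of the spec I'm asking about is roughly this (written from memory, so the names, image, and paths are approximate rather than exact):

apiVersion: v1
kind: Pod
metadata:
  name: pod-volume-3
spec:
  containers:
  - name: container
    image: kubetm/init
    volumeMounts:
    - name: host-path
      mountPath: /mount3         # path inside the container
  volumes:
  - name: host-path
    hostPath:
      path: /node-v              # path on the node, which is the one that has to exist (or be creatable)
      type: DirectoryOrCreate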