2025. 06. 22.
[SPG-박상준] Week 4 Footprints
And so I've arrived at my final review. Working in the finance industry, simply attending the lectures was no small feat, but thanks to the leader role I happened to take on, I remember making time after work to participate diligently, if only for the sake of the members who followed along so well. The saying "you can't fill your belly with the first spoonful" comes to mind. A great strength of this course is that the more you revisit it, the more effectively you can apply it in practice. Real work environments all differ and the solutions in use vary, but the fundamentals do not change. What matters in the end is understanding exactly how things work; once you grasp the underlying principles, you can apply them the same way even when the solution changes. This course was a great help in understanding those principles clearly.
2025. 06. 15.
[Inflearn Warm-up Club Season 4 - DevOps] Mission 5
Prerequisites — download the Dockerfile and app source:

```
# Download the Dockerfile and app source
curl -O https://raw.githubusercontent.com/k8s-1pro/install/main/ground/etc/docker/Dockerfile
curl -O https://raw.githubusercontent.com/k8s-1pro/install/main/ground/etc/docker/hello.js

[root@cicd-server ~]# ls
Dockerfile  hello.js
```

Full practice run — the whole image lifecycle (build, push, delete, pull, save, load) in one pass:

```
docker build -t golreas/hello:1.0.0 .
[+] Building 12.1s (8/8) FINISHED  docker:default
 => [internal] load build definition from Dockerfile  0.0s
 => => transferring dockerfile: 154B  0.0s
 => [internal] load .dockerignore  0.0s
 => => transferring context: 2B  0.0s
 => [internal] load metadata for docker.io/library/node:slim  2.3s
 => [auth] library/node:pull token for registry-1.docker.io  0.0s
 => [internal] load build context  0.0s
 => => transferring context: 272B  0.0s
 => [1/2] FROM docker.io/library/node:slim@sha256:b30c143a092c7dced8e17ad67a8783c03234d4844ee84c39090c9780491aaf89  9.5s
 => => resolve docker.io/library/node:slim@sha256:b30c143a092c7dced8e17ad67a8783c03234d4844ee84c39090c9780491aaf89  0.0s
 => => sha256:85878ac12a824d35ede83635c5aa0a6b4c83fe0b8fa5fb125e1fc839a5af01a7 6.59kB / 6.59kB  0.0s
 => => sha256:34ef2a75627f6089e01995bfd3b3786509bbdc7cfb4dbc804b642e195340dbc9 28.08MB / 28.08MB  7.8s
 => => sha256:00b6bc59183634774862a1f5d9fa777966ffdd8b4edd6fe07006671358dfc249 3.31kB / 3.31kB  0.5s
 => => sha256:7293ae927b976710c33b54ae3957471f36b9e1150408853c3dfbd7baff3f59d1 50.52MB / 50.52MB  7.6s
 => => sha256:b30c143a092c7dced8e17ad67a8783c03234d4844ee84c39090c9780491aaf89 5.20kB / 5.20kB  0.0s
 => => sha256:af442a7998c3f3a985309cfa7b709ea8d3f1911ea19a598f1f1a2e158273c73e 1.93kB / 1.93kB  0.0s
 => => sha256:148b7926ba2143f7dbd1efaab45bd08b5fde13f01510d1319ee7cd0aa781f8d0 1.71MB / 1.71MB  1.9s
 => => sha256:0a5428d7ed1bdde6d0638d39b519fcd3307eb60e70ba9f220d1066b39a71de93 447B / 447B  2.1s
 => => extracting sha256:34ef2a75627f6089e01995bfd3b3786509bbdc7cfb4dbc804b642e195340dbc9  0.6s
 => => extracting sha256:00b6bc59183634774862a1f5d9fa777966ffdd8b4edd6fe07006671358dfc249  0.0s
 => => extracting sha256:7293ae927b976710c33b54ae3957471f36b9e1150408853c3dfbd7baff3f59d1  0.7s
 => => extracting sha256:148b7926ba2143f7dbd1efaab45bd08b5fde13f01510d1319ee7cd0aa781f8d0  0.1s
 => => extracting sha256:0a5428d7ed1bdde6d0638d39b519fcd3307eb60e70ba9f220d1066b39a71de93  0.0s
 => [2/2] COPY hello.js .  0.2s
 => exporting to image  0.0s
 => => exporting layers  0.0s
 => => writing image sha256:f8812cc66e7be6bd8a78ca25a7701407a6aa40bf06d11ca572f61d63c91944a6  0.0s
 => => naming to docker.io/golreas/hello:1.0.0

$ docker image list
REPOSITORY           TAG      IMAGE ID       CREATED          SIZE
golreas/hello        1.0.0    f8812cc66e7b   48 seconds ago   249MB
golreas/api-tester   v1.0.0   9438a37e6182   3 hours ago      520MB

# docker login -u golreas
Password:
WARNING! Your password will be stored unencrypted in /var/lib/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

[jenkins@cicd-server ~]$ docker push golreas/hello:1.0.0
The push refers to repository [docker.io/golreas/hello]
84cd54ae51c5: Pushed
a04dc377afe1: Mounted from library/node
1b2a793e9797: Mounted from library/node
0fa9dab4f369: Mounted from library/node
abb3903f11f9: Mounted from library/node
6edfb9bfff29: Mounted from library/node
1.0.0: digest: sha256:9e8c2be45e8618f075510b98d7e554d599c3ba8ed1f083faedcee243aff8e9c8 size: 1574

docker rmi golreas/hello:1.0.0
Untagged: golreas/hello:1.0.0

[jenkins@cicd-server ~]$ docker pull golreas/hello:1.0.0
1.0.0: Pulling from golreas/hello
Digest: sha256:9e8c2be45e8618f075510b98d7e554d599c3ba8ed1f083faedcee243aff8e9c8
Status: Downloaded newer image for golreas/hello:1.0.0
docker.io/golreas/hello:1.0.0

docker save -o file.tar golreas/hello:1.0.0

[jenkins@cicd-server ~]$ docker load -i file.tar
Loaded image: golreas/hello:1.0.0
```

Build:

```
$ docker build -t golreas/hello:1.0.0 .
[+] Building 1.7s (8/8) FINISHED  docker:default
 => [internal] load build definition from Dockerfile  0.0s
 => => transferring dockerfile: 154B  0.0s
 => [internal] load .dockerignore  0.0s
 => => transferring context: 2B  0.0s
 => [internal] load metadata for docker.io/library/node:slim  1.7s
 => [auth] library/node:pull token for registry-1.docker.io  0.0s
 => [internal] load build context  0.0s
 => => transferring context: 87B  0.0s
 => [1/2] FROM docker.io/library/node:slim@sha256:b30c143a092c7dced8e17ad67a8783c03234d4844ee84c39090c9780491aaf89  0.0s
 => CACHED [2/2] COPY hello.js .  0.0s
 => exporting to image  0.0s
 => => exporting layers  0.0s
 => => writing image sha256:f8812cc66e7be6bd8a78ca25a7701407a6aa40bf06d11ca572f61d63c91944a6  0.0s
 => => naming to docker.io/golreas/hello:1.0.0
```

List images:

```
$ docker image list
REPOSITORY           TAG      IMAGE ID       CREATED          SIZE
golreas/hello        1.0.0    f8812cc66e7b   10 minutes ago   249MB
golreas/api-tester   v1.0.0   9438a37e6182   3 hours ago      520MB
```

Change the tag:

```
docker tag golreas/hello:1.0.0 golreas/hello:2.0.0

$ docker image list
REPOSITORY           TAG      IMAGE ID       CREATED          SIZE
golreas/hello        1.0.0    f8812cc66e7b   13 minutes ago   249MB
golreas/hello        2.0.0    f8812cc66e7b   13 minutes ago   249MB
golreas/api-tester   v1.0.0   9438a37e6182   4 hours ago      520MB
```

Log in:

```
docker login -u golreas
Password:
WARNING! Your password will be stored unencrypted in /var/lib/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
```

Push the image:

```
docker push golreas/hello:1.0.0
The push refers to repository [docker.io/golreas/hello]
84cd54ae51c5: Layer already exists
a04dc377afe1: Layer already exists
1b2a793e9797: Layer already exists
0fa9dab4f369: Layer already exists
abb3903f11f9: Layer already exists
6edfb9bfff29: Layer already exists
1.0.0: digest: sha256:9e8c2be45e8618f075510b98d7e554d599c3ba8ed1f083faedcee243aff8e9c8 size: 1574
```

Pull the image:

```
docker pull golreas/hello:1.0.0
1.0.0: Pulling from golreas/hello
Digest: sha256:9e8c2be45e8618f075510b98d7e554d599c3ba8ed1f083faedcee243aff8e9c8
Status: Image is up to date for golreas/hello:1.0.0
docker.io/golreas/hello:1.0.0
```

Save the image to a file:

```
docker save -o file.tar golreas/hello:1.0.0

[jenkins@cicd-server ~]$ ls -l file.tar
-rw-------. 1 jenkins jenkins 255018496 Jun 15 00:37 file.tar
```

Delete the image:

```
docker rmi golreas/hello:1.0.0
Untagged: golreas/hello:1.0.0
```

Load the image back from the file:

```
docker load -i file.tar
Loaded image: golreas/hello:1.0.0

[jenkins@cicd-server ~]$ docker image list
REPOSITORY           TAG      IMAGE ID       CREATED          SIZE
golreas/hello        1.0.0    f8812cc66e7b   19 minutes ago   249MB
golreas/hello        2.0.0    f8812cc66e7b   19 minutes ago   249MB
golreas/api-tester   v1.0.0   9438a37e6182   4 hours ago      520MB
```

Clean up:

```
docker rmi golreas/hello:1.0.0
Untagged: golreas/hello:1.0.0

[jenkins@cicd-server ~]$ rm file.tar
```

Containerd — list namespaces:

```
ctr ns list
NAME   LABELS
k8s.io
```

List images in a specific namespace:

```
ctr -n k8s.io image list
REF TYPE DIGEST SIZE PLATFORMS LABELS
docker.io/1pro/api-tester:v1.0.0 application/vnd.oci.image.index.v1+json sha256:6b38dd347b66c7f14c393280a040831a72b4a93fd5beddc011ee852c26f35058 247.8 MiB linux/amd64,linux/arm64,unknown/unknown io.cri-containerd.image=managed
docker.io/1pro/api-tester:v2.0.0 application/vnd.oci.image.index.v1+json sha256:eed09de27648c5e13a7978069e1af63908bf4c6fd023d73de993e8b6abf556b4 247.8 MiB linux/amd64,linux/arm64,unknown/unknown io.cri-containerd.image=managed
```
The listing continues with the rest of the k8s.io namespace. Each image also appears a second and third time as an @sha256 digest ref and as a bare sha256 record carrying the same digest and size; those duplicate rows are collapsed below and only the tagged refs are kept:

```
docker.io/1pro/app-error:latest application/vnd.oci.image.index.v1+json sha256:cb23f9634d689a4fd2c34c2132f26ddc2361f15bc6320f9682304e3503ca0056 247.8 MiB linux/amd64,linux/arm64,unknown/unknown io.cri-containerd.image=managed
docker.io/1pro/app-update:latest application/vnd.oci.image.index.v1+json sha256:37b78640822e2563ecab155f691a2eef977472745ea09f6013e0e7f5402d64a9 247.8 MiB linux/amd64,linux/arm64,unknown/unknown io.cri-containerd.image=managed
docker.io/1pro/app:latest application/vnd.oci.image.index.v1+json sha256:9d81d340d25b6bf7ec48e742cc149c170cdf8c94263da540a7d7034be476bd6b 247.8 MiB linux/amd64,linux/arm64,unknown/unknown io.cri-containerd.image=managed
docker.io/calico/apiserver:v3.26.4 application/vnd.docker.distribution.manifest.list.v2+json sha256:c520f71091cd09a9c9628a4e010f6fbc6118da9573af46af5b3f4c3ed8d463dc 34.9 MiB linux/amd64,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
docker.io/calico/cni:v3.26.4 application/vnd.docker.distribution.manifest.list.v2+json sha256:7c5895c5d6ed3266bcd405fbcdbb078ca484688673c3479f0f18bf072d58c242 82.2 MiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
docker.io/calico/csi:v3.26.4 application/vnd.docker.distribution.manifest.list.v2+json sha256:0ab0fafee845c82c1a31bc2a3d5df29768626d570fbbead4813ad0da4a4ebf4b 9.2 MiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
docker.io/calico/kube-controllers:v3.26.4 application/vnd.docker.distribution.manifest.list.v2+json sha256:5fce14b4dfcd63f1a4663176be4f236600b410cd896d054f56291c566292c86e 28.0 MiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
docker.io/calico/node-driver-registrar:v3.26.4 application/vnd.docker.distribution.manifest.list.v2+json sha256:77db9df0ecd41c514d8dcab3b2681091f98f8d70e29a03df12c086a4e032639b 11.4 MiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
docker.io/calico/node:v3.26.4 application/vnd.docker.distribution.manifest.list.v2+json sha256:a8b77a5f27b167501465f7f5fb7601c44af4df8dccd1c7201363bbb301d1fe40 83.6 MiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
docker.io/calico/pod2daemon-flexvol:v3.26.4 application/vnd.docker.distribution.manifest.list.v2+json sha256:cf169a0c328a5b4f2dc96b224c3cf6dbc2c8269c6ecafac54bc1de00102b665e 5.4 MiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
docker.io/calico/typha:v3.26.4 application/vnd.docker.distribution.manifest.list.v2+json sha256:ebe99272d38ff65255c1fba33c17d10f588b612625b19c68fe5aeed0f134fa74 24.7 MiB linux/amd64,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
docker.io/grafana/grafana:9.5.2 application/vnd.docker.distribution.manifest.list.v2+json sha256:39c849cebccccb22c0a5194f07c535669386190e029aa440ad535226974a5809 78.2 MiB linux/amd64,linux/arm/v7,linux/arm64/v8 io.cri-containerd.image=managed
docker.io/grafana/loki:2.6.1 application/vnd.docker.distribution.manifest.list.v2+json sha256:1ee60f980950b00e505bd564b40f720132a0653b110e993043bb5940673d060a 17.7 MiB linux/amd64,linux/arm/v7,linux/arm64/v8 io.cri-containerd.image=managed
docker.io/grafana/promtail:2.7.4 application/vnd.docker.distribution.manifest.list.v2+json sha256:db66221bcc9510f3101121d42354b19c83cb810c5480e4936eb75c43443656f4 65.8 MiB linux/amd64,linux/arm/v7,linux/arm64/v8 io.cri-containerd.image=managed
docker.io/kubernetesui/dashboard:v2.7.0 application/vnd.docker.distribution.manifest.list.v2+json sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 70.7 MiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
docker.io/kubernetesui/metrics-scraper:v1.0.8 application/vnd.docker.distribution.manifest.list.v2+json sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c 17.5 MiB linux/amd64,linux/arm,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
quay.io/brancz/kube-rbac-proxy:v0.14.1 application/vnd.docker.distribution.manifest.list.v2+json sha256:58d91a5faaf8f8222f8aa6c0a170826bbabcc60eedab71afd2326548cde84171 21.9 MiB linux/amd64,linux/arm,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
quay.io/prometheus-operator/prometheus-config-reloader:v0.65.2 application/vnd.docker.distribution.manifest.list.v2+json sha256:18632ea5cff38cda5b08054057297e527dcfc144a5f195c1c836a0805a9bbad1 4.8 MiB linux/amd64,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
quay.io/prometheus-operator/prometheus-operator:v0.65.2 application/vnd.docker.distribution.manifest.list.v2+json sha256:5c3da991d54f5ff9b84e5a1fb55110b4de7fcd00723367eff6f90392ad01e79b 14.7 MiB linux/amd64,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
quay.io/prometheus/node-exporter:v1.6.0 application/vnd.docker.distribution.manifest.list.v2+json sha256:d2e48098c364e61ee62d9016eed863b66331d87cf67146f2068b70ed9d9b4f98 10.5 MiB linux/amd64,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
quay.io/prometheus/prometheus:v2.44.0 application/vnd.docker.distribution.manifest.list.v2+json sha256:0f0b7feb6f02620df7d493ad7437b6ee95b6d16d8d18799f3607124e501444b1 83.4 MiB linux/amd64,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
quay.io/tigera/operator:v1.30.9 application/vnd.docker.distribution.manifest.list.v2+json sha256:431f037ff18b5c867d01312e42671effc55602421aeed25dd3f6109f70596b4a 18.0 MiB linux/amd64,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
registry.k8s.io/coredns/coredns:v1.10.1 application/vnd.docker.distribution.manifest.list.v2+json sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e 13.9 MiB linux/amd64,linux/arm/v7,linux/arm64,linux/mips64le,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
registry.k8s.io/etcd:3.5.7-0 application/vnd.docker.distribution.manifest.list.v2+json sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 76.9 MiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed
registry.k8s.io/kube-apiserver:v1.27.2 application/vnd.docker.distribution.manifest.list.v2+json sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9 29.0 MiB linux/amd64,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
registry.k8s.io/kube-controller-manager:v1.27.2 application/vnd.docker.distribution.manifest.list.v2+json sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56 26.9 MiB linux/amd64,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
registry.k8s.io/kube-proxy:v1.27.2 application/vnd.docker.distribution.manifest.list.v2+json sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f 20.4 MiB linux/amd64,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
registry.k8s.io/kube-scheduler:v1.27.2 application/vnd.docker.distribution.manifest.list.v2+json sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177 15.8 MiB linux/amd64,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.9.2 application/vnd.docker.distribution.manifest.list.v2+json sha256:5ac2e67a862cd3baa0eb4fd7683d54928fd76ea3a61cde50508922c956901d8c 11.5 MiB linux/amd64,linux/arm,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
registry.k8s.io/metrics-server/metrics-server:v0.6.3 application/vnd.docker.distribution.manifest.list.v2+json sha256:c60778fa1c44d0c5a0c4530ebe83f9243ee6fc02f4c3dc59226c201931350b10 26.7 MiB linux/amd64,linux/arm,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
registry.k8s.io/pause:3.6 application/vnd.docker.distribution.manifest.list.v2+json sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db 247.6 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed
registry.k8s.io/pause:3.9 application/vnd.docker.distribution.manifest.list.v2+json sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 261.8 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed
registry.k8s.io/prometheus-adapter/prometheus-adapter:v0.10.0 application/vnd.docker.distribution.manifest.list.v2+json sha256:2f34cb3a04a0fee6034f4d63ce3ee7786c0f762dc9f3bf196c70e894dd92edd1 26.4 MiB linux/amd64,linux/arm,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
```
sha256:f410b4e37f09ec3e3aef93952fe5d161396c66754a852be2a0bc8a82de17f02b application/vnd.docker.distribution.manifest.list.v2+json sha256:a8b77a5f27b167501465f7f5fb7601c44af4df8dccd1c7201363bbb301d1fe40 83.6 MiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
sha256:f4937dd361b91d8b0cd79a3c0686998c912dfe874ce596de3d91357b19418e5c application/vnd.docker.distribution.manifest.list.v2+json sha256:cf169a0c328a5b4f2dc96b224c3cf6dbc2c8269c6ecafac54bc1de00102b665e 5.4 MiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
sha256:f9c81e42abf4293510c4cfb40720912b248ff343d94268794c74c37b58693e9a application/vnd.docker.distribution.manifest.list.v2+json sha256:c520f71091cd09a9c9628a4e010f6fbc6118da9573af46af5b3f4c3ed8d463dc 34.9 MiB linux/amd64,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed

다운로드 및 이미지 확인
ctr images pull docker.io/golreas/hello:1.0.0
docker.io/golreas/hello:1.0.0: resolved |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:9e8c2be45e8618f075510b98d7e554d599c3ba8ed1f083faedcee243aff8e9c8: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:23e7733194ec9068106448513f45f2ae36e8931263abe26110e82c2db99549ec: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:00b6bc59183634774862a1f5d9fa777966ffdd8b4edd6fe07006671358dfc249: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:148b7926ba2143f7dbd1efaab45bd08b5fde13f01510d1319ee7cd0aa781f8d0: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:0a5428d7ed1bdde6d0638d39b519fcd3307eb60e70ba9f220d1066b39a71de93: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:f8812cc66e7be6bd8a78ca25a7701407a6aa40bf06d11ca572f61d63c91944a6: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:7293ae927b976710c33b54ae3957471f36b9e1150408853c3dfbd7baff3f59d1: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:34ef2a75627f6089e01995bfd3b3786509bbdc7cfb4dbc804b642e195340dbc9: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 12.7s total: 76.4 M (6.0 MiB/s)
unpacking linux/arm64/v8 sha256:9e8c2be45e8618f075510b98d7e554d599c3ba8ed1f083faedcee243aff8e9c8...
done: 1.57064216s

태그 변경
ctr images tag docker.io/golreas/hello:1.0.0 docker.io/golreas/hello:2.0.0
docker.io/golreas/hello:2.0.0
[root@k8s-master ~]# ctr images list
REF TYPE DIGEST SIZE PLATFORMS LABELS
docker.io/golreas/hello:1.0.0 application/vnd.docker.distribution.manifest.v2+json sha256:9e8c2be45e8618f075510b98d7e554d599c3ba8ed1f083faedcee243aff8e9c8 76.6 MiB linux/arm64 -
docker.io/golreas/hello:2.0.0 application/vnd.docker.distribution.manifest.v2+json sha256:9e8c2be45e8618f075510b98d7e554d599c3ba8ed1f083faedcee243aff8e9c8 76.6 MiB linux/arm64 -

업로드
ctr image push docker.io/golreas/hello:2.0.0 --user golreas
Password:
manifest-sha256:9e8c2be45e8618f075510b98d7e554d599c3ba8ed1f083faedcee243aff8e9c8: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:f8812cc66e7be6bd8a78ca25a7701407a6aa40bf06d11ca572f61d63c91944a6: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 3.0 s

이미지 -> 파일로 변환
ctr -n default image export file.tar docker.io/golreas/hello:1.0.0
[root@k8s-master ~]# ls
anaconda-ks.cfg file.tar k8s-local-volume k8s_env.sh k8s_install.sh monitoring

파일 -> 이미지로 변환
ctr -n k8s.io image import file.tar
unpacking docker.io/golreas/hello:1.0.0 (sha256:9e8c2be45e8618f075510b98d7e554d599c3ba8ed1f083faedcee243aff8e9c8)...done

삭제
ctr -n k8s.io image remove docker.io/golreas/hello:1.0.0
docker.io/golreas/hello:1.0.0
[root@k8s-master ~]# ctr -n k8s.io image list | grep hello

같은 이미지를 도커에서 받았을 때와 쿠버네티스에서 받았을 때 사이즈가 다른 이유
docker
docker pull 1pro/api-tester:latest
latest: Pulling from 1pro/api-tester
416105dc84fc: Already exists
fe66142579ff: Already exists
1250d2aa493e: Already exists
405eaf4f903e: Pull complete
4f4fb700ef54: Pull complete
Digest:
sha256:189625384d2f2856399f77b6212b6cfc503931e8b325fc1388e23c8a69f3f221
Status: Downloaded newer image for 1pro/api-tester:latest
docker.io/1pro/api-tester:latest

docker image list
REPOSITORY TAG IMAGE ID CREATED SIZE
1pro/api-tester latest 320d6bd226c9 18 months ago 520MB

"Architecture": "arm64",
"Os": "linux",
"Size": 520321200,

containerd
ctr image pull docker.io/1pro/api-tester:latest
docker.io/1pro/api-tester:latest: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:189625384d2f2856399f77b6212b6cfc503931e8b325fc1388e23c8a69f3f221: done |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:95802370e0a3407e6e447de4c4ccd2a029e99eeb380b9fbf935a53cc683feed3: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:320d6bd226c920f6876939f87cf5d81ea00de92d4e20d226ca73562c1a1a88f6: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:416105dc84fc8cf66df5d2c9f81570a2cc36a6cae58aedd4d58792f041f7a2f5: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:fe66142579ff5bb0bb5cf989222e2bc77a97dcbd0283887dec04d5b9dfd48cfa: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:1250d2aa493e8744c8f6cb528c8a882c14b6d7ff0af6862bbbfe676f60ea979e: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:405eaf4f903eeffb31e40d57d182d052fe390a30a4f401b5ec5b17f093cc61c9: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 3.4 s total: 0.0 B (0.0 B/s)
unpacking linux/arm64/v8 sha256:189625384d2f2856399f77b6212b6cfc503931e8b325fc1388e23c8a69f3f221...
done: 3.718912435s

ctr image list
REF TYPE DIGEST SIZE PLATFORMS LABELS
docker.io/1pro/api-tester:latest application/vnd.oci.image.index.v1+json sha256:189625384d2f2856399f77b6212b6cfc503931e8b325fc1388e23c8a69f3f221 247.8 MiB linux/amd64,linux/arm64,unknown/unknown -

Container 이미지는 각각의 Layer로 구성돼 있고, containerd는 이미 저장된 Layer를 재사용한다. 위 ctr pull에서 total: 0.0 B로 표시된 것도 필요한 Layer가 전부 로컬에 있었기 때문이다. 또한 docker image list의 SIZE(520MB)는 압축을 푼 뒤의 크기이고, ctr image list의 SIZE(247.8 MiB)는 레지스트리에 저장되는 압축된 콘텐츠 기준의 크기라서, 같은 이미지라도 두 도구에서 숫자가 다르게 조회됐을 것이다.

docker -> containerd
docker image list
REPOSITORY TAG IMAGE ID CREATED SIZE
golreas/hello 2.0.0 f8812cc66e7b 56 minutes ago 249MB
golreas/api-tester v1.0.0 9438a37e6182 4 hours ago 520MB
1pro/api-tester latest 320d6bd226c9 18 months ago 520MB
[root@cicd-server ~]# docker save -o docker-image.tar 1pro/api-tester:latest
[root@cicd-server ~]# ls -lh docker-image.tar
-rw-------. 1 root root 500M Jun 15 01:16 docker-image.tar
[root@cicd-server ~]# scp docker-image.tar root@192.168.56.30:/root
The authenticity of host '192.168.56.30 (192.168.56.30)' can't be established.
ED25519 key fingerprint is SHA256:db7xQBeDq/ivTK1ymDqPFK0EDxCLVZfszUaoggOADiE.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.56.30' (ED25519) to the list of known hosts.
root@192.168.56.30's password:
docker-image.tar
ctr image rm docker.io/1pro/api-tester:latest
docker.io/1pro/api-tester:latest
[root@k8s-master ~]# ctr image import docker-image.tar
unpacking docker.io/1pro/api-tester:latest (sha256:a878b80425d48f695d8b1527fdb41d46c96fbdada66848b4b6919b44faad749d)...done
[root@k8s-master ~]# ctr image list
REF TYPE DIGEST SIZE PLATFORMS LABELS
docker.io/1pro/api-tester:latest application/vnd.docker.distribution.manifest.v2+json sha256:a878b80425d48f695d8b1527fdb41d46c96fbdada66848b4b6919b44faad749d 499.9 MiB linux/arm64 -

containerd -> docker
ctr image rm docker.io/1pro/api-tester:latest
docker.io/1pro/api-tester:latest
[root@k8s-master ~]# ctr image import docker-image.tar
unpacking docker.io/1pro/api-tester:latest (sha256:a878b80425d48f695d8b1527fdb41d46c96fbdada66848b4b6919b44faad749d)...done
[root@k8s-master ~]# ctr image list
REF TYPE DIGEST SIZE PLATFORMS LABELS
docker.io/1pro/api-tester:latest application/vnd.docker.distribution.manifest.v2+json sha256:a878b80425d48f695d8b1527fdb41d46c96fbdada66848b4b6919b44faad749d 499.9 MiB linux/arm64 -
docker.io/golreas/hello:1.0.0 application/vnd.docker.distribution.manifest.v2+json sha256:9e8c2be45e8618f075510b98d7e554d599c3ba8ed1f083faedcee243aff8e9c8 76.6 MiB linux/arm64 -
docker.io/golreas/hello:2.0.0 application/vnd.docker.distribution.manifest.v2+json sha256:9e8c2be45e8618f075510b98d7e554d599c3ba8ed1f083faedcee243aff8e9c8 76.6 MiB linux/arm64 -
[root@k8s-master ~]# ctr image rm docker.io/1pro/api-tester:latest
docker.io/1pro/api-tester:latest
[root@k8s-master ~]# ctr image pull docker.io/1pro/api-tester:latest
docker.io/1pro/api-tester:latest: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:189625384d2f2856399f77b6212b6cfc503931e8b325fc1388e23c8a69f3f221: done |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:95802370e0a3407e6e447de4c4ccd2a029e99eeb380b9fbf935a53cc683feed3: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:320d6bd226c920f6876939f87cf5d81ea00de92d4e20d226ca73562c1a1a88f6: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:1250d2aa493e8744c8f6cb528c8a882c14b6d7ff0af6862bbbfe676f60ea979e: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:fe66142579ff5bb0bb5cf989222e2bc77a97dcbd0283887dec04d5b9dfd48cfa: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:405eaf4f903eeffb31e40d57d182d052fe390a30a4f401b5ec5b17f093cc61c9: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:416105dc84fc8cf66df5d2c9f81570a2cc36a6cae58aedd4d58792f041f7a2f5: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 3.4 s total: 0.0 B (0.0 B/s)
unpacking linux/arm64/v8 sha256:189625384d2f2856399f77b6212b6cfc503931e8b325fc1388e23c8a69f3f221...
done: 2.824286002s
[root@k8s-master ~]# ctr image list
REF TYPE DIGEST SIZE PLATFORMS LABELS
docker.io/1pro/api-tester:latest application/vnd.oci.image.index.v1+json sha256:189625384d2f2856399f77b6212b6cfc503931e8b325fc1388e23c8a69f3f221 247.8 MiB linux/amd64,linux/arm64,unknown/unknown -
docker.io/golreas/hello:1.0.0 application/vnd.docker.distribution.manifest.v2+json sha256:9e8c2be45e8618f075510b98d7e554d599c3ba8ed1f083faedcee243aff8e9c8 76.6 MiB linux/arm64 -
docker.io/golreas/hello:2.0.0 application/vnd.docker.distribution.manifest.v2+json sha256:9e8c2be45e8618f075510b98d7e554d599c3ba8ed1f083faedcee243aff8e9c8 76.6 MiB linux/arm64 -
[root@k8s-master ~]# ctr image export containerd-image.tar docker.io/1pro/api-tester:latest
[root@k8s-master ~]# ls -lh containerd-image.tar
-rw-r--r--. 1 root root 248M Jun 9 16:14 containerd-image.tar
[root@k8s-master ~]# scp containerd-image.tar root@192.168.56.20:/root
The authenticity of host '192.168.56.20 (192.168.56.20)' can't be established.
ED25519 key fingerprint is SHA256:opQ7AT2hiB2U1FYJZyW8u3i8xsCqE91vlg6tWJRWqw0.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.56.20' (ED25519) to the list of known hosts.
root@192.168.56.20's password:
containerd-image.tar
docker image rm 1pro/api-tester:latest
Untagged: 1pro/api-tester:latest
Untagged: 1pro/api-tester@sha256:189625384d2f2856399f77b6212b6cfc503931e8b325fc1388e23c8a69f3f221
Deleted: sha256:320d6bd226c920f6876939f87cf5d81ea00de92d4e20d226ca73562c1a1a88f6
Deleted: sha256:71aa8f0ba35ade0fb46725ca4f2bf964f96633622a57ad64a8c0f88475afa93a
Deleted: sha256:7a611a94f41e2a6d2f0fe927f361028ae762a361f6df0c099dcfc31f1e8c168a
[root@cicd-server ~]# docker load -i containerd-image.tar
34456869abea: Loading layer [==================================================>] 17.15MB/17.15MB
5f70bf18a086: Loading layer [==================================================>] 32B/32B
Loaded image: 1pro/api-tester:latest
[root@cicd-server ~]# docker image list
REPOSITORY TAG IMAGE ID CREATED SIZE
golreas/hello 2.0.0 f8812cc66e7b About an hour ago 249MB
golreas/api-tester v1.0.0 9438a37e6182 4 hours ago 520MB
1pro/api-tester latest 320d6bd226c9 18 months ago 520MB
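위에서 docker save로 만든 tar는 500M, ctr image export로 만든 tar는 248M이 나왔다. docker save는 레이어를 압축하지 않은 채로 묶고, OCI export는 레지스트리에서 받은 gzip 레이어를 그대로 묶기 때문이다. 실제 이미지 대신 반복 패턴이 많은 샘플 데이터를 쓴다는 가정으로, 이 압축 여부에 따른 크기 차이를 흉내 내는 최소 스케치:

```shell
#!/bin/sh
# 압축 전/후 크기 비교 스케치.
# docker save      -> 압축되지 않은 레이어 tar (큰 파일)
# ctr image export -> 레지스트리의 gzip 레이어를 그대로 묶은 tar (작은 파일)
tmpdir=$(mktemp -d)
# 반복 패턴이 많은 1MiB짜리 샘플 "레이어" 생성 (이미지 레이어도 중복이 많아 압축이 잘 된다)
yes "sample-layer-data" | head -c 1048576 > "$tmpdir/layer"
gzip -c "$tmpdir/layer" > "$tmpdir/layer.gz"
raw=$(wc -c < "$tmpdir/layer")
gz=$(wc -c < "$tmpdir/layer.gz")
echo "uncompressed=${raw} compressed=${gz}"
rm -r "$tmpdir"
```

실제 수치(500M vs 248M)는 이미지 내용에 따라 달라지지만, 같은 내용을 압축된 형태로 보관하느냐의 차이라는 점은 동일하다.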
데브옵스 · 인프라
・
docker
・
ctr
・
build
・
devops
2025. 06. 15.
1
[SPG-박상준] 3주차 발자국
DevOps 환경 구축의 핵심 도구들을 깊이 있게 학습했습니다. 특히 Jenkins Pipeline을 시작으로 Blue/Green 배포 전략, 그리고 Helm과 Kustomize 비교 실습까지 포괄적인 데브옵스 생태계를 경험할 수 있었습니다. 모든 미션을 통해 각 도구의 특성과 적용 시나리오를 체계적으로 정리할 수 있었고, 실무에서 활용할 수 있는 실질적인 지식을 쌓았습니다. Jenkins, Helm & Kustomize, Docker & Containerd 중에는 사용을 안 해봤던 기술도 있고 이미 익숙한 도구도 있어서 빠른 복습이 가능했고, 덕분에 새로운 기술들에 더 많은 시간을 투자할 수 있었습니다. Docker 명령어들은 좋은 복습 기회가 되었고, Containerd는 처음 사용해 보는 도구여서 단계별로 따라가며 새로운 경험을 쌓을 수 있었습니다. 실습 과정에서 메타데이터로 인한 이미지 크기 차이를 직접 확인하며, 세부적인 부분까지 주의 깊게 살펴봐야 한다는 점을 체감했습니다. 이런 작은 디테일들이 실제 운영 환경에서는 큰 차이를 만들 수 있다는 것을 깨달았습니다. 마지막 주차까지 최선을 다해 완주하고, 이 과정을 통해 한 단계 더 성장한 엔지니어가 되기를 기대합니다.
데브옵스 · 인프라
・
devops
・
3주차
・
배포
2025. 06. 08.
1
[인프런 워밍업 클럽 4기 - DevOps] 미션 4
PV, PVC

1번 API 파일 생성
[root@k8s-master ~] curl http://192.168.56.30:31231/create-file-pod
hrgosbpjox.txt
[root@k8s-master ~] curl http://192.168.56.30:31231/create-file-pv
xprrmxjsna.txt

2번 Container 임시 폴더 확인
[root@k8s-master ~] k exec -n anotherclass-123 -it api-tester-1231-66c489cbb8-5krzn -- ls /usr/src/myapp/tmp
hrgosbpjox.txt
[root@k8s-master ~] k exec -n anotherclass-123 -it api-tester-1231-66c489cbb8-5krzn -- ls /usr/src/myapp/files/dev
xprrmxjsna.txt

3번 master node 폴더 확인
[root@k8s-master ~] ls /root/k8s-local-volume/1231/
xprrmxjsna.txt

pod 삭제
[root@k8s-master ~] k delete -n anotherclass-123 pod api-tester-1231-66c489cbb8-5krzn
pod "api-tester-1231-66c489cbb8-5krzn" deleted

4번 API - 파일 조회
[root@k8s-master ~] k exec -n anotherclass-123 -it api-tester-1231-66c489cbb8-9kpxd -- curl localhost:8080/list-file-pod
[root@k8s-master ~] k exec -n anotherclass-123 -it api-tester-1231-66c489cbb8-9kpxd -- curl localhost:8080/list-file-pv
xafgvinxxz.txt thschtbuoo.txt pablwvxlec.txt pogdzjqhqs.txt xprrmxjsna.txt

5. hostPath 동작 확인 - Deployment 수정 후 [1~4] 실행
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: anotherclass-123
  name: api-tester-1231
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: k8s-master
      containers:
        - name: api-tester-1231
          volumeMounts:
            - name: files
              mountPath: /usr/src/myapp/files/dev
            - name: secret-datasource
              mountPath: /usr/src/myapp/datasource
      volumes:
        - name: files
          persistentVolumeClaim:               // 삭제
            claimName: api-tester-1231-files   // 삭제
          // 아래 hostPath 추가
          hostPath:
            path: /root/k8s-local-volume/1231
        - name: secret-datasource
          secret:
            secretName: api-tester-1231-postgresql

[1~4 반복]
API - 파일 생성
[root@k8s-master ~] k exec -n anotherclass-123 -it api-tester-1231-66c489cbb8-9kpxd -- curl localhost:8080/list-file-pod
[root@k8s-master ~] k exec -n anotherclass-123 -it api-tester-1231-66c489cbb8-9kpxd -- curl localhost:8080/list-file-pv
xafgvinxxz.txt thschtbuoo.txt pablwvxlec.txt pogdzjqhqs.txt xprrmxjsna.txt
Container 임시 폴더 확인
[root@k8s-master ~] k exec -n anotherclass-123 -it api-tester-1231-6cf7495fcd-7gtwq -- ls /usr/src/myapp/tmp
yrwjythrtf.txt
Container 영구저장 폴더 확인
[root@k8s-master ~] k exec -n anotherclass-123 -it api-tester-1231-6cf7495fcd-7gtwq -- ls /usr/src/myapp/files/dev
lbjgryyqvl.txt pogdzjqhqs.txt xafgvinxxz.txt pablwvxlec.txt thschtbuoo.txt xprrmxjsna.txt
Master node 폴더 확인
[root@k8s-master ~] ls /root/k8s-local-volume/1231
lbjgryyqvl.txt pablwvxlec.txt pogdzjqhqs.txt thschtbuoo.txt xafgvinxxz.txt xprrmxjsna.txt
API - 파일 조회
[root@k8s-master ~] k exec -n anotherclass-123 -it api-tester-1231-6cf7495fcd-7gtwq -- curl localhost:8080/list-file-pod
yrwjythrtf.txt
[root@k8s-master ~] k exec -n anotherclass-123 -it api-tester-1231-6cf7495fcd-7gtwq -- curl localhost:8080/list-file-pv
lbjgryyqvl.txt xafgvinxxz.txt thschtbuoo.txt pablwvxlec.txt pogdzjqhqs.txt xprrmxjsna.txt

RollingUpdate 하기
HPA minReplica 2로 바꾸기
[root@k8s-master ~] k patch -n anotherclass-123 hpa api-tester-1231-default -p '{"spec":{"minReplicas":2}}'
horizontalpodautoscaler.autoscaling/api-tester-1231-default patched
그외 Deployment scale 명령
[root@k8s-master ~] k scale -n anotherclass-123 deployment api-tester-1231 --replicas=2
deployment.apps/api-tester-1231 scaled
edit 모드로 직접 수정
kubectl edit -n anotherclass-123 deployment api-tester-1231
지속적으로 Version 호출하기
while :
do
  curl http://192.168.56.30:31231/version
  sleep 2
  echo ''
done
3) 별도의 원격 콘솔창을 열어서 업데이트 실행
kubectl set image -n anotherclass-123 deployment/api-tester-1231 api-tester-1231=1pro/api-tester:v2.0.0

RollingUpdate (maxUnavailable: 0%, maxSurge: 100%) 하기
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0%
    maxSurge: 100%
kubectl set image -n anotherclass-123 deployment/api-tester-1231 api-tester-1231=1pro/api-tester:v1.0.0

Recreate 하기
strategy:
  type: Recreate
[root@k8s-master ~] kubectl set image -n anotherclass-123 deployment/api-tester-1231 api-tester-1231=1pro/api-tester:v2.0.0
deployment.apps/api-tester-1231 image updated

Rollback
[root@k8s-master ~] kubectl rollout undo -n anotherclass-123 deployment/api-tester-1231
deployment.apps/api-tester-1231 rolled back

Service
Pod 내부에서 Service 명으로 API 호출
[root@k8s-master ~] k exec -n anotherclass-123 -it api-tester-1231-6cf7495fcd-88cqq -- curl http://api-tester-1231:80/version
[App Version] : Api Tester v1.0.0

Deployment에서 Pod의 ports 전체 삭제, Service targetPort를 http -> 8080으로 수정
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: anotherclass-123
  name: api-tester-1231
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: k8s-master
      containers:
        - name: api-tester-1231
          ports:                    // 삭제
            - name: http            // 삭제
              containerPort: 8080   // 삭제
---
apiVersion: v1
kind: Service
metadata:
  namespace: anotherclass-123
  name: api-tester-1231
spec:
  ports:
    - port: 80
      targetPort: http -> 8080      // 변경
      nodePort: 31231
  type: NodePort

[root@k8s-master ~] k exec -n anotherclass-123 -it api-tester-1231-6cf7495fcd-88cqq -- curl http://api-tester-1231:80/version
[App Version] : Api Tester v1.0.0

HPA
[root@k8s-master ~] k top -n anotherclass-123 pods
NAME CPU(cores) MEMORY(bytes)
api-tester-1231-6cf7495fcd-88cqq 1m 140Mi
[root@k8s-master ~] k top -n anotherclass-123 pods
NAME CPU(cores) MEMORY(bytes)
api-tester-1231-6cf7495fcd-88cqq 104m 161Mi
[root@k8s-master ~]# k get hpa -n anotherclass-123
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
api-tester-1231-default Deployment/api-tester-1231 56%/60% 2 4 2 3d8h

[behavior] 미사용으로 적용
spec:
# behavior:
#   scaleDown:
#     policies:
#     - periodSeconds: 15
#       type: Percent
#       value: 100
#     selectPolicy: Max
#   scaleUp:
#     policies:
#     - periodSeconds: 15
#       type: Pods
#       value: 4
#     - periodSeconds: 15
#       type: Percent
#       value: 100
#     selectPolicy: Max
#     stabilizationWindowSeconds: 120
[root@k8s-master ~] kubectl edit -n anotherclass-123 hpa api-tester-1231-default
horizontalpodautoscaler.autoscaling/api-tester-1231-default edited

부하 발생 API
[root@k8s-master ~] k exec -n anotherclass-123 -it api-tester-1231-6cf7495fcd-88cqq -- curl http://192.168.56.30:31231/cpu-load
[root@k8s-master ~] k get hpa -n anotherclass-123
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
api-tester-1231-default Deployment/api-tester-1231 101%/60% 2 4 4 3d8h
[root@k8s-master ~] k top -n anotherclass-123 pods
NAME CPU(cores) MEMORY(bytes)
api-tester-1231-6cf7495fcd-5b6bw 202m 157Mi
api-tester-1231-6cf7495fcd-88cqq 1m 140Mi
api-tester-1231-6cf7495fcd-br5d2 1m 127Mi
api-tester-1231-6cf7495fcd-wzx9k 1m 127Mi
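위 실습의 maxUnavailable: 0%, maxSurge: 100% 조합이 배포 중 Pod 수를 어떻게 제한하는지는 간단한 계산으로 확인할 수 있다. 쿠버네티스는 퍼센트 값을 replica 수에 적용할 때 maxSurge는 올림, maxUnavailable은 내림으로 처리한다. replicas=2를 가정한 계산 스케치:

```shell
#!/bin/sh
# RollingUpdate 중 유지되는 Pod 수 범위 계산 스케치.
# 규칙: maxSurge는 올림(ceil), maxUnavailable은 내림(floor)으로 환산된다.
replicas=2
surge_pct=100      # maxSurge: 100%
unavail_pct=0      # maxUnavailable: 0%
surge=$(( (replicas * surge_pct + 99) / 100 ))   # ceil
unavail=$(( replicas * unavail_pct / 100 ))      # floor
max_pods=$(( replicas + surge ))
min_available=$(( replicas - unavail ))
echo "rollout 중 Pod 수: 최소 ${min_available}개, 최대 ${max_pods}개"
```

즉 새 버전 Pod 2개가 추가로 떠서 Ready가 된 뒤에야 구버전이 내려가므로, 위에서 version 호출이 끊기지 않은 것과 일치한다.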
데브옵스 · 인프라
・
devops
・
k8s
・
hpa
・
pv
・
pvc
・
service
・
rolling
2025. 06. 06.
1
[SPG-박상준] 2주차 발자국
[미션 3]까지 수행하면서 느낀 점은 "이해하기 쉽지 않다"로 정리할 수 있다. 어제 중간 OT에서 일프로님이 수강평 1점짜리 내용을 보여주셨는데, 그 부분에 있어서 공감이 가는 측면도 있다. 수강생들의 배움의 시작점이 제각각 다르다 보니 모든 이를 만족시킬 수는 없다는 점에서 이해가 되며, 강사 입장에서는 이미 최대한 쉽게 설명하려고 노력하고 계시는 것 같다. 나 또한 솔라리스 강의를 진행했던 강사로서, 어떻게 하면 더 쉽게 비유하고 설명할 수 있을까 하는 고민을 늘 해왔다. 하지만 직접 강의를 기획하고 촬영해본 경험이 없는 입장에서 섣불리 판단하기보다는, 일프로님께 많은 응원과 격려를 보내고 싶다. 새로운 분야를 접할 때는 나만의 방식으로 이해하기 위해 반복적으로 보고 듣는 과정이 무엇보다 중요하다. 그러한 과정을 거치다 보면 어느 순간 자연스럽게 스며들어 나만의 지식으로 체화되는 순간이 온다.
데브옵스 · 인프라
・
SPG
・
인프런
・
k8s
・
쿠버네티스
・
devops
・
회고
2025. 06. 06.
1
[인프런 워밍업 클럽 4기 - DevOps] 미션 3. Configmap, Secret
▶ 응용1 : Configmap의 환경변수들을 Secret을 사용해서 작성하고, App에서는 같은 결과가 나오도록 확인해 보세요.
☞ Secret을 이렇게 사용하는 경우는 별로 보지 못했습니다. 여러 가지 방법으로 Secret을 만들어본다는 데 의의를 두시면 됩니다.
Secret 생성 (1) - Dashboard
확인을 위해 Dashboard에서 Secret 선택
CMD 확인
Deployment에서 envFrom 수정

▶ 응용2 : 반대로 Secret의 DB정보를 Configmap으로 만들어보고 App을 동작시켜 보세요.
☞ Configmap을 Volume에 연결해서 쓰는 케이스는 매우 많습니다.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: anotherclass-123
  name: api-tester-1231-postgresql
  labels:
    part-of: k8s-anotherclass
    component: backend-server
    name: api-tester
    instance: api-tester-1231
    version: 1.0.0
    managed-by: dashboard
data:
  postgresql-info.yaml: |
    driver-class-name: "org.postgresql.Driver"
    url: "jdbc:postgresql://postgresql:5431"
    username: "dev"
    password: "dev123"

volumes:
  - name: files
    persistentVolumeClaim:
      claimName: api-tester-1231-files
  - name: configmap-datasource
    configMap:
      name: api-tester-1231-postgresql
      defaultMode: 420
...
volumeMounts:
  - name: files
    mountPath: /usr/src/myapp/files/dev
  - name: configmap-datasource
    mountPath: /usr/src/myapp/datasource/dev
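응용1처럼 Configmap 값을 Secret으로 옮길 때 기억할 점은, Secret의 data 필드가 암호화가 아니라 base64 인코딩일 뿐이라는 것이다. 본문의 password 값 "dev123"으로 확인해 보는 스케치:

```shell
#!/bin/sh
# Secret의 data 값은 base64 인코딩일 뿐이므로 누구나 디코딩할 수 있다.
plain="dev123"
encoded=$(printf '%s' "$plain" | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "encoded=${encoded} decoded=${decoded}"
# encoded=ZGV2MTIz decoded=dev123
```

그래서 kubectl get secret -o yaml 결과를 저장소에 올리면 사실상 평문을 올린 것과 같고, Secret과 Configmap의 차이는 암호화 여부가 아니라 용도 구분에 가깝다.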
데브옵스 · 인프라
・
k8s
・
인프런
・
devops
・
sre
・
secret
・
configmap
・
쿠버네티스
2025. 06. 03.
1
[인프런 워밍업 클럽 4기 - DevOps] 미션 2. Probe 응용과제
응용 과제
Application 로그를 통한 probe 동작 분석

사전 작업
kubectl patch -n anotherclass-123 hpa api-tester-1231-default -p '{"spec":{"minReplicas":1}}'
Grafana 접속 후 Pod 로그 화면 설정
Pod 삭제
Application Log 확인

마스터 노드에서 실행
// 1번 API - 외부 API 실패
curl http://192.168.56.30:31231/hello
// 2번 API
// 외부 API 실패
curl http://192.168.56.30:31231/hello
// 내부 API 성공
kubectl exec -n anotherclass-123 -it api-tester-1231-7459cd7df-2hdhk -- curl localhost:8080/hello
kubectl exec -n anotherclass-123 -it -- curl localhost:8080/hello
// 3번 API - 외부 API 성공
curl http://192.168.56.30:31231/hello
// 4번 API
// 트래픽 중단 - (App 내부 isAppReady를 False로 바꿈)
curl http://192.168.56.30:31231/traffic-off
// 외부 API 실패
curl http://192.168.56.30:31231/hello
// 트래픽 재개 - (App 내부 isAppReady를 True로 바꿈)
kubectl exec -n anotherclass-123 -it api-tester-1231-7459cd7df-2hdhk -- curl localhost:8080/traffic-on
// 5번 API - 장애발생 (App 내부 isAppLive를 False로 바꿈)
curl http://192.168.56.30:31231/server-error

응용 1. startupProbe가 실패 되도록 설정해서 Pod가 무한 재기동 상태가 되도록 설정해 보세요.
[결과]
응용 2. 일시적 장애 상황(App 내부 부하 증가)이 시작된 후, 30초 뒤에 트래픽이 중단되고, 3분 뒤에는 App이 재기동 되도록 설정해 보세요.
설정 후 API 요청하면 성공하지만 그 이후에는 차단
응용 3. Secret 파일(/usr/src/myapp/datasource/postgresql-info.yaml)이 존재하는지 체크하는 readinessProbe를 만들어 보세요.
해당 Pod 내부에 파일이 존재하기 때문에 실패 로그는 확인 불가
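응용 3의 "파일 존재 체크" readinessProbe는 exec 프로브의 판정 규칙, 즉 명령의 종료 코드가 0이면 Ready, 아니면 NotReady라는 규칙을 그대로 이용한다. 컨테이너 밖에서 같은 동작을 흉내 내는 스케치(경로는 본문의 Secret 경로 대신 임시 파일을 쓴다는 가정):

```shell
#!/bin/sh
# exec readinessProbe 판정 규칙: 프로브 명령의 종료 코드 0 = Ready, 그 외 = NotReady.
probe_file=$(mktemp)   # 본문에서는 /usr/src/myapp/datasource/postgresql-info.yaml
if test -f "$probe_file"; then before=Ready; else before=NotReady; fi
rm -f "$probe_file"    # Secret 마운트가 빠진 장애 상황을 흉내
if test -f "$probe_file"; then after=Ready; else after=NotReady; fi
echo "before=${before} after=${after}"
```

본문처럼 Pod 안에 파일이 항상 존재하면 프로브가 실패할 일이 없어 실패 로그를 볼 수 없고, 마운트를 제거해야 NotReady 전환을 관찰할 수 있다.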
데브옵스 · 인프라
・
DevOps
・
SRE
・
일프로
・
Probe
・
k8s
・
msa
2025. 05. 30.
1
[SPG-박상준] 1주차 발자국
26일 OT 이후 [미션1] 쿠버네티스 설치 구간별 상태 확인만 진행된 상태이지만 발자국을 남겨본다. 인프런에서 강의 구매 후 절반 정도 진행을 하고 있었고, 스터디 팀을 하는 것이 귀찮기도 했지만 전원 완주 팀에 Sprint4 무료 쿠폰이라는 달콤한 보상이 있었기에 신청하게 되었다. 우리 SPG(Senior Pod Group) 팀은 대부분 고연차 경력을 가지고 현업에 계신 분들로 구성되어 있기 때문에, 개인 공부를 위한 실습은 하더라도 문서화하고 기록에 남기는 일은 정말 쉽지 않다는 것을 알고 있다. (나 또한..) 각설하고, [미션1] 과정은 한차례 진행했었기 때문에 신규로 다시 구성을 했고, 쿠버네티스 설치에 필요한 환경설정은 여러 번 재구축해 보기 위해 스크립트로 작성하여 진행했다. 환경설정 또한 이전 단계 설정을 하고 검증 단계를 거쳐서 완료되지 않으면 넘어가지 않도록 작성해서 설정이 빠지는 것을 방지했다.

[환경설정 코드]
#!/bin/bash

# 색상 정의
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# 로그 함수
log_info() {
  echo -e "${BLUE}[INFO]${NC} $1"
}
log_success() {
  echo -e "${GREEN}[SUCCESS]${NC} $1"
}
log_warning() {
  echo -e "${YELLOW}[WARNING]${NC} $1"
}
log_error() {
  echo -e "${RED}[ERROR]${NC} $1"
}

# 검증 실패 시 종료 함수
exit_on_error() {
  log_error "$1"
  log_error "스크립트를 종료합니다."
  exit 1
}

# 사용자 확인 함수
confirm_continue() {
  read -p "다음 단계로 진행하시겠습니까? (y/n): " -n 1 -r
  echo
  if [[ ! $REPLY =~ ^[Yy]$ ]]; then
    log_info "사용자가 중단했습니다."
    exit 0
  fi
}

echo "========================================"
echo "Kubernetes 기본 환경설정 스크립트 시작"
echo "========================================"

# ============================================
# 1. 시스템 기본 설정
# ============================================
log_info "======== [1단계] 시스템 기본 설정 ========"
log_info "타임존을 Asia/Seoul로 설정 중..."
timedatectl set-timezone Asia/Seoul
timedatectl set-ntp true
chronyc makestep

log_info "필수 패키지 설치 중..."
yum install -y yum-utils iproute-tc
yum update openssl openssh-server -y

log_info "hosts 파일 설정 중..."
if ! grep -q "k8s-master" /etc/hosts; then
  cat >> /etc/hosts << EOF
192.168.56.30 k8s-master
EOF
fi

log_info "======== [1단계] 검증 시작 ========"
# 타임존 검증
current_timezone=$(timedatectl show --property=Timezone --value)
if [ "$current_timezone" = "Asia/Seoul" ]; then
  log_success "타임존 설정 완료: $current_timezone"
else
  exit_on_error "타임존 설정 실패: $current_timezone"
fi
# NTP 동기화 검증
ntp_status=$(timedatectl show --property=NTPSynchronized --value)
if [ "$ntp_status" = "yes" ]; then
  log_success "NTP 동기화 완료"
else
  log_warning "NTP 동기화 미완료 (시간이 필요할 수 있음)"
fi
# 필수 패키지 검증
if command -v tc >/dev/null 2>&1; then
  log_success "tc 패키지 설치 완료"
else
  exit_on_error "tc 패키지 설치 실패"
fi
# hosts 파일 검증
if grep -q "192.168.56.30 k8s-master" /etc/hosts; then
  log_success "hosts 파일 설정 완료"
else
  exit_on_error "hosts 파일 설정 실패"
fi
log_success "======== [1단계] 시스템 기본 설정 완료 ========"
confirm_continue

# ============================================
# 2. 보안 설정
# ============================================
log_info "======== [2단계] 보안 설정 ========"
log_info "방화벽 해제 중..."
systemctl stop firewalld
systemctl disable firewalld
log_info "SELinux 설정 중..."
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

log_info "======== [2단계] 검증 시작 ========"
# 방화벽 상태 검증
firewall_status=$(systemctl is-active firewalld)
if [ "$firewall_status" = "inactive" ]; then
  log_success "방화벽 해제 완료"
else
  exit_on_error "방화벽 해제 실패: $firewall_status"
fi
# 방화벽 부팅 시 비활성화 검증
firewall_enabled=$(systemctl is-enabled firewalld)
if [ "$firewall_enabled" = "disabled" ]; then
  log_success "방화벽 부팅시 비활성화 완료"
else
  exit_on_error "방화벽 부팅시 비활성화 실패: $firewall_enabled"
fi
# SELinux 현재 상태 검증
selinux_current=$(getenforce)
if [ "$selinux_current" = "Permissive" ]; then
  log_success "SELinux 현재 상태: $selinux_current"
else
  exit_on_error "SELinux 설정 실패: $selinux_current"
fi
# SELinux 설정 파일 검증
selinux_config=$(grep "^SELINUX=" /etc/selinux/config | cut -d'=' -f2)
if [ "$selinux_config" = "permissive" ]; then
  log_success "SELinux 영구 설정 완료: $selinux_config"
else
  exit_on_error "SELinux 영구 설정 실패: $selinux_config"
fi
log_success "======== [2단계] 보안 설정 완료 ========"
confirm_continue

# ============================================
# 3. 시스템 리소스 설정
# ============================================
log_info "======== [3단계] 시스템 리소스 설정 ========"
log_info "Swap 비활성화 중..."
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

log_info "======== [3단계] 검증 시작 ========"
# Swap 상태 검증
swap_status=$(swapon --show)
if [ -z "$swap_status" ]; then
  log_success "Swap 비활성화 완료"
else
  exit_on_error "Swap 비활성화 실패: $swap_status"
fi
# fstab에서 swap 주석 처리 검증
swap_in_fstab=$(grep -v "^#" /etc/fstab | grep swap)
if [ -z "$swap_in_fstab" ]; then
  log_success "fstab에서 swap 영구 비활성화 완료"
else
  exit_on_error "fstab에서 swap 영구 비활성화 실패"
fi
log_success "======== [3단계] 시스템 리소스 설정 완료 ========"
confirm_continue

# ============================================
# 4. 네트워크 설정
# ============================================
log_info "======== [4단계] 네트워크 설정 ========"
log_info "커널 모듈 설정 중..."
cat [설치 코드]#!/bin/bash # 색상 정의 RED='\033[0;31m' GREEN='\033[0;32m' YELLOW='\033[1;33m' BLUE='\033[0;34m' PURPLE='\033[0;35m' NC='\033[0m' # No Color # 로그 함수 log_info() { echo -e "${BLUE}[INFO]${NC} $1" } log_success() { echo -e "${GREEN}[SUCCESS]${NC} $1" } log_warning() { echo -e "${YELLOW}[WARNING]${NC} $1" } log_error() { echo -e "${RED}[ERROR]${NC} $1" } log_step() { echo -e "${PURPLE}[STEP]${NC} $1" } # 검증 실패 시 종료 함수 exit_on_error() { log_error "$1" log_error "스크립트를 종료합니다." exit 1 } # 사용자 확인 함수 confirm_continue() { read -p "다음 단계로 진행하시겠습니까? (y/n): " -n 1 -r echo if [[ ! $REPLY =~ ^[Yy]$ ]]; then log_info "사용자가 중단했습니다." exit 0 fi } # 대기 시간 함수 wait_for_service() { local service_name=$1 local max_wait=$2 local count=0 log_info "${service_name} 서비스 시작 대기 중..." while [ $count -lt $max_wait ]; do if systemctl is-active --quiet $service_name; then log_success "${service_name} 서비스가 정상적으로 시작되었습니다." return 0 fi sleep 2 count=$((count + 1)) echo -n "." done echo return 1 } echo "================================================" echo "Kubernetes 컨테이너 런타임 및 클러스터 설치 시작" echo "================================================" # ============================================ # 1. 컨테이너 런타임 설치 (containerd) # ============================================ log_step "======== [1단계] 컨테이너 런타임 설치 ========" log_info "Docker 저장소 추가 중..." yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo log_info "containerd 설치 중..." yum install -y containerd.io-1.6.21-3.1.el9.aarch64 log_info "systemd 데몬 재로드 중..." systemctl daemon-reload log_info "containerd 서비스 활성화 및 시작 중..." systemctl enable containerd systemctl start containerd # containerd 서비스 시작 대기 if ! wait_for_service "containerd" 15; then exit_on_error "containerd 서비스 시작 실패" fi log_info "containerd 기본 설정 생성 중..." containerd config default > /etc/containerd/config.toml log_info "SystemdCgroup 활성화 중..." 
sed -i 's/ SystemdCgroup = false/ SystemdCgroup = true/' /etc/containerd/config.toml

log_info "containerd 서비스 재시작 중..."
systemctl restart containerd

# containerd 재시작 후 대기
if ! wait_for_service "containerd" 15; then
    exit_on_error "containerd 서비스 재시작 실패"
fi

log_info "======== [1단계] 검증 시작 ========"

# containerd 서비스 상태 검증
containerd_status=$(systemctl is-active containerd)
if [ "$containerd_status" = "active" ]; then
    log_success "containerd 서비스 실행 중"
else
    exit_on_error "containerd 서비스 실행 실패: $containerd_status"
fi

# containerd 부팅 시 활성화 검증
containerd_enabled=$(systemctl is-enabled containerd)
if [ "$containerd_enabled" = "enabled" ]; then
    log_success "containerd 부팅시 활성화 완료"
else
    exit_on_error "containerd 부팅시 활성화 실패: $containerd_enabled"
fi

# SystemdCgroup 설정 검증
if grep -q "SystemdCgroup = true" /etc/containerd/config.toml; then
    log_success "SystemdCgroup 설정 완료"
else
    exit_on_error "SystemdCgroup 설정 실패"
fi

# containerd 버전 확인
containerd_version=$(containerd --version 2>/dev/null | cut -d' ' -f3)
if [ -n "$containerd_version" ]; then
    log_success "containerd 버전: $containerd_version"
else
    exit_on_error "containerd 버전 확인 실패"
fi

# containerd 실행 테스트
if ctr version >/dev/null 2>&1; then
    log_success "containerd 실행 테스트 성공"
else
    exit_on_error "containerd 실행 테스트 실패"
fi

log_success "======== [1단계] 컨테이너 런타임 설치 완료 ========"
confirm_continue

# ============================================
# 2. Kubernetes 패키지 설치
# ============================================
log_step "======== [2단계] Kubernetes 패키지 설치 ========"

log_info "Kubernetes 저장소 추가 중..."
cat << EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

log_info "kubelet, kubeadm, kubectl 설치 중..."
yum install -y kubelet-1.27.2 kubeadm-1.27.2 kubectl-1.27.2 --disableexcludes=kubernetes

log_info "kubelet 서비스 활성화 중..."
systemctl enable --now kubelet

log_info "======== [2단계] 검증 시작 ========"

# 설치된 패키지 검증
if command -v kubelet >/dev/null 2>&1; then
    kubelet_version=$(kubelet --version | cut -d' ' -f2)
    log_success "kubelet 설치 완료 - 버전: $kubelet_version"
else
    exit_on_error "kubelet 설치 실패"
fi

if command -v kubeadm >/dev/null 2>&1; then
    kubeadm_version=$(kubeadm version -o short)
    log_success "kubeadm 설치 완료 - 버전: $kubeadm_version"
else
    exit_on_error "kubeadm 설치 실패"
fi

if command -v kubectl >/dev/null 2>&1; then
    kubectl_version=$(kubectl version --client -o yaml 2>/dev/null | grep gitVersion | cut -d'"' -f4)
    log_success "kubectl 설치 완료 - 버전: $kubectl_version"
else
    exit_on_error "kubectl 설치 실패"
fi

# 버전 일관성 검증
if [[ "$kubelet_version" == *"1.27.2"* ]] && [[ "$kubeadm_version" == *"1.27.2"* ]] && [[ "$kubectl_version" == *"1.27.2"* ]]; then
    log_success "모든 Kubernetes 패키지 버전 일관성 확인 (v1.27.2)"
else
    exit_on_error "Kubernetes 패키지 버전 불일치"
fi

log_success "======== [2단계] Kubernetes 패키지 설치 완료 ========"
confirm_continue

# ============================================
# 3. 클러스터 초기화
# ============================================
log_step "======== [3단계] 클러스터 초기화 ========"

log_info "클러스터 초기화 설정:"
log_info "- Pod 네트워크 CIDR: 20.96.0.0/16"
log_info "- API 서버 주소: 192.168.56.30"
log_info "- Kubernetes 버전: v1.27.2"
log_warning "클러스터 초기화는 시간이 소요될 수 있습니다 (약 2-5분)..."

log_info "kubeadm으로 클러스터 초기화 중..."
kubeadm init \
    --pod-network-cidr=20.96.0.0/16 \
    --apiserver-advertise-address=192.168.56.30 \
    --kubernetes-version=v1.27.2

if [ $? -ne 0 ]; then
    exit_on_error "클러스터 초기화 실패"
fi

log_success "클러스터 초기화 완료!"

log_info "kubectl 사용을 위한 설정 중..."
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

log_info "======== [3단계] 검증 시작 ========"

# kubectl 설정 파일 검증
if [ -f "$HOME/.kube/config" ]; then
    log_success "kubectl 설정 파일 생성 완료"
else
    exit_on_error "kubectl 설정 파일 생성 실패"
fi

# 클러스터 연결 테스트
log_info "클러스터 연결 테스트 중..."
if kubectl cluster-info >/dev/null 2>&1; then
    log_success "클러스터 연결 테스트 성공"
else
    exit_on_error "클러스터 연결 테스트 실패"
fi

# 노드 상태 확인
log_info "노드 상태 확인 중..."
node_status=$(kubectl get nodes --no-headers | awk '{print $2}')
if [[ "$node_status" == *"Ready"* ]] || [[ "$node_status" == *"NotReady"* ]]; then
    log_success "마스터 노드 등록 완료 (상태: $node_status)"
    if [[ "$node_status" == *"NotReady"* ]]; then
        log_warning "노드가 NotReady 상태입니다. Pod 네트워크 설치가 필요합니다."
    fi
else
    exit_on_error "노드 상태 확인 실패"
fi

log_success "======== [3단계] 클러스터 초기화 완료 ========"
confirm_continue

# ============================================
# 4. Pod 네트워크 설치 (Calico)
# ============================================
log_step "======== [4단계] Pod 네트워크 설치 (Calico) ========"

log_info "Calico Pod 네트워크 설치 중..."
log_info "- Calico는 Pod 간 네트워크 통신을 가능하게 합니다"
log_info "- 설치 후 노드가 Ready 상태가 됩니다"

kubectl create -f https://raw.githubusercontent.com/k8s-1pro/install/main/ground/k8s-1.27/calico-3.26.4/calico.yaml
if [ $? -ne 0 ]; then
    exit_on_error "Calico 기본 설치 실패"
fi

kubectl create -f https://raw.githubusercontent.com/k8s-1pro/install/main/ground/k8s-1.27/calico-3.26.4/calico-custom.yaml
if [ $? -ne 0 ]; then
    exit_on_error "Calico 커스텀 설정 실패"
fi

log_success "Calico Pod 네트워크 설치 완료"

log_info "마스터 노드에서 Pod 실행 허용 설정 중..."
kubectl taint nodes k8s-master node-role.kubernetes.io/control-plane-
if [ $? -ne 0 ]; then
    log_warning "마스터 노드 taint 제거 실패 (이미 제거되었을 수 있음)"
else
    log_success "마스터 노드에서 Pod 실행 허용 설정 완료"
fi

log_info "======== [4단계] 검증 시작 ========"

log_info "Pod 네트워크 구성 요소 확인 중..."
log_info "Calico Pod들이 Running 상태가 될 때까지 대기합니다 (최대 3분)..."

# Calico Pod 상태 확인 (최대 3분 대기)
for i in {1..18}; do
    calico_pods=$(kubectl get pods -n kube-system -l k8s-app=calico-node --no-headers 2>/dev/null | grep -c "Running")
    if [ "$calico_pods" -gt 0 ]; then
        log_success "Calico Pod 실행 중 ($calico_pods개)"
        break
    fi
    if [ $i -eq 18 ]; then
        log_warning "Calico Pod 상태 확인 시간 초과"
    fi
    echo -n "."
    sleep 10
done

# 노드 상태 재확인 (Pod 네트워크 설치 후)
log_info "노드 Ready 상태 확인 중..."
for i in {1..12}; do
    node_status=$(kubectl get nodes --no-headers | awk '{print $2}')
    if [[ "$node_status" == *"Ready"* ]] && [[ "$node_status" != *"NotReady"* ]]; then
        log_success "노드가 Ready 상태입니다!"
        break
    fi
    if [ $i -eq 12 ]; then
        log_warning "노드가 아직 NotReady 상태입니다. 시간이 더 필요할 수 있습니다."
    fi
    echo -n "."
    sleep 10
done

log_success "======== [4단계] Pod 네트워크 설치 완료 ========"

# ============================================
# 5. 편의 기능 설치
# ============================================
log_step "======== [5단계] 편의 기능 설치 ========"

log_info "kubectl 자동완성 기능 설치 중..."
yum -y install bash-completion

log_info "kubectl 별칭 및 자동완성 설정 중..."
echo "source <(kubectl completion bash)" >> ~/.bashrc
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc

log_info "Kubernetes Dashboard 설치 중..."
kubectl create -f https://raw.githubusercontent.com/k8s-1pro/install/main/ground/k8s-1.27/dashboard-2.7.0/dashboard.yaml

log_info "Metrics Server 설치 중..."
kubectl create -f https://raw.githubusercontent.com/k8s-1pro/install/main/ground/k8s-1.27/metrics-server-0.6.3/metrics-server.yaml

log_success "======== [5단계] 편의 기능 설치 완료 ========"

# ============================================
# 최종 검증 및 완료
# ============================================
log_step "======== 최종 클러스터 상태 검증 ========"

log_info "전체 클러스터 상태 요약:"
echo "================================================"

# 클러스터 정보
log_info "클러스터 정보:"
kubectl cluster-info --request-timeout=10s 2>/dev/null || log_warning "클러스터 정보 조회 실패"
echo "----------------------------------------"

# 노드 상태
log_info "노드 상태:"
kubectl get nodes -o wide 2>/dev/null || log_warning "노드 상태 조회 실패"
echo "----------------------------------------"

# 시스템 Pod 상태
log_info "시스템 Pod 상태:"
kubectl get pods -n kube-system 2>/dev/null || log_warning "시스템 Pod 상태 조회 실패"
echo "----------------------------------------"

# 서비스 상태 요약
log_info "핵심 서비스 상태:"
echo "- containerd: $(systemctl is-active containerd)"
echo "- kubelet: $(systemctl is-active kubelet)"
echo "================================================"

log_success "========================================"
log_success "Kubernetes 클러스터 설치가 완료되었습니다!"
log_success "========================================"

log_info "다음 명령어로 클러스터 상태를 확인할 수 있습니다:"
echo "  kubectl get nodes"
echo "  kubectl get pods -A"
echo "  kubectl cluster-info"

log_info "kubectl 자동완성을 사용하려면 새 터미널을 열거나 다음 명령어를 실행하세요:"
echo "  source ~/.bashrc"

log_info "이제 Kubernetes 클러스터를 사용할 준비가 완료되었습니다!"

# join 명령어 추출 및 출력
log_info "======== Worker 노드 추가 정보 ========"
log_info "Worker 노드를 추가하려면 다음 명령어를 사용하세요:"
if [ -f /tmp/kubeadm-join-command ]; then
    cat /tmp/kubeadm-join-command
else
    log_info "다음 명령어로 join 토큰을 다시 생성할 수 있습니다:"
    echo "  kubeadm token create --print-join-command"
fi

With these two scripts, I can now rebuild a Kubernetes cluster at any time.
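As a rough sketch of that rebuild workflow, the two scripts can be chained with a small wrapper that stops on the first failure. The file names below are assumptions for illustration, not the actual names used in this post:

```shell
# Hypothetical wrapper: run the pre-install environment script, then the
# cluster install script, in order. Adjust the file names to your own.
run_scripts() {
  local script
  for script in "$@"; do
    if [ -f "$script" ]; then
      echo "==> running $script"
      bash "$script" || { echo "==> $script failed"; return 1; }
    else
      echo "==> $script not found, skipping"
    fi
  done
}

run_scripts ./k8s-env-setup.sh ./k8s-install.sh
```

Because both scripts end each stage with their own verification and `confirm_continue` prompts, the wrapper only needs to enforce ordering and abort on the first non-zero exit.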
데브옵스 · 인프라
・
SPG
・
일프로
・
k8s
・
DevOps
・
SRE
2025. 05. 29.
1
[Inflearn Warming-up Club 4th Cohort - DevOps] Mission 1. Checking the Kubernetes Installation Status at Each Stage
Installation environment
- Chip: Apple M4 (Mac mini)
- Memory: 16GB
- macOS: Sequoia 15.5
- Virtualization platform: UTM

UTM was installed by downloading the latest version from https://mac.getutm.app/. Since I had bought the course and had been studying it before the warming-up study began, I had already done the installation once, but I went through it again as a review.

[OS Installation]
Run UTM -> Create a New Virtual Machine -> Start (select Virtualize) -> select Linux -> boot image (Rocky 9.2) -> memory 4096 MB, CPU 4 cores, storage 32GB, no shared folder -> in Summary, set the name to k8s-master

[UTM Network Settings]

[Rocky Linux Settings]
Language: Korean (Republic of Korea)

[System]
Leave partitioning on automatic configuration and click Done.

[KDUMP]
It is enabled by default, but uncheck it.

[Network & Host Name]
Click Configure and fill in the settings.

[User Settings - root password]
Once everything is configured, click Begin Installation (B). (takes about 4 minutes)
...(installation steps omitted)

[Remote Access via iTerm]
The message appeared because an entry for k8s-master with the same IP already existed, so SSH detected a host key mismatch. In that case, delete only that host's key with ssh-keygen -R and reconnect.

[Environment Setup Before Installing Kubernetes]
Only the settings that must already be in place before installation were put into a separate script and verified, and then I moved on to the Kubernetes installation. I wrote it as a script for reusability.

[Kubernetes Installation]
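The host-key fix from the remote-access step above can be reproduced in isolation. This is a minimal sketch against a scratch known_hosts file (the throwaway key and file paths are illustrative); on the real machine, a bare `ssh-keygen -R 192.168.56.30` edits ~/.ssh/known_hosts directly:

```shell
# Build a scratch known_hosts with a stale entry for the master's IP,
# imitating the state that triggers the host-key mismatch warning.
KNOWN_HOSTS=$(mktemp)
KEY_DIR=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$KEY_DIR/hostkey"
echo "192.168.56.30 $(awk '{print $1, $2}' "$KEY_DIR/hostkey.pub")" > "$KNOWN_HOSTS"

# The actual fix: remove only that host's entry.
# (-f targets our scratch file; omit it to edit ~/.ssh/known_hosts as in the post.)
ssh-keygen -f "$KNOWN_HOSTS" -R 192.168.56.30

# The stale entry is gone, so the next ssh to 192.168.56.30 re-prompts to trust the key.
! grep -q '192.168.56.30' "$KNOWN_HOSTS" && echo "stale entry removed"
```

This removes just the one offending line instead of deleting the whole known_hosts file, so keys for every other host are preserved.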
데브옵스 · 인프라
・
Kubernetes
・
DevOps
・
MSA
・
k8s
・
인프런
・
SRE
・
일프로