• Category

    Q&A
  • Topic

    DevOps · Infrastructure

  • Status

    Resolved

Question about a vagrant up error

Posted 2023-10-17 10:37 · 445 views


Hello. I keep trying to set up the lab environment, but it has been failing for days now.

 

- Environment -

Windows 10

virtualbox: 6.1.26

vagrant: 2.2.18

 

https://kubetm.github.io/k8s/02-beginner/cluster-install-case6/

I've been following this link for the installation.

 

I've deleted everything and reinstalled several times, but it still won't succeed.

I'd appreciate your help.

 

\k8s>vagrant up
Bringing machine 'k8s-master' up with 'virtualbox' provider...
Bringing machine 'k8s-node1' up with 'virtualbox' provider...
Bringing machine 'k8s-node2' up with 'virtualbox' provider...
==> k8s-master: Importing base box 'centos/7'...
==> k8s-master: Matching MAC address for NAT networking...
==> k8s-master: Checking if box 'centos/7' version '2004.01' is up to date...
==> k8s-master: Setting the name of the VM: k8s_k8s-master_1697506308375_93844
==> k8s-master: Clearing any previously set network interfaces...
==> k8s-master: Preparing network interfaces based on configuration...
    k8s-master: Adapter 1: nat
    k8s-master: Adapter 2: hostonly
==> k8s-master: Forwarding ports...
    k8s-master: 22 (guest) => 2222 (host) (adapter 1)
==> k8s-master: Running 'pre-boot' VM customizations...
==> k8s-master: Booting VM...
==> k8s-master: Waiting for machine to boot. This may take a few minutes...
    k8s-master: SSH address: 127.0.0.1:2222
    k8s-master: SSH username: vagrant
    k8s-master: SSH auth method: private key
    k8s-master:
    k8s-master: Vagrant insecure key detected. Vagrant will automatically replace
    k8s-master: this with a newly generated keypair for better security.
    k8s-master:
    k8s-master: Inserting generated public key within guest...
    k8s-master: Removing insecure key from the guest if it's present...
    k8s-master: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-master: Machine booted and ready!
[k8s-master] No Virtualbox Guest Additions installation found.
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.cat.net
 * extras: ftp.riken.jp
 * updates: ftp.riken.jp
Resolving Dependencies
--> Running transaction check
---> Package centos-release.x86_64 0:7-8.2003.0.el7.centos will be updated
---> Package centos-release.x86_64 0:7-9.2009.1.el7.centos will be an update
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package             Arch        Version                     Repository    Size
================================================================================
Updating:
 centos-release      x86_64      7-9.2009.1.el7.centos       updates       27 k

Transaction Summary
================================================================================
Upgrade  1 Package

Total download size: 27 k
Downloading packages:
No Presto metadata available for updates
warning: /var/cache/yum/x86_64/7/updates/packages/centos-release-7-9.2009.1.el7.centos.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Public key for centos-release-7-9.2009.1.el7.centos.x86_64.rpm is not installed
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Importing GPG key 0xF4A80EB5:
 Userid     : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"
 Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
 Package    : centos-release-7-8.2003.0.el7.centos.x86_64 (@anaconda)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : centos-release-7-9.2009.1.el7.centos.x86_64                  1/2
  Cleanup    : centos-release-7-8.2003.0.el7.centos.x86_64                  2/2
  Verifying  : centos-release-7-9.2009.1.el7.centos.x86_64                  1/2
  Verifying  : centos-release-7-8.2003.0.el7.centos.x86_64                  2/2

Updated:
  centos-release.x86_64 0:7-9.2009.1.el7.centos

Complete!
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.cat.net
 * extras: ftp.riken.jp
 * updates: ftp.riken.jp
http://vault.centos.org/7.0.1406/os/x86_64/repodata/repomd.xml: [Errno 14] HTTPS Error 301 - Moved Permanently
Trying other mirror.

 One of the configured repositories failed (CentOS-7.0.1406 - Base),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Run the command with the repository temporarily disabled
            yum --disablerepo=C7.0.1406-base ...

     4. Disable the repository permanently, so yum won't use it by default. Yum
        will then just ignore the repository until you permanently enable it
        again or use --enablerepo for temporary usage:

            yum-config-manager --disable C7.0.1406-base
        or
            subscription-manager repos --disable=C7.0.1406-base

     5. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=C7.0.1406-base.skip_if_unavailable=true

failure: repodata/repomd.xml from C7.0.1406-base: [Errno 256] No more mirrors to try.
http://vault.centos.org/7.0.1406/os/x86_64/repodata/repomd.xml: [Errno 14] HTTPS Error 301 - Moved Permanently
==> k8s-master: Checking for guest additions in VM...
    k8s-master: No guest additions were detected on the base box for this VM! Guest
    k8s-master: additions are required for forwarded ports, shared folders, host only
    k8s-master: networking, and more. If SSH fails on this machine, please install
    k8s-master: the guest additions and repackage the box to continue.
    k8s-master:
    k8s-master: This is not an error message; everything may continue to work properly,
    k8s-master: in which case you may ignore this message.
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

yum install -y kernel-devel-`uname -r` --enablerepo=C*-base --enablerepo=C*-updates

Stdout from the command:

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.cat.net
 * extras: ftp.riken.jp
 * updates: ftp.riken.jp

Stderr from the command:

http://vault.centos.org/7.0.1406/os/x86_64/repodata/repomd.xml: [Errno 14] HTTPS Error 301 - Moved Permanently
Trying other mirror.
(... the same "One of the configured repositories failed" help text as above ...)
failure: repodata/repomd.xml from C7.0.1406-base: [Errno 256] No more mirrors to try.
http://vault.centos.org/7.0.1406/os/x86_64/repodata/repomd.xml: [Errno 14] HTTPS Error 301 - Moved Permanently

C:\Users\SungwookCho\k8s>

7 Answers


disderi (question author) · 2023.10.23

It looks like the certificate errors were caused by our company's internal network.

I couldn't afford to spend any more time on this, so I gave up and built the same environment on EC2.

I've confirmed everything works, up to accessing the dashboard.

Thanks for your help.

I'll post further questions as I go through the lectures.

disderi (question author) · 2023.10.17

I deleted and recreated using the commands you gave. The pod turned out to be on node2, so I ran the command there, but it fails with an error.

[root@k8s-master ~]# kubectl get pods -A -o wide
NAMESPACE         NAME                                 READY   STATUS         RESTARTS      AGE   IP              NODE         NOMINATED NODE   READINESS GATES
kube-system       coredns-78fcd69978-d42t6             0/1     Pending        0             97m   <none>          <none>       <none>           <none>
kube-system       coredns-78fcd69978-h7pb5             0/1     Pending        0             97m   <none>          <none>       <none>           <none>
kube-system       etcd-k8s-master                      1/1     Running        1 (62m ago)   97m   192.168.56.30   k8s-master   <none>           <none>
kube-system       kube-apiserver-k8s-master            1/1     Running        1 (62m ago)   97m   192.168.56.30   k8s-master   <none>           <none>
kube-system       kube-controller-manager-k8s-master   1/1     Running        1 (62m ago)   97m   192.168.56.30   k8s-master   <none>           <none>
kube-system       kube-proxy-5sgbs                     1/1     Running        1 (62m ago)   97m   192.168.56.30   k8s-master   <none>           <none>
kube-system       kube-proxy-p464f                     1/1     Running        1 (62m ago)   79m   192.168.56.32   k8s-node2    <none>           <none>
kube-system       kube-proxy-xb4h6                     1/1     Running        1 (62m ago)   79m   192.168.56.31   k8s-node1    <none>           <none>
kube-system       kube-scheduler-k8s-master            1/1     Running        1 (62m ago)   97m   192.168.56.30   k8s-master   <none>           <none>
tigera-operator   tigera-operator-cffd8458f-srgq5      0/1     ErrImagePull   0             42s   192.168.56.32   k8s-node2    <none>           <none>
[root@k8s-master ~]# kubectl get pods -A -o wide
NAMESPACE         NAME                                 READY   STATUS             RESTARTS      AGE   IP              NODE         NOMINATED NODE   READINESS GATES
kube-system       coredns-78fcd69978-d42t6             0/1     Pending            0             98m   <none>          <none>       <none>           <none>
kube-system       coredns-78fcd69978-h7pb5             0/1     Pending            0             98m   <none>          <none>       <none>           <none>
kube-system       etcd-k8s-master                      1/1     Running            1 (63m ago)   98m   192.168.56.30   k8s-master   <none>           <none>
kube-system       kube-apiserver-k8s-master            1/1     Running            1 (63m ago)   98m   192.168.56.30   k8s-master   <none>           <none>
kube-system       kube-controller-manager-k8s-master   1/1     Running            1 (63m ago)   98m   192.168.56.30   k8s-master   <none>           <none>
kube-system       kube-proxy-5sgbs                     1/1     Running            1 (63m ago)   98m   192.168.56.30   k8s-master   <none>           <none>
kube-system       kube-proxy-p464f                     1/1     Running            1 (63m ago)   80m   192.168.56.32   k8s-node2    <none>           <none>
kube-system       kube-proxy-xb4h6                     1/1     Running            1 (63m ago)   80m   192.168.56.31   k8s-node1    <none>           <none>
kube-system       kube-scheduler-k8s-master            1/1     Running            1 (63m ago)   98m   192.168.56.30   k8s-master   <none>           <none>
tigera-operator   tigera-operator-cffd8458f-srgq5      0/1     ImagePullBackOff   0             89s   192.168.56.32   k8s-node2    <none>           <none>
[root@k8s-master ~]#

 

[root@k8s-node2 ~]# docker pull quay.io/tigera/operator:v1.29.0
v1.29.0: Pulling from tigera/operator
749705320018: Pulling fs layer
fc21612f0d50: Pulling fs layer
e5c5babd8e74: Pulling fs layer
c9aed26feef0: Waiting
aa46afbc1309: Waiting
b39dee2bcafd: Waiting
781629545639: Waiting
error pulling image configuration: Get "https://cdn03.quay.io/sha256/34/343ea4f89a32c8f197173c5d9f1ad64eb033df452c5b89a65877d8d3cfa692b1?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20231017%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20231017T075552Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=52bfe5e89f7195145f8fba675b3b082f44c6f540eeecc3be6ca3fb16d2c53a8d&cf_sign=mHdRmauN8PoTOTxLhrR%2BeI6wURLJ3hQMiPz8R1%2FMM5gTdTjgtGdu8PcuMpSemMpFPu%2B%2Fl4fYpV4pnjsBI7FdJhOoiU8SZyVEpbJSmL%2FZISY880UquwT6D7cNQTw%2FvXlFw%2BKTC%2BI5BonKCj7h1zRXAehFk3F0UT8VNDRLN7AbzAsp6JN8moMDry7ATZ%2Bcfd%2FUaZrcqqQdFuyrQdyq2o4sHZVa0oj1ctVlCm8jWfuOJS7Ob2Vuo1B39WRqyqiwMjznjAZEnvoRJdXBv6vH%2BSFtUqlDSQE%2FBquSQYhmg%2FJe%2B300qus31n9gi2XJQj%2FtLmAKzPK6Vj8x0MfzvOWrbNn7lQ%3D%3D&cf_expiry=1697529952&region=us-east-1": x509: certificate signed by unknown authority

 

 

Searching for this error, it seems certificate errors like this occur frequently on corporate networks.

Are you on a company network right now?

 

https://velog.io/@ptah0414/Docker-Error-response-from-daemon-Get-httpsregistry-1.docker.iov2-x509-certificate-signed-by-unknown-authority-%EC%97%90%EB%9F%AC

https://joycecoder.tistory.com/100
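On a corporate network that does TLS inspection (as the Zscaler errors above suggest), a common remedy is to add the company's root CA to the guest's trust store so Docker can verify the re-signed certificates. A sketch for CentOS 7, assuming the corporate root CA has been exported from the host to a file named zscaler-root.pem (hypothetical filename):

```shell
# zscaler-root.pem: corporate root CA exported from a browser on the host (assumption)
sudo cp zscaler-root.pem /etc/pki/ca-trust/source/anchors/

# Rebuild the CentOS 7 shared system trust bundle
sudo update-ca-trust extract

# Docker reads the system CAs at startup, so restart it to pick up the change
sudo systemctl restart docker
```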

disderi (question author) · 2023.10.17

But... the Dashboard page doesn't come up.

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "services \"kubernetes-dashboard\" not found",
  "reason": "NotFound",
  "details": {
    "name": "kubernetes-dashboard",
    "kind": "services"
  },
  "code": 404
}
[root@k8s-master ~]# kubectl get pod -A
NAMESPACE         NAME                                 READY   STATUS         RESTARTS   AGE
kube-system       coredns-78fcd69978-d42t6             0/1     Pending        0          27m
kube-system       coredns-78fcd69978-h7pb5             0/1     Pending        0          27m
kube-system       etcd-k8s-master                      1/1     Running        0          27m
kube-system       kube-apiserver-k8s-master            1/1     Running        0          27m
kube-system       kube-controller-manager-k8s-master   1/1     Running        0          27m
kube-system       kube-proxy-5sgbs                     1/1     Running        0          27m
kube-system       kube-proxy-p464f                     1/1     Running        0          9m34s
kube-system       kube-proxy-xb4h6                     1/1     Running        0          9m41s
kube-system       kube-scheduler-k8s-master            1/1     Running        0          27m
tigera-operator   tigera-operator-cffd8458f-7zvq9      0/1     ErrImagePull   0          27m
[root@k8s-master ~]# kubectl get pod -A
NAMESPACE         NAME                                 READY   STATUS             RESTARTS   AGE
kube-system       coredns-78fcd69978-d42t6             0/1     Pending            0          29m
kube-system       coredns-78fcd69978-h7pb5             0/1     Pending            0          29m
kube-system       etcd-k8s-master                      1/1     Running            0          30m
kube-system       kube-apiserver-k8s-master            1/1     Running            0          30m
kube-system       kube-controller-manager-k8s-master   1/1     Running            0          30m
kube-system       kube-proxy-5sgbs                     1/1     Running            0          29m
kube-system       kube-proxy-p464f                     1/1     Running            0          11m
kube-system       kube-proxy-xb4h6                     1/1     Running            0          12m
kube-system       kube-scheduler-k8s-master            1/1     Running            0          30m
tigera-operator   tigera-operator-cffd8458f-7zvq9      0/1     ImagePullBackOff   0          29m

coredns is stuck in Pending, and tigera-operator keeps alternating between ImagePullBackOff and ErrImagePull.

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   32m   v1.22.0
k8s-node1    NotReady   <none>                 14m   v1.22.0
k8s-node2    NotReady   <none>                 14m   v1.22.0

How should I fix this?
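For context (an editorial note, not from the thread): nodes typically stay NotReady, and CoreDNS stays Pending, until a CNI plugin such as Calico is actually running on them, which here is blocked by the tigera-operator image pull. A sketch of how to confirm the cause (assuming the tigera-operator-managed setup from this course, where Calico pods land in the calico-system namespace):

```shell
# The NotReady reason usually appears in the node's conditions/events,
# e.g. "container runtime network not ready" when no CNI is installed yet
kubectl describe node k8s-master

# Once tigera-operator can pull its image, Calico pods should appear here
kubectl get pods -n calico-system
```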

 

 

First, I've confirmed that the image itself is published correctly.

Could you delete it with the commands below and then create it again?

# Delete
kubectl delete -f https://raw.githubusercontent.com/kubetm/kubetm.github.io/master/yamls/k8s-install/calico.yaml
kubectl delete -f https://raw.githubusercontent.com/kubetm/kubetm.github.io/master/yamls/k8s-install/calico-custom.yaml

# Create
kubectl create -f https://raw.githubusercontent.com/kubetm/kubetm.github.io/master/yamls/k8s-install/calico.yaml
kubectl create -f https://raw.githubusercontent.com/kubetm/kubetm.github.io/master/yamls/k8s-install/calico-custom.yaml

 

If that still doesn't work, run kubectl get pods -A -o wide to see which node the Pod lands on. It's most likely the Master Node; on that node, try pulling the image directly with the command below.

docker pull quay.io/tigera/operator:v1.29.0
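After pulling manually, a quick way to verify the image is actually on the node and watch the pod recover might look like this (a sketch; the kubelet retries the pull on its own backoff schedule):

```shell
# Confirm the manually pulled image is now present on the node
docker images | grep tigera/operator

# Watch the tigera-operator pod; it should leave ImagePullBackOff
# on the next kubelet retry once the image is local
kubectl get pods -n tigera-operator -w
```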

 

0

The SSL problem needs to be resolved first.

I found the following:

https://github.com/hashicorp/vagrant/issues/13156

 

Did you by any chance already have an older version of Vagrant installed, rather than installing it fresh?

The issue above suggests two workarounds.

First:

vagrant plugin install --plugin-clean-sources --plugin-source https://rubygems.org vagrant-*

Second:

drop https://curl.se/ca/cacert.pem in "C:\HashiCorp\Vagrant\embedded"
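A third possible workaround (an assumption on my part, not from the issue): Vagrant's embedded Ruby honors the standard SSL_CERT_FILE environment variable, so pointing it at a freshly downloaded CA bundle can have the same effect as replacing the embedded cacert.pem, without touching the install directory. On Windows cmd this might look like:

```shell
# Download a current CA bundle and point Vagrant's TLS stack at it
# (SSL_CERT_FILE is the standard OpenSSL/Ruby override; path is illustrative)
curl -o %USERPROFILE%\cacert.pem https://curl.se/ca/cacert.pem
set SSL_CERT_FILE=%USERPROFILE%\cacert.pem
vagrant plugin install vagrant-vbguest
```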
disderi (question author) · 2023.10.17

Thank you.

I'm not sure exactly which fix did it, but the VMs installed, and I've confirmed that all three boot and are reachable.

One of the comments on that link said they forced the certificate in, so I did the same. For the plugin install, changing https to http still produced the certificate error, but the plugin did install.

I also modified the Vagrantfile. At first it failed, apparently because the vbguest plugin hadn't installed properly, but once the plugin was in place, vagrant up ran.

 curl -o "C:\HashiCorp\Vagrant\embedded\cacert.pem" https://curl.se/ca/cacert.pem
 vagrant plugin install --plugin-clean-sources --plugin-source http://rubygems.org vagrant-reload vagrant-vbguest winrm winrm-elevated

\k8s>vagrant plugin list
vagrant-reload (0.0.1, global)
vagrant-vbguest (0.31.0, global)

  config.vm.box = "centos/7"
  config.vm.synced_folder "./", "/vagrant", disabled: true
  config.vbguest.installer_options = { allow_kernel_upgrade: true }


If it still doesn't work, try the following.

Uninstall the currently installed vagrant-vbguest, and reinstall the plugin pinned to a specific version.

vagrant plugin uninstall vagrant-vbguest
vagrant plugin install vagrant-vbguest --plugin-version 0.21


Could you open the Vagrantfile and add the installer_options setting?

Put it right below config.vm.synced_folder.

config.vm.synced_folder "./", "/vagrant", disabled: true
config.vbguest.installer_options = { allow_kernel_upgrade: true }

 


Hello.

Ah, you should have posted a question sooner.

Nothing is more stressful than an install that won't work; that must have been rough.

We get a lot of installation questions, and it's remarkable that each one brings a new error log; this is the first time I've seen one this long.

Let's check things one at a time.

 

First, did you run vagrant plugin install vagrant-vbguest before vagrant up?

We recently changed the installation steps because of the [[k8s-master] No Virtualbox Guest Additions installation found.] log.

C:\Users\사용자>mkdir k8s
C:\Users\사용자>cd k8s 
C:\Users\사용자\k8s> curl -O https://kubetm.github.io/yamls/k8s-install/Vagrantfile
C:\Users\사용자\k8s> vagrant plugin install vagrant-vbguest

 

If you haven't tried that yet, delete the folder entirely and start over.

If you still get the error after that, please reply again. Thank you.

 

disderi (question author) · 2023.10.17

Yes, I deleted the k8s folder and tried again.

 

When I retried, the plugin failed to install, as shown below.

C:\Users\사용자\k8s>vagrant plugin install vagrant-vbguest
Installing the 'vagrant-vbguest' plugin. This can take a few minutes...
ERROR:  SSL verification error at depth 2: unable to get local issuer certificate (20)
ERROR:  You must add /C=US/ST=California/L=San Jose/O=Zscaler Inc./OU=Zscaler Inc./CN=Zscaler Root CA/emailAddress=support@zscaler.com to your local trusted store
Vagrant failed to load a configured plugin source. This can be caused
by a variety of issues including: transient connectivity issues, proxy
filtering rejecting access to a configured plugin source, or a configured
plugin source not responding correctly. Please review the error message
below to help resolve the issue:

  SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get local issuer certificate) (https://rubygems.org/specs.4.8.gz)

 

Googling the certificate error, I found advice to modify

C:\HashiCorp\Vagrant\embedded\gems\2.2.18\gems\vagrant-2.2.18\plugins\commands\plugin\command\mixin_install_opts.rb, so I changed it as below.

module VagrantPlugins
  module CommandPlugin
    module Command
      module MixinInstallOpts
        def build_install_opts(o, options)
          options[:plugin_sources] = [
            "http://rubygems.org",
            "http://gems.hashicorp.com",
          ]

          o.on("--entry-point NAME", String,
               "The name of the entry point file for loading the plugin.") do |entry_point|
            options[:entry_point] = entry_point
          end

          o.on("--plugin-clean-sources",
            "Remove all plugin sources defined so far (including defaults)") do |clean|
            options[:plugin_sources] = [] if clean
          end

          o.on("--plugin-source PLUGIN_SOURCE", String,
               "Add a RubyGems repository source") do |plugin_source|
            options[:plugin_sources] << plugin_source
          end

          o.on("--plugin-version PLUGIN_VERSION", String,
               "Install a specific version of the plugin") do |plugin_version|
            options[:plugin_version] = plugin_version
          end
        end
      end
    end
  end
end
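An editorial aside: this patch just swaps the default plugin sources from https to plain http, which sidesteps TLS verification of the gem source. The file itself defines command-line flags that achieve the same source override without editing the gem, so an equivalent invocation would presumably be:

```shell
# Drop the built-in https sources and add an http one for this install only,
# using the --plugin-clean-sources / --plugin-source flags defined in
# mixin_install_opts.rb above (avoids patching Vagrant's embedded gems)
vagrant plugin install --plugin-clean-sources --plugin-source http://rubygems.org vagrant-vbguest
```

Note that either form fetches gems over an unverified channel, so it is best treated as a stopgap until the corporate root CA is trusted properly.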
disderi (question author) · 2023.10.17

After that, the plugin install succeeded, as shown below.

C:\Users\사용자>mkdir k8s

C:\Users\사용자>cd k8s

C:\Users\사용자\k8s>curl -O https://kubetm.github.io/yamls/k8s-install/Vagrantfile
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4240  100  4240    0     0   6947      0 --:--:-- --:--:-- --:--:--  6962

C:\Users\사용자\k8s>vagrant plugin install vagrant-vbguest
==> vagrant: A new version of Vagrant is available: 2.4.0 (installed version: 2.2.18)!
==> vagrant: To upgrade visit: https://www.vagrantup.com/downloads.html

Installing the 'vagrant-vbguest' plugin. This can take a few minutes...
ERROR:  SSL verification error at depth 2: unable to get local issuer certificate (20)
ERROR:  You must add /C=US/ST=California/L=San Jose/O=Zscaler Inc./OU=Zscaler Inc./CN=Zscaler Root CA/emailAddress=support@zscaler.com to your local trusted store
Fetching micromachine-3.0.0.gem
Fetching vagrant-vbguest-0.31.0.gem
Installed the plugin 'vagrant-vbguest (0.31.0)'!
disderi (question author) · 2023.10.17

After that, running vagrant up fails with the error below.

C:\Users\사용자\k8s>vagrant up
Bringing machine 'k8s-master' up with 'virtualbox' provider...
Bringing machine 'k8s-node1' up with 'virtualbox' provider...
Bringing machine 'k8s-node2' up with 'virtualbox' provider...
==> k8s-master: Box 'centos/7' could not be found. Attempting to find and install...
    k8s-master: Box Provider: virtualbox
    k8s-master: Box Version: >= 0
==> k8s-master: Loading metadata for box 'centos/7'
    k8s-master: URL: https://vagrantcloud.com/centos/7
==> k8s-master: Adding box 'centos/7' (v2004.01) for provider: virtualbox
    k8s-master: Downloading: https://vagrantcloud.com/centos/boxes/7/versions/2004.01/providers/virtualbox/unknown/vagrant.box
Download redirected to host: cloud.centos.org
    k8s-master:
    k8s-master: Calculating and comparing box checksum...
==> k8s-master: Successfully added box 'centos/7' (v2004.01) for 'virtualbox'!
==> k8s-master: Importing base box 'centos/7'...
==> k8s-master: Matching MAC address for NAT networking...
==> k8s-master: Checking if box 'centos/7' version '2004.01' is up to date...
==> k8s-master: Setting the name of the VM: k8s_k8s-master_1697513201415_65754
==> k8s-master: Clearing any previously set network interfaces...
==> k8s-master: Preparing network interfaces based on configuration...
    k8s-master: Adapter 1: nat
    k8s-master: Adapter 2: hostonly
==> k8s-master: Forwarding ports...
    k8s-master: 22 (guest) => 2222 (host) (adapter 1)
==> k8s-master: Running 'pre-boot' VM customizations...
==> k8s-master: Booting VM...
==> k8s-master: Waiting for machine to boot. This may take a few minutes...
    k8s-master: SSH address: 127.0.0.1:2222
    k8s-master: SSH username: vagrant
    k8s-master: SSH auth method: private key
    k8s-master:
    k8s-master: Vagrant insecure key detected. Vagrant will automatically replace
    k8s-master: this with a newly generated keypair for better security.
    k8s-master:
    k8s-master: Inserting generated public key within guest...
    k8s-master: Removing insecure key from the guest if it's present...
    k8s-master: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-master: Machine booted and ready!
[k8s-master] No Virtualbox Guest Additions installation found.
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: ftp.jaist.ac.jp
 * extras: ftp.jaist.ac.jp
 * updates: ftp.jaist.ac.jp
Resolving Dependencies
--> Running transaction check
---> Package centos-release.x86_64 0:7-8.2003.0.el7.centos will be updated
---> Package centos-release.x86_64 0:7-9.2009.1.el7.centos will be an update
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package             Arch        Version                     Repository    Size
================================================================================
Updating:
 centos-release      x86_64      7-9.2009.1.el7.centos       updates       27 k

Transaction Summary
================================================================================
Upgrade  1 Package

Total download size: 27 k
Downloading packages:
No Presto metadata available for updates
warning: /var/cache/yum/x86_64/7/updates/packages/centos-release-7-9.2009.1.el7.centos.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Public key for centos-release-7-9.2009.1.el7.centos.x86_64.rpm is not installed
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Importing GPG key 0xF4A80EB5:
 Userid     : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"
 Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
 Package    : centos-release-7-8.2003.0.el7.centos.x86_64 (@anaconda)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : centos-release-7-9.2009.1.el7.centos.x86_64                  1/2
  Cleanup    : centos-release-7-8.2003.0.el7.centos.x86_64                  2/2
  Verifying  : centos-release-7-9.2009.1.el7.centos.x86_64                  1/2
  Verifying  : centos-release-7-8.2003.0.el7.centos.x86_64                  2/2

Updated:
  centos-release.x86_64 0:7-9.2009.1.el7.centos

Complete!
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: ftp.jaist.ac.jp
 * extras: ftp.jaist.ac.jp
 * updates: ftp.jaist.ac.jp
No package kernel-devel-3.10.0-1127.el7.x86_64 available.
Error: Nothing to do
Unmounting Virtualbox Guest Additions ISO from: /mnt
umount: /mnt: not mounted
==> k8s-master: Checking for guest additions in VM...
    k8s-master: No guest additions were detected on the base box for this VM! Guest
    k8s-master: additions are required for forwarded ports, shared folders, host only
    k8s-master: networking, and more. If SSH fails on this machine, please install
    k8s-master: the guest additions and repackage the box to continue.
    k8s-master:
    k8s-master: This is not an error message; everything may continue to work properly,
    k8s-master: in which case you may ignore this message.
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

umount /mnt

Stdout from the command:



Stderr from the command:

umount: /mnt: not mounted

C:\Users\사용자\k8s>
C:\Users\사용자\k8s>vagrant plugin list
vagrant-vbguest (0.31.0, global)
disderi (question author) · 2023.10.17

So I installed the plugin pinned to version 0.21.0, and the result is what I posted in the original question.

C:\Users\사용자\k8s>vagrant plugin install --plugin-version "= 0.21" vagrant-vbguest
==> vagrant: A new version of Vagrant is available: 2.4.0 (installed version: 2.2.18)!
==> vagrant: To upgrade visit: https://www.vagrantup.com/downloads.html

Installing the 'vagrant-vbguest --version '= 0.21'' plugin. This can take a few minutes...
ERROR:  SSL verification error at depth 2: unable to get local issuer certificate (20)
ERROR:  You must add /C=US/ST=California/L=San Jose/O=Zscaler Inc./OU=Zscaler Inc./CN=Zscaler Root CA/emailAddress=support@zscaler.com to your local trusted store
Fetching micromachine-3.0.0.gem
Fetching vagrant-vbguest-0.21.0.gem
Installed the plugin 'vagrant-vbguest (0.21.0)'!