Ask & Answer
A community of 1.64 million members!! Let's discuss together.
Inflearn TOP Writers
-
[Unresolved] 실전! 스프링 부트와 JPA 활용1 - 웹 애플리케이션 개발
Handling JUnit 5 fail()

@Test
public void 상품주문_재고수량초과() throws Exception {
    Member member = createMember();
    Item item = createBook("CentOS9", 20000, 20);

    int orderCount = 21;

    assertThrows(NotEnoughStockException.class, () -> {
        orderService.order(member.getId(), item.getId(), orderCount);
    }, "재고 수량 예외가 발생해야 한다.");
}

Hello 영한님, I'm really enjoying the lectures!! I've been practicing while modifying the lecture code bit by bit, and I have a question about testing. I print the failure message through the message parameter of assertThrows; if I write it this way, is it okay to omit the fail() method?
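For context, assertThrows itself fails the test when no exception is thrown, which is exactly the job the explicit fail() used to do in the try/catch pattern. A minimal plain-Java sketch of that semantic (a hand-rolled helper for illustration, not JUnit's real implementation):

```java
public class AssertThrowsSketch {
    // Simplified stand-in for JUnit 5's assertThrows: runs the action and
    // throws AssertionError unless the expected exception type is thrown.
    static <T extends Throwable> T assertThrows(Class<T> expected, Runnable action, String message) {
        try {
            action.run();
        } catch (Throwable t) {
            if (expected.isInstance(t)) {
                return expected.cast(t); // expected exception: test passes
            }
            throw new AssertionError(message + " (unexpected: " + t + ")");
        }
        // No exception at all: this branch plays the role an explicit fail()
        // would, which is why fail() can be omitted when using assertThrows.
        throw new AssertionError(message);
    }

    public static void main(String[] args) {
        IllegalStateException e = assertThrows(IllegalStateException.class,
                () -> { throw new IllegalStateException("stock exceeded"); },
                "a stock exception should be raised");
        System.out.println("caught: " + e.getMessage()); // prints: caught: stock exceeded
    }
}
```

So the answer the question is driving at: the "no exception thrown" failure path is already built into assertThrows, making a trailing fail() redundant.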
-
[Resolved] [코드팩토리] [중급] Flutter 진짜 실전! 상태관리, 캐시관리, Code Generation, GoRouter, 인증로직 등 중수가 되기 위한 필수 스킬들!
Question about the rules setting in analysis_options.yaml

TextStyle keeps getting underlined in my editor, and when I searched, I found that I need to set rules in analysis_options.yaml. Could you share the rule settings you used?
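I cannot know the instructor's actual settings, but for reference, a commonly used shape for analysis_options.yaml (the specific rules below are assumptions, not the course's configuration; the underline on TextStyle is often the prefer_const_constructors lint suggesting const):

```yaml
include: package:flutter_lints/flutter.yaml

linter:
  rules:
    # Assumed example rules; adjust to taste. Disabling
    # prefer_const_constructors removes the common "add const" underline.
    prefer_const_constructors: false
    prefer_const_literals_to_create_immutables: false
```

After editing the file, restarting the Dart analysis server (or the IDE) makes the change take effect.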
-
[Resolved] 프론트엔드 개발환경의 이해와 실습 (webpack, babel, eslint..)
Question about referencing asset files in the development environment

Summary
I would like to know how to correctly reference image files under src/assets/.... in the development environment.

Project layout
The project consists of:
public [index.html, favicon.ico]
src [assets [image0, image1...], index.js, etc.]

Installed packages:
"webpack": "^5.75.0",
"webpack-cli": "^5.0.1",
"webpack-dev-server": "^4.11.1"
// The course uses version 4.x.x, but...
// I need to study version 5... sorry 😥

Description
When I develop with the dev server running, edits to JS files are reflected immediately. But for image files, if I change the code to reference a different file, that file fails to load, and the built file is referenced instead.
For example, if the static image file is ./src/assets/image_0.jpg, checking through the dev server shows HOST/dist/assets/images/[hash][ext][query].jpg (a different path).
Also, when I build, only the image files actually used in the code get built, even though the assets directory clearly contains many more images.
So if the code dynamically references another static image file at runtime, it will error out because that image does not exist. How should this be handled?

Code

const path = require('path');
const { BannerPlugin, DefinePlugin } = require('webpack');
const childProcess = require('child_process');
const HtmlWebpackPlugin = require('html-webpack-plugin');
const MiniCssExtractPlugin = require('mini-css-extract-plugin');

const isDevMode = (process.env.NODE_ENV || 'development').trim() === 'development';
console.log('is DEV mode?', isDevMode);
console.log('__dirname: ', __dirname);

module.exports = {
  mode: isDevMode ? 'development' : 'production',
  // entry: think of this as where webpack starts.
  entry: {
    main: './src/index.js',
  },
  /**
   * output
   * Starting from the entry point, all .js files are combined
   * into one bundle file; this option specifies where to save it.
   */
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: isDevMode ? '[name].js' : 'main.[contenthash].js',
    chunkFilename: '[id].chunk.js',
    assetModuleFilename: 'images/[hash][ext][query]',
    clean: true,
  },
  devServer: {
    port: 3000,
    hot: true,
    client: {
      overlay: {
        errors: true,
        warnings: false,
      },
    },
    // static: {
    //   directory: path.resolve(__dirname, './src/assets/'),
    // },
  },
  /**
   * module
   * Inspects the files matched by each `test` and runs the configured
   * loaders on the files that meet the conditions.
   */
  module: {
    rules: [
      {
        test: /\.(sa|sc|c)ss$/i,
        exclude: [/node_modules/],
        use: [
          // creates 'style' nodes from JS strings
          isDevMode
            ? 'style-loader'
            : {
                loader: MiniCssExtractPlugin.loader,
                options: { publicPath: '' },
              },
          // translates css into common JS
          'css-loader',
          'postcss-loader',
          // compiles sass to css
          'sass-loader',
        ],
      },
      {
        test: /\.(png|svg|jpg|jpeg|gif)$/i,
        exclude: [/node_modules/],
        type: 'asset/resource',
        parser: {
          dataUrlCondition: {
            // Files under 8kb are handled as inline modules,
            // otherwise as resource modules.
            maxSize: 4 * 1042,
          },
        },
        // generator: {
        //   publicPath: './assets/',
        //   outputPath: './assets/',
        // },
      },
      {
        test: /\.js$/,
        exclude: [/node_modules/],
        loader: 'babel-loader',
      },
      {
        test: /\.(woff|woff2|eot|ttf|otf)$/i,
        exclude: [/node_modules/],
        type: 'asset/resource',
      },
    ],
  },
  plugins: [
    /**
     * Plugin for configuring the API server address
     * for development and for deployment.
     */
    // new DefinePlugin({
    //   NODE_ENV: 'development',
    // }),
    new BannerPlugin({
      banner: `Build Date: ${new Date().toLocaleString()}
Commit Version: ${childProcess.execSync('git rev-parse --short HEAD')}
Author: ${childProcess.execSync('git config user.name')}`,
    }),
    new HtmlWebpackPlugin({
      template: './public/index.html',
      templateParameters: {
        env: isDevMode ? '개발용' : '배포용',
      },
      minify: !isDevMode
        ? {
            collapseWhitespace: true,
            removeComments: true,
          }
        : false,
    }),
    ...(!isDevMode
      ? [
          new MiniCssExtractPlugin({
            filename: isDevMode ? '[name].css' : '[name].[contenthash].css',
            chunkFilename: isDevMode ? '[id].css' : '[id].[contenthash].css',
          }),
        ]
      : []),
  ],
};

Conclusion
To sum up:
1. How should I configure things so that static image files are referenced correctly in development mode?
2. Why does the build include only the image files that are used in the code?

Thank you in advance for your answer.
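One common approach for runtime-referenced assets (a sketch, not the course's official answer): webpack only emits assets it can reach from static import/require statements, so files picked dynamically at runtime need to be served and copied verbatim, for example with devServer.static during development and copy-webpack-plugin (assumed to be installed separately) for builds:

```js
// Sketch: serve src/assets as-is in dev, and copy it verbatim into the
// build output so dynamically chosen files exist at stable URLs.
const path = require('path');
const CopyWebpackPlugin = require('copy-webpack-plugin'); // assumed installed

module.exports = {
  devServer: {
    // Serve ./src/assets directly at /assets while developing.
    static: {
      directory: path.resolve(__dirname, 'src/assets'),
      publicPath: '/assets',
    },
  },
  plugins: [
    // Copy every file under src/assets into dist/assets on build,
    // bypassing the "only imported assets are emitted" behavior.
    new CopyWebpackPlugin({
      patterns: [{ from: 'src/assets', to: 'assets' }],
    }),
  ],
};
```

This also answers the second question: emitting only imported images is by design, because the dependency graph is built from static imports; anything outside that graph must be copied explicitly.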
-
[Unresolved] 프로그래밍 시작하기 : 파이썬 입문 (Inflearn Original)
Requesting the lecture materials.

My address is bluejayy@hanmail.net. Thank you.
-
[Unresolved] AWS(Amazon Web Service) 중/상급자를 위한 강의
KeyPairName error

Using the N. Virginia region, I entered the following in the terminal:

aws cloudformation create-stack --stack-name CodeDeployDemoStack \
--template-url https://aws-learner-code-pipeline-practice.s3.amazonaws.com/CF_Template.json \
--parameters ParameterKey=InstanceCount,ParameterValue=1 \
ParameterKey=InstanceType,ParameterValue=t2.micro \
ParameterKey=KeyPairName,ParameterValue= \
ParameterKey=OperatingSystem,ParameterValue=Linux \
ParameterKey=SSHLocation,ParameterValue=0.0.0.0/0 \
ParameterKey=TagKey,ParameterValue=Name \
ParameterKey=TagValue,ParameterValue=CodeDeployDemo \
--capabilities CAPABILITY_IAM

and I keep getting this error:

An error occurred (ValidationError) when calling the CreateStack operation: Parameter KeyPairName failed to satisfy constraint: KeyPairName is a required Field and can contain only ASCII characters.

I set the policies correctly, created the key pair, and even granted permissions with chmod 400, but it still fails. Why could that be? ㅜㅜ
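As a side note, in the command as pasted the KeyPairName parameter is empty (ParameterValue=), which matches the "required Field" part of the error. A hedged sketch of the fix, where my-keypair is a hypothetical placeholder for a key pair name registered in the same region the stack is created in:

```
# List the key pair NAMES registered in the current region (the value
# expected here is the EC2 key pair name, not the .pem file path).
aws ec2 describe-key-pairs --query 'KeyPairs[].KeyName'

# "my-keypair" below is a hypothetical placeholder; substitute a name
# returned by the command above.
aws cloudformation create-stack --stack-name CodeDeployDemoStack \
  --template-url https://aws-learner-code-pipeline-practice.s3.amazonaws.com/CF_Template.json \
  --parameters ParameterKey=InstanceCount,ParameterValue=1 \
    ParameterKey=InstanceType,ParameterValue=t2.micro \
    ParameterKey=KeyPairName,ParameterValue=my-keypair \
    ParameterKey=OperatingSystem,ParameterValue=Linux \
    ParameterKey=SSHLocation,ParameterValue=0.0.0.0/0 \
    ParameterKey=TagKey,ParameterValue=Name \
    ParameterKey=TagValue,ParameterValue=CodeDeployDemo \
  --capabilities CAPABILITY_IAM
```

chmod 400 on the .pem file only matters for SSH later; CloudFormation validates just the name string.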
-
[Unresolved] [C#과 유니티로 만드는 MMORPG 게임 개발 시리즈] Part1: C# 기초 프로그래밍 입문
[TextRPG2 player creation] Question about constructor function calls

Around 12:40 in the "TextRPG2 player creation" lecture, SetInfo is called inside the Knight constructor. Why can't SetInfo be called directly in the body of the Knight class instead? Isn't it inherited?
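Not the lecture's actual C# code, but the rule is the same in C# and Java, so here is a runnable Java sketch: inheriting SetInfo only makes it callable, nothing calls it for you, and a method call is a statement, which may only appear inside a method or constructor body, never directly in a class body:

```java
class Player {
    protected int hp;

    // Inherited by subclasses, but it never runs unless something calls it.
    protected void setInfo(int hp) {
        this.hp = hp;
    }
}

class Knight extends Player {
    // setInfo(100);  // compile error: a statement cannot sit in a class body

    Knight() {
        setInfo(100);  // legal: statements must live inside a constructor/method
    }
}

public class ConstructorCallSketch {
    public static void main(String[] args) {
        Knight k = new Knight();       // constructor runs, which runs setInfo
        System.out.println(k.hp);      // prints 100
    }
}
```

So the constructor is simply the natural place to run initialization statements at object-creation time; inheritance decides what you may call, not when anything gets called.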
-
[Unresolved] 핵심만 쉽게, 모두의 SQL 비법 레시피
Question about the 모두의 SQL course

Hello. I really enjoyed the 모두의 SQL 비법 레시피 course. After repeating this course two or three times, I plan to continue with the 모두의 SQL 데이터 분석 course. I already have Oracle installed; is it okay to use any version, or does the version matter? I'm worried the version might block me before the lessons even start.
-
[Unresolved] 웹 게임을 만들며 배우는 React
React.Fragment

Even after changing to React.Fragment, the div tag remains in the Elements panel. It does not change even after refreshing.
-
[Resolved] [코드팩토리] [중급] Flutter 진짜 실전! 상태관리, 캐시관리, Code Generation, GoRouter, 인증로직 등 중수가 되기 위한 필수 스킬들!
Why do the menu labels appear when changing to type: BottomNavigationBarType.fixed?

With type: BottomNavigationBarType.shifting, the menu labels do not appear, but after changing to type: BottomNavigationBarType.fixed, they do. Why is that?
-
[Unresolved] mongoDB 기초부터 실무까지(feat. Node.js)
Is there a reason a document can't be found when its key contains a dot?

db.users.insertOne({name: {first: "Elon", last: "Musk"}})
{ _id: ObjectId("63a00859907755c4cf9829a3"), name: { first: 'Elon', last: 'Musk' } }

db.users.insertOne({"name.first": "Elon", "name.last": "Musk"})
{ acknowledged: true, insertedId: ObjectId("63a00bd9907755c4cf9829a7") }

db.users.findOne({"name.first": "Elon"})
{ _id: ObjectId("63a00859907755c4cf9829a3"), name: { first: 'Elon', last: 'Musk' } }

I'm wondering why only one document is found when I do the above. How should I query the key that was created as the literal name.first?
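The behavior comes from the query language interpreting a dotted field in a filter as a path into subdocuments, never as a literal key. A small Python sketch of that matching rule (hand-rolled for illustration, not MongoDB's actual implementation):

```python
def matches(doc, field, value):
    """Mimic dot-notation matching: 'name.first' is always treated as a
    path (doc['name']['first']), never as the literal key 'name.first'."""
    current = doc
    for part in field.split("."):
        if not isinstance(current, dict) or part not in current:
            return False
        current = current[part]
    return current == value

nested = {"name": {"first": "Elon", "last": "Musk"}}   # first insertOne
flat = {"name.first": "Elon", "name.last": "Musk"}     # second insertOne

print(matches(nested, "name.first", "Elon"))  # True: path traversal succeeds
print(matches(flat, "name.first", "Elon"))    # False: there is no 'name' subdocument
```

As far as I know, recent MongoDB versions can reach such literal dotted keys through aggregation operators like $getField, but a plain find() filter always treats the dot as a path, which is why dotted keys in documents are best avoided.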
-
[Resolved] [코드캠프] 시작은 프리캠프
Is there a way other than separating divideLine with margins?

In part 5 of building 싸이월드, you separated the elements crowded around divideLine by adding margin values. Instead of pushing them apart with arbitrary margin values, I would like to space the areas out evenly with justify-content: space-between. But wrapper__header uses flex-direction: column, so justify-content does not seem to work. Is there another way?
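For what it's worth, justify-content does act on the vertical axis under flex-direction: column, but only when the container has free space along that axis. A sketch (class names follow the question; the height value is an assumed placeholder):

```css
.wrapper__header {
  display: flex;
  flex-direction: column;
  /* space-between distributes along the main (vertical) axis, but only
     if the container is taller than its content, hence the explicit
     height (assumed value; min-height or flex: 1 also work). */
  justify-content: space-between;
  height: 300px;
}

/* Alternative without a fixed height: uniform spacing via gap. */
.wrapper__header {
  display: flex;
  flex-direction: column;
  row-gap: 16px;
}
```

In short, if justify-content appears to do nothing in a column container, the container is usually exactly as tall as its content; either give it height or use row-gap for even spacing.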
-
[Resolved] 나도코딩의 자바 기본편 - 풀코스 (20시간)
Hiding "no usages"

This appears in small text above the third and fourth lines, but when I copy and paste the code it does not appear. It runs fine either way; is it okay to just hide it?

- Screenshot

package chap_01;

public class _01_HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World!!!");
    }
}

- When copy-pasted
-
[Unresolved] 데브옵스(DevOps)를 위한 쿠버네티스 마스터
Repeated restart errors after kubeadm init
Hello. Due to a lack of computing resources I used AWS, and kubeadm init itself succeeded, but the control-plane pods in kube-system keep restarting. Even the smallest clue toward resolving this error would be appreciated!!! ㅠㅠ
From the logs so far, my guess is that kube-apiserver restarts, and then all the other control-plane components restart as well because they can no longer communicate.

Environment
- OS: ubuntu server 22.04 LTS (HVM), SSD Volume Type
- CPU 2 cores / memory 4 GB
- Disk: root 10 GB / separate mount 20 GB (space used by docker and k8s)

Docker versions
docker-ce=5:20.10.20~3-0~ubuntu-jammy \
docker-ce-cli=5:20.10.20~3-0~ubuntu-jammy \
containerd.io=1.6.8-1 \
docker-compose-plugin=2.12.0~ubuntu-jammy

Kubernetes versions
kubelet=1.26.0-00 \
kubeadm=1.26.0-00 \
kubelet=1.26.0-00

I already mounted a separate 20 GB disk at /container, the directory docker and k8s will use.

Changed /etc/docker/daemon.json:
{ "data-root": "/container/docker", "exec-opts": ["native.cgroupdriver=systemd"] }

A CRI-related error occurred during kubeadm init; after searching, I resolved it by commenting out the line below.
vi /etc/containerd/config.toml
# disabled_plugins = ["cri"]

Disabled the firewall:
sudo ufw disable

iptables settings: followed the link below.
https://kubernetes.io/ko/docs/setup/production-environment/container-runtimes/#ipv4%EB%A5%BC-%ED%8F%AC%EC%9B%8C%EB%94%A9%ED%95%98%EC%97%AC-iptables%EA%B0%80-%EB%B8%8C%EB%A6%AC%EC%A7%80%EB%90%9C-%ED%8A%B8%EB%9E%98%ED%94%BD%EC%9D%84-%EB%B3%B4%EA%B2%8C-%ED%95%98%EA%B8%B0

Changed the disk space kubernetes uses, following the link below.
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/#the-kubelet-drop-in-file-for-systemd
Added the following to /etc/default/kubelet:
KUBELET_EXTRA_ARGS=--root-dir="/container/k8s"

kubeadm init
kubeadm init --skip-phases=addon/kube-proxy
I originally ran plain kubeadm init, but across repeated reset-and-init cycles kube-proxy kept failing, so the error logs below are from a run using the command that skips the kube-proxy phase.

I could not attach the error logs, so I am pasting them below!

ubuntu@ip-10-0-15-82:~$ kubectl get node
NAME      STATUS     ROLES           AGE     VERSION
master0   NotReady   control-plane   7m10s   v1.26.0

ubuntu@ip-10-0-15-82:~$ kubectl describe node master0
Name:               master0
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=master0
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 19 Dec 2022 06:03:24 +0000
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  master0
  AcquireTime:     <unset>
  RenewTime:       Mon, 19 Dec 2022 06:13:57 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 19 Dec 2022 06:13:52 +0000   Mon, 19 Dec 2022 06:03:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 19 Dec 2022 06:13:52 +0000   Mon, 19 Dec 2022 06:03:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 19 Dec 2022 06:13:52 +0000   Mon, 19 Dec 2022 06:03:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Mon, 19 Dec 2022 06:13:52 +0000   Mon, 19 Dec 2022 06:03:21 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
  InternalIP:  10.0.15.82
  Hostname:    master0
Capacity:
  cpu:                2
  ephemeral-storage:  20470Mi
  hugepages-2Mi:      0
  memory:             4015088Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  19317915617
  hugepages-2Mi:      0
  memory:             3912688Ki
  pods:               110
System Info:
  Machine ID:                 f8b760a7c2274e0cb62621465dbcab92
  System UUID:                ec21d23a-a384-2b77-91df-2f108bd6b565
  Boot ID:                    12f267e0-d0f3-4193-b84a-d7dbcfd74b2b
  Kernel Version:             5.15.0-1026-aws
  OS Image:                   Ubuntu 22.04.1 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.6.8
  Kubelet Version:            v1.26.0
  Kube-Proxy Version:         v1.26.0
Non-terminated Pods:          (4 in total)
  Namespace    Name                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------    ----                             ------------  ----------  ---------------  -------------  ---
  kube-system  etcd-master0                     100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         9m12s
  kube-system  kube-apiserver-master0           250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
  kube-system  kube-controller-manager-master0  200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m7s
  kube-system  kube-scheduler-master0           100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m16s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                650m (32%)  0 (0%)
  memory             100Mi (2%)  0 (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type     Reason                   Age    From             Message
  ----     ------                   ----   ----             -------
  Normal   Starting                 10m    kubelet          Starting kubelet.
  Warning  InvalidDiskCapacity      10m    kubelet          invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  10m    kubelet          Node master0 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    10m    kubelet          Node master0 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     10m    kubelet          Node master0 status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  10m    kubelet          Updated Node Allocatable limit across pods
  Normal   RegisteredNode           9m37s  node-controller  Node master0 event: Registered Node master0 in Controller
  Normal   RegisteredNode           7m10s  node-controller  Node master0 event: Registered Node master0 in Controller
  Normal   RegisteredNode           4m57s  node-controller  Node master0 event: Registered Node master0 in Controller
  Normal   RegisteredNode           3m11s  node-controller  Node master0 event: Registered Node master0 in Controller
  Normal   RegisteredNode           25s    node-controller  Node master0 event: Registered Node master0 in Controller

ubuntu@ip-10-0-15-82:~$ kubectl get po -A
NAMESPACE     NAME                              READY   STATUS             RESTARTS         AGE
kube-system   coredns-787d4945fb-bkhkm          0/1     Pending            0                6m20s
kube-system   coredns-787d4945fb-d4t28          0/1     Pending            0                6m20s
kube-system   etcd-master0                      1/1     Running            20 (78s ago)     5m56s
kube-system   kube-apiserver-master0            1/1     Running            21 (2m22s ago)   7m19s
kube-system   kube-controller-manager-master0   0/1     Running            25 (66s ago)     5m51s
kube-system   kube-scheduler-master0            0/1     CrashLoopBackOff   25 (62s ago)     6m

ubuntu@ip-10-0-15-82:~$ kubectl logs -f kube-apiserver-master0 -n kube-system
I1219 06:08:44.052941 1 server.go:555] external host was not specified, using 10.0.15.82
I1219 06:08:44.053880 1 server.go:163] Version: v1.26.0
I1219 06:08:44.053954 1 server.go:165] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1219 06:08:44.561040 1 shared_informer.go:273] Waiting for caches to sync for node_authorizer
I1219 06:08:44.562267 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1219 06:08:44.562350 1 plugins.go:161] Loaded 12 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
W1219 06:08:44.613792 1 genericapiserver.go:660] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
I1219 06:08:44.615115 1 instance.go:277] Using reconciler: lease
I1219 06:08:44.882566 1 instance.go:621] API group "internal.apiserver.k8s.io" is not enabled, skipping.
I1219 06:08:45.267941 1 instance.go:621] API group "resource.k8s.io" is not enabled, skipping.
W1219 06:08:45.370729 1 genericapiserver.go:660] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
W1219 06:08:45.370756 1 genericapiserver.go:660] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
W1219 06:08:45.372993 1 genericapiserver.go:660] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
W1219 06:08:45.377856 1 genericapiserver.go:660] Skipping API autoscaling/v2beta1 because it has no resources.
W1219 06:08:45.377876 1 genericapiserver.go:660] Skipping API autoscaling/v2beta2 because it has no resources.
W1219 06:08:45.381127 1 genericapiserver.go:660] Skipping API batch/v1beta1 because it has no resources.
W1219 06:08:45.383665 1 genericapiserver.go:660] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
W1219 06:08:45.385890 1 genericapiserver.go:660] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
W1219 06:08:45.385952 1 genericapiserver.go:660] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
W1219 06:08:45.391568 1 genericapiserver.go:660] Skipping API networking.k8s.io/v1beta1 because it has no resources.
W1219 06:08:45.391585 1 genericapiserver.go:660] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
W1219 06:08:45.393562 1 genericapiserver.go:660] Skipping API node.k8s.io/v1beta1 because it has no resources.
W1219 06:08:45.393581 1 genericapiserver.go:660] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1219 06:08:45.393641 1 genericapiserver.go:660] Skipping API policy/v1beta1 because it has no resources.
W1219 06:08:45.399482 1 genericapiserver.go:660] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
W1219 06:08:45.399502 1 genericapiserver.go:660] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1219 06:08:45.401515 1 genericapiserver.go:660] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
W1219 06:08:45.401537 1 genericapiserver.go:660] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1219 06:08:45.407674 1 genericapiserver.go:660] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1219 06:08:45.413355 1 genericapiserver.go:660] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
W1219 06:08:45.413374 1 genericapiserver.go:660] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W1219 06:08:45.419343 1 genericapiserver.go:660] Skipping API apps/v1beta2 because it has no resources.
W1219 06:08:45.419362 1 genericapiserver.go:660] Skipping API apps/v1beta1 because it has no resources.
W1219 06:08:45.421932 1 genericapiserver.go:660] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
W1219 06:08:45.421951 1 genericapiserver.go:660] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
W1219 06:08:45.424241 1 genericapiserver.go:660] Skipping API events.k8s.io/v1beta1 because it has no resources.
W1219 06:08:45.479788 1 genericapiserver.go:660] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
I1219 06:08:46.357006 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
I1219 06:08:46.357217 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
I1219 06:08:46.357675 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/pki/apiserver.crt::/etc/kubernetes/pki/apiserver.key"
I1219 06:08:46.358125 1 secure_serving.go:210] Serving securely on [::]:6443
I1219 06:08:46.358242 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1219 06:08:46.363285 1 gc_controller.go:78] Starting apiserver lease garbage collector
I1219 06:08:46.363570 1 controller.go:80] Starting OpenAPI V3 AggregationController
I1219 06:08:46.363829 1 controller.go:121] Starting legacy_token_tracking_controller
I1219 06:08:46.363850 1 shared_informer.go:273] Waiting for caches to sync for configmaps
I1219 06:08:46.363877 1 apf_controller.go:361] Starting API Priority and Fairness config controller
I1219 06:08:46.363922 1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/etc/kubernetes/pki/front-proxy-client.crt::/etc/kubernetes/pki/front-proxy-client.key"
I1219 06:08:46.364009 1 available_controller.go:494] Starting AvailableConditionController
I1219 06:08:46.364019 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1219 06:08:46.358328 1 autoregister_controller.go:141] Starting autoregister controller
I1219 06:08:46.364040 1 cache.go:32] Waiting for caches to sync for autoregister controller
I1219 06:08:46.366773 1 controller.go:83] Starting OpenAPI AggregationController
I1219 06:08:46.367148 1 customresource_discovery_controller.go:288] Starting DiscoveryController
I1219 06:08:46.367616 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1219 06:08:46.367725 1 shared_informer.go:273] Waiting for caches to sync for cluster_authentication_trust_controller
I1219 06:08:46.367881 1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I1219 06:08:46.367970 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1219 06:08:46.368112 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1219 06:08:46.368191 1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
I1219 06:08:46.383719 1 controller.go:85] Starting OpenAPI controller
I1219 06:08:46.383786 1 controller.go:85] Starting OpenAPI V3 controller
I1219 06:08:46.383812 1 naming_controller.go:291] Starting NamingConditionController
I1219 06:08:46.383830 1 establishing_controller.go:76] Starting EstablishingController
I1219 06:08:46.383852 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I1219 06:08:46.383871 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1219 06:08:46.383893 1 crd_finalizer.go:266] Starting CRDFinalizer
I1219 06:08:46.383978 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
I1219 06:08:46.384084 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
I1219 06:08:46.463884 1 shared_informer.go:280] Caches are synced for configmaps
I1219 06:08:46.463927 1 apf_controller.go:366] Running API Priority and Fairness config worker
I1219 06:08:46.463935 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I1219 06:08:46.464063 1 cache.go:39] Caches are synced for autoregister controller
I1219 06:08:46.465684 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1219 06:08:46.469795 1 shared_informer.go:280] Caches are synced for crd-autoregister
I1219 06:08:46.470150 1 shared_informer.go:280] Caches are synced for node_authorizer
I1219 06:08:46.470302 1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
I1219 06:08:46.470438 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1219 06:08:46.479224 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I1219 06:08:47.060404 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1219 06:08:47.370998 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W1219 06:09:28.894719 1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {"Addr": "127.0.0.1:2379","ServerName": "127.0.0.1","Attributes": null,"BalancerAttributes": null,"Type": 0,"Metadata": null}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W1219 06:09:28.895017 1 logging.go:59] [core] [Channel #13 SubChannel #14] grpc: addrConn.createTransport failed to connect to {"Addr": "127.0.0.1:2379","ServerName": "127.0.0.1","Attributes": null,"BalancerAttributes": null,"Type": 0,"Metadata": null}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
=== (omitted) ===
W1219 06:12:22.066087 1 logging.go:59] [core] [Channel #16 SubChannel #17] grpc: addrConn.createTransport failed to connect to {"Addr": "127.0.0.1:2379","ServerName": "127.0.0.1","Attributes": null,"BalancerAttributes": null,"Type": 0,"Metadata": null}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
{"level":"warn","ts":"2022-12-19T06:12:22.345Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2022-12-19T06:12:24.346Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b41c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2022-12-19T06:12:26.352Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2022-12-19T06:12:27.457Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00354fc00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
E1219 06:12:27.458799 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
E1219 06:12:27.458820 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E1219 06:12:27.458843 1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 6.269µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>
E1219 06:12:27.460034 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
I1219 06:12:27.461932 1 trace.go:219] Trace[630402872]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:9448a7a5-4c6b-490f-9aff-cd8384091228,client:10.0.15.82,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master0,user-agent:kubelet/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:PUT (19-Dec-2022 06:12:17.458) (total time: 10003ms):
Trace[630402872]: ["GuaranteedUpdate etcd3" audit-id:9448a7a5-4c6b-490f-9aff-cd8384091228,key:/leases/kube-node-lease/master0,type:*coordination.Lease,resource:leases.coordination.k8s.io 10003ms (06:12:17.458)
Trace[630402872]: ---"Txn call failed" err:context deadline exceeded 9998ms (06:12:27.458)]
Trace[630402872]: [10.003519094s] [10.003519094s] END
E1219 06:12:27.462368 1 timeout.go:142] post-timeout activity - time-elapsed: 3.532362ms, PUT "/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master0" result: <nil>
{"level":"warn","ts":"2022-12-19T06:12:28.352Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b41c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2022-12-19T06:12:30.242Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2022-12-19T06:12:30.359Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b41c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2022-12-19T06:12:32.365Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2022-12-19T06:12:34.366Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b41c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2022-12-19T06:12:34.905Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001c45180/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
E1219 06:12:34.905188 1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
E1219 06:12:34.905331 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
E1219 06:12:34.906483 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E1219 06:12:34.907611 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
I1219 06:12:34.909171 1 trace.go:219] Trace[1232755934]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:efcbbe67-217b-4534-8361-f0ca8603169e,client:10.0.15.82,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/etcd-master0,user-agent:kubelet/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:11:34.904) (total time: 60004ms):
Trace[1232755934]: [1m0.004852843s] [1m0.004852843s] END
E1219 06:12:34.909377 1 timeout.go:142] post-timeout activity - time-elapsed: 3.983518ms, GET "/api/v1/namespaces/kube-system/pods/etcd-master0" result: <nil>
{"level":"warn","ts":"2022-12-19T06:12:36.372Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2022-12-19T06:12:37.458Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00354fc00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
E1219 06:12:37.459896 1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
E1219 06:12:37.460058 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
E1219 06:12:37.461117 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E1219 06:12:37.462667 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
I1219 06:12:37.464323 1 trace.go:219] Trace[688853594]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:c49906de-6377-43e9-86c6-8f053f5ea689,client:10.0.15.82,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master0,user-agent:kubelet/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:12:27.458) (total time: 10005ms):
Trace[688853594]: [10.005594573s] [10.005594573s] END
E1219 06:12:37.464689 1 timeout.go:142] post-timeout activity - time-elapsed: 5.065927ms, GET "/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master0" result: <nil>
{"level":"warn","ts":"2022-12-19T06:12:37.984Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001ba8000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
E1219 06:12:37.984376 1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
E1219 06:12:37.984522 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
E1219 06:12:37.985741 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
I1219 06:12:37.987578 1 controller.go:615] quota admission added evaluator for: namespaces
E1219 06:12:37.988356 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
I1219 06:12:37.990053 1 trace.go:219] Trace[931836157]: "Get" accept:application/vnd.kubernetes.protobuf, /,audit-id:90475625-91a7-4e3d-b74c-4c8971819dd4,client:::1,protocol:HTTP/2.0,resource:namespaces,scope:resource,url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:11:37.983) (total time: 60006ms):
Trace[931836157]: [1m0.006350485s] [1m0.006350485s] END
E1219 06:12:37.990484 1 timeout.go:142] post-timeout activity - time-elapsed: 4.870058ms, GET "/api/v1/namespaces/default" result: <nil>
{"level":"warn","ts":"2022-12-19T06:12:38.373Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b41c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer
error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}I1219 06:12:39.988922 1 trace.go:219] Trace[448655361]: "List" accept:application/vnd.kubernetes.protobuf, /,audit-id:8b16c9b2-4f85-4e5d-918a-2d28acd753bb,client:::1,protocol:HTTP/2.0,resource:services,scope:cluster,url:/api/v1/services,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:LIST (19-Dec-2022 06:12:37.659) (total time: 2329ms):Trace[448655361]: ["List(recursive=true) etcd3" audit-id:8b16c9b2-4f85-4e5d-918a-2d28acd753bb,key:/services/specs,resourceVersion:,resourceVersionMatch:,limit:0,continue: 2329ms (06:12:37.659)]Trace[448655361]: [2.329166967s] [2.329166967s] END{"level":"warn","ts":"2022-12-19T06:12:40.242Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}I1219 06:12:40.249474 1 trace.go:219] Trace[239754632]: "List" accept:application/vnd.kubernetes.protobuf, /,audit-id:30b9e937-c36a-4398-9054-4a1cb1bd5edf,client:::1,protocol:HTTP/2.0,resource:resourcequotas,scope:namespace,url:/api/v1/namespaces/default/resourcequotas,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:LIST (19-Dec-2022 06:12:37.988) (total time: 2261ms):Trace[239754632]: ["List(recursive=true) etcd3" audit-id:30b9e937-c36a-4398-9054-4a1cb1bd5edf,key:/resourcequotas/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: 2261ms (06:12:37.988)]Trace[239754632]: [2.261402138s] [2.261402138s] END{"level":"warn","ts":"2022-12-19T06:12:40.380Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker 
failed","target":"etcd-endpoints://0xc0047b41c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}{"level":"warn","ts":"2022-12-19T06:12:42.386Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}I1219 06:12:42.442272 1 trace.go:219] Trace[1256675541]: "Get" accept:application/vnd.kubernetes.protobuf, /,audit-id:0176b32b-9911-4efd-a652-a65e9b8e5358,client:::1,protocol:HTTP/2.0,resource:namespaces,scope:resource,url:/api/v1/namespaces/kube-system,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:11:57.961) (total time: 44480ms):Trace[1256675541]: ---"About to write a response" 44480ms (06:12:42.442)Trace[1256675541]: [44.480780934s] [44.480780934s] ENDI1219 06:12:42.446847 1 trace.go:219] Trace[1993246150]: "Create" accept:application/vnd.kubernetes.protobuf, /,audit-id:037244a1-0427-4f7b-a27f-a28053080851,client:::1,protocol:HTTP/2.0,resource:namespaces,scope:resource,url:/api/v1/namespaces,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:POST (19-Dec-2022 06:12:37.987) (total time: 4459ms):Trace[1993246150]: ["Create etcd3" audit-id:037244a1-0427-4f7b-a27f-a28053080851,key:/namespaces/default,type:*core.Namespace,resource:namespaces 2195ms (06:12:40.251)Trace[1993246150]: ---"Txn call succeeded" 2194ms (06:12:42.445)]Trace[1993246150]: [4.459769012s] [4.459769012s] ENDI1219 06:12:42.674053 1 trace.go:219] Trace[1794029875]: "Get" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:e430c809-7f0f-466e-b055-2b6b9141ff8c,client:10.0.15.82,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-controller-manager-master0,user-agent:kubelet/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:12:34.909) (total time: 7765ms):Trace[1794029875]: ---"About to write a response" 7764ms (06:12:42.673)Trace[1794029875]: [7.765007745s] [7.765007745s] END{"level":"warn","ts":"2022-12-19T06:12:44.393Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}{"level":"warn","ts":"2022-12-19T06:12:44.971Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00354fc00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}I1219 06:12:44.971449 1 trace.go:219] Trace[994491080]: "Update" accept:application/vnd.kubernetes.protobuf, /,audit-id:409a08f2-ec78-4882-9bfa-9ce30a084b98,client:::1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-apiserver-sbw72mnicesx7ail7r675e52gy,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:PUT (19-Dec-2022 06:12:10.970) (total time: 34001ms):Trace[994491080]: ["GuaranteedUpdate etcd3" 
audit-id:409a08f2-ec78-4882-9bfa-9ce30a084b98,key:/leases/kube-system/kube-apiserver-sbw72mnicesx7ail7r675e52gy,type:*coordination.Lease,resource:leases.coordination.k8s.io 34000ms (06:12:10.970)Trace[994491080]: ---"Txn call failed" err:context deadline exceeded 34000ms (06:12:44.971)]Trace[994491080]: [34.001140432s] [34.001140432s] ENDE1219 06:12:44.971767 1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 10.899µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>E1219 06:12:44.972574 1 controller.go:189] failed to update lease, error: Timeout: request did not complete within requested timeout - context deadline exceededI1219 06:12:46.648431 1 trace.go:219] Trace[1569528607]: "Update" accept:application/vnd.kubernetes.protobuf, /,audit-id:66c304d4-07a7-4651-a080-b0a6fe1514d1,client:::1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-apiserver-sbw72mnicesx7ail7r675e52gy,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:PUT (19-Dec-2022 06:12:44.973) (total time: 1675ms):Trace[1569528607]: ["GuaranteedUpdate etcd3" audit-id:66c304d4-07a7-4651-a080-b0a6fe1514d1,key:/leases/kube-system/kube-apiserver-sbw72mnicesx7ail7r675e52gy,type:*coordination.Lease,resource:leases.coordination.k8s.io 1675ms (06:12:44.973)Trace[1569528607]: ---"Txn call completed" 1674ms (06:12:46.648)]Trace[1569528607]: [1.675226852s] [1.675226852s] ENDI1219 06:12:46.649989 1 trace.go:219] Trace[424403]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:7ce2d09e-ef67-46b0-9359-d7bb18552cd1,client:10.0.15.82,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master0,user-agent:kubelet/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:12:37.660) (total time: 8989ms):Trace[424403]: ---"About to write a response" 8989ms (06:12:46.649)Trace[424403]: [8.989433007s] 
[8.989433007s] ENDI1219 06:12:49.083394 1 trace.go:219] Trace[50133606]: "Get" accept:application/vnd.kubernetes.protobuf, /,audit-id:790109d6-02cb-46d3-b31f-b1823eea9276,client:::1,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:12:42.453) (total time: 6630ms):Trace[50133606]: ---"About to write a response" 6630ms (06:12:49.083)Trace[50133606]: [6.630185906s] [6.630185906s] END
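The repeated `connection refused` errors against 127.0.0.1:2379 in the log above mean the kube-apiserver cannot reach its local etcd; the Handler timeouts and failed lease updates are downstream symptoms. As a minimal first check (a sketch assuming shell access on the control-plane node; the variable names are mine, not from the log), you can verify whether anything is listening on etcd's default client port:

```shell
#!/bin/sh
# Minimal sketch: is anything listening on etcd's default client port?
# 2379 is the address the apiserver is dialing in the log above.
PORT=2379
if command -v ss >/dev/null 2>&1 && ss -tln 2>/dev/null | grep -q ":$PORT "; then
  status="listening"
else
  status="not listening"
fi
echo "etcd client port $PORT: $status"
```

If nothing is listening, the next place to look is the etcd static pod itself (for example its container status and logs via the node's container runtime); the apiserver errors above will clear once etcd is back up.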
-
Resolved: [리뉴얼] React로 NodeBird SNS 만들기
Hello.
Hello! This isn't related to the lecture, but I didn't know where else to ask, so I'm leaving my question here. I bought the current edition of the 노드 교과서 (Node.js Textbook) as an ebook and have been enjoying it. When the 3rd edition comes out, will a new ebook be released as well? If so, I'd also like to know the expected release date for the ebook. Thank you!!
-
Unresolved: 프로젝트로 배우는 React.js
Regarding title_like in search
- Please leave questions related to the course; the more detail, the better! - First, search to see whether a similar question has already been asked. - Let's keep the community respectful. - Note: for Inflearn service inquiries, please use 1:1 문의하기 (1:1 inquiries). In a title_like search, can only the matching part of each result be shown in a different color? For example, the "nfl" part of "inflearn". Right now the match is bold, but could it be colored instead?
-
Unresolved: 쉽게 시작하는 쿠버네티스(v1.30) - {{ x86-64, arm64 }}
vagrant up
[Before asking] 1. Is this question about content covered in the course? [Yes | No] 2. Is it absent from the question board and FAQ? [Yes | No] 3. Have you read the guide on asking good questions? [Yes | No] (https://www.inflearn.com/blogs/1719) 4. Note: for Inflearn service inquiries, please use 1:1 inquiries. [Question] During `vagrant up` I keep getting ssh timeouts. After the timeout I reinstall and everything completes, but when I go into one of the k8s nodes and run `kubectl get nodes`, I get `command not found`. Does this mean the installation failed? I've tried my work PC, my work Mac, and my personal PC, and it won't install on any of them. At this rate I can't follow the course.
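`command not found` usually means kubectl was never installed on the node you SSH'd into, most often because the provisioning scripts were cut short by the earlier ssh timeouts. A small sketch for telling the cases apart (the `check_cmd` helper name is mine, not from the course):

```shell
#!/bin/sh
# check_cmd: report whether a binary is on PATH, to distinguish
# "provisioning never installed it" from "installed but misconfigured".
check_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found at $(command -v "$1")"
  else
    echo "$1: not found (provisioning likely did not finish)"
  fi
}

check_cmd sh       # a binary that should always exist, as a sanity check
check_cmd kubectl  # the binary in question
```

If kubectl exists but `kubectl get nodes` fails differently (e.g. connection refused), the usual fix on a kubeadm control plane is `export KUBECONFIG=/etc/kubernetes/admin.conf`; if it is missing entirely, re-run `vagrant up --provision` after resolving the ssh timeout.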
-
Unresolved: [코드팩토리] [초급] Flutter 3.0 앱 개발 - 10개의 프로젝트로 오늘 초보 탈출!
Question about the run console window
Hello, I'm taking the course in Android Studio, and the run console output is too cluttered, as shown above. I'd like to clean it up, but I couldn't find how in the settings. Is there a way to filter the console so that only the output I actually need is shown?
-
Unresolved: 스프링 입문 - 코드로 배우는 스프링 부트, 웹 MVC, DB 접근 기술
hello-mvc error
To help other learners and get better answers, please check the following before asking: 1. Leave questions related to the course content. 2. Check the question board and the FAQ (link) first. (FAQ link: https://bit.ly/3fX6ygx) 3. Read the guide on asking good questions (link) first. (Guide link: https://bit.ly/2UfeqCG) Please delete the text above when asking and fill in the following. ========================================= [Question template] 1. Is this question about the course content? (Yes/No) 2. Is it absent from the question board and FAQ? (Yes/No) 3. Have you read the guide on asking good questions? (Yes/No) [Question] Error: I made the HTML file name and the controller's return value identical, but I still get an error.
-
Resolved: 스프링 MVC 1편 - 백엔드 웹 개발 핵심 기술
Requesting help resolving "Port 8080 was already in use"
Description: Web server failed to start. Port 8080 was already in use. (1) After creating and saving a static HTML page and running the app, the error above appeared. So I ran `sudo kill` in the terminal, and IntelliJ itself shut down. (2) After restarting IntelliJ, I added `server.port: 8081` to application.yml and ran again. The project now starts without problems, but the static HTML page still loads normally at 8080, while 8081 shows "This site can't be reached". I can't make sense of this, and neither Googling nor the FAQ helped, so I'm asking here: (1) What does it mean that removing the PID on 8080 with `sudo kill -9` also shut down IntelliJ? (2) What do I need to do so the project runs without adding 8081 to application.yml? (3) With 8081 added, is it normal for the static HTML to open on 8080? If so, please explain why. (For reference, before this happened, opening the static HTML directly in the browser from IntelliJ served it on port 63342, so I changed the Built-in Server Port in IntelliJ preferences from 63342 to 8080.)
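Given that the IntelliJ built-in server was manually moved to port 8080, the process listening on 8080 was IntelliJ itself, so `sudo kill -9` on that PID killed the IDE (question 1), and static pages kept being served from 8080 while the Spring app had to move to 8081 (question 3). A hedged sketch of the safer workflow: inspect what owns the port before killing anything, and prefer SIGTERM over `-9` (the throwaway `sleep` process below only stands in for a real stray server):

```shell
#!/bin/sh
# Show the owner of port 8080 before killing. The COMMAND column in lsof
# output tells you whether it is a stray `java` (Spring Boot) process or
# the IDE's own built-in server.
PORT=8080
if command -v lsof >/dev/null 2>&1; then
  lsof -nP -iTCP:"$PORT" -sTCP:LISTEN || echo "nothing listening on $PORT"
fi

# Demonstrate graceful termination on a throwaway process (SIGTERM, not -9):
sleep 300 &
pid=$!
kill -15 "$pid"
wait "$pid" 2>/dev/null || true
echo "stopped pid $pid"
```

Resetting the IntelliJ Built-in Server Port back to its 63342 default frees 8080 for the Spring app, so no `server.port` override is needed (question 2).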
-
Unresolved: Vue.js 중급 강좌 - 웹앱 제작으로 배워보는 Vue.js, ES6, Vuex
Requesting GitHub access~
Inflearn ID: chillycorn / Inflearn email: chillycorn@g.skku.edu / GitHub ID: happycrab@naver.com / GitHub Username: jjanghee