Ask & Answer
A community of 1.64 million! Come discuss together.
-
Unresolved · MongoDB from the Basics to Practice (feat. Node.js)
Why doesn't a query match when the key contains a dot?
db.users.insertOne({name:{first:"Elon", last:"Musk"}})
{ _id: ObjectId("63a00859907755c4cf9829a3"), name: { first: 'Elon', last: 'Musk' } }

db.users.insertOne({"name.first":"Elon", "name.last":"Musk"})
{ acknowledged: true, insertedId: ObjectId("63a00bd9907755c4cf9829a7") }

db.users.findOne({"name.first":"Elon"})
{ _id: ObjectId("63a00859907755c4cf9829a3"), name: { first: 'Elon', last: 'Musk' } }

When I run the queries above, why does only one document match? And how should I query the key that was created literally as "name.first"?
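For context: dot notation in a query filter always traverses into embedded documents and never matches a field whose own name contains a dot — which is why the first (nested) document is the one returned above. On MongoDB 5.0+, one way to match the literal "name.first" key is the $getField aggregation operator inside $expr, which treats the field name as opaque. A sketch (not from the original post):

    db.users.findOne({
      $expr: {
        $eq: [
          { $getField: { field: "name.first", input: "$$ROOT" } },
          "Elon"
        ]
      }
    })

This returns the second document, since $getField reads the top-level field literally named "name.first" instead of walking into the name subdocument.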
-
Resolved · [CodeCamp] Start with the Pre-Camp
Is there any way other than giving the divideLine elements margin to space them apart?
In part 5 of the Cyworld build, you separated the elements clustered around the divideLine by giving them margin. Instead of spacing them with arbitrary margin values, I'd like to spread the areas out evenly with justify-content: space-between, but since wrapper__header is set to flex-direction: column, justify-content doesn't do it. Is there another way?
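One margin-free option: the gap property distributes fixed spacing between flex children regardless of direction, and justify-content: space-between only has an effect in a column container when the container has extra height to hand out. A sketch using the class name from the question (values assumed):

    .wrapper__header {
      display: flex;
      flex-direction: column;
      gap: 16px;                       /* even spacing, no per-item margins */
    }

    /* or, if the header is given a height to distribute: */
    .wrapper__header {
      display: flex;
      flex-direction: column;
      height: 100%;
      justify-content: space-between;  /* now has room to space items out */
    }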
-
Resolved · Nado Coding's Java Basics - Full Course (20 hours)
Hiding "no usages"
This shows up in small text above lines 3 and 4, but when I copy and paste the code it doesn't appear... It runs fine either way, so is it okay to hide it??

- Screenshot:

package chap_01;

public class _01_HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World!!!");
    }
}

- When copy-pasted:
-
Unresolved · Kubernetes Master for DevOps
Repeated restart errors after kubeadm init
Hello. I was short on local compute resources, so I used AWS to run kubeadm init. The init itself succeeded, but the control-plane pods in kube-system keep restarting. Any hint toward solving this, however small, would be appreciated!!! From the logs so far, my guess is that kube-apiserver restarts first, and then all the other control-plane components lose connectivity and restart as well.

Environment
- OS: Ubuntu Server 22.04 LTS (HVM), SSD Volume Type
- CPU: 2 cores / memory: 4 GB
- Disk: root 10 GB / separate mount 20 GB (space used by Docker and k8s)

Docker version
docker-ce=5:20.10.20~3-0~ubuntu-jammy \
docker-ce-cli=5:20.10.20~3-0~ubuntu-jammy \
containerd.io=1.6.8-1 \
docker-compose-plugin=2.12.0~ubuntu-jammy

Kubernetes version
kubelet=1.26.0-00 \
kubeadm=1.26.0-00 \
kubelet=1.26.0-00

What I have done so far:

- Already mounted a separate 20 GB disk at /container, the directory Docker and k8s will use.
- Changed /etc/docker/daemon.json:
  { "data-root": "/container/docker", "exec-opts": ["native.cgroupdriver=systemd"] }
- A CRI-related error came up during kubeadm init; after searching, I resolved it by commenting out the line below in /etc/containerd/config.toml:
  # disabled_plugins = ["cri"]
- Disabled the firewall: sudo ufw disable
- iptables settings: followed this link:
  https://kubernetes.io/ko/docs/setup/production-environment/container-runtimes/#ipv4%EB%A5%BC-%ED%8F%AC%EC%9B%8C%EB%94%A9%ED%95%98%EC%97%AC-iptables%EA%B0%80-%EB%B8%8C%EB%A6%AC%EC%A7%80%EB%90%9C-%ED%8A%B8%EB%9E%98%ED%94%BD%EC%9D%84-%EB%B3%B4%EA%B2%8C-%ED%95%98%EA%B8%B0
- Changed the disk space Kubernetes uses, following this link:
  https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/#the-kubelet-drop-in-file-for-systemd
  Added to /etc/default/kubelet:
  KUBELET_EXTRA_ARGS=--root-dir="/container/k8s"
- kubeadm init:
  kubeadm init --skip-phases=addon/kube-proxy
  I first ran a plain kubeadm init, but after repeated reset-and-init cycles kube-proxy kept failing, so the error log below comes from a run with the kube-proxy phase skipped.

I can't attach the error log as a file, so I'm pasting it below!

ubuntu@ip-10-0-15-82:~$ kubectl get node
NAME      STATUS     ROLES           AGE     VERSION
master0   NotReady   control-plane   7m10s   v1.26.0

ubuntu@ip-10-0-15-82:~$ kubectl describe node master0
Name:               master0
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=master0
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 19 Dec 2022 06:03:24 +0000
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  master0
  AcquireTime:     <unset>
  RenewTime:       Mon, 19 Dec 2022 06:13:57 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 19 Dec 2022 06:13:52 +0000   Mon, 19 Dec 2022 06:03:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 19 Dec 2022 06:13:52 +0000   Mon, 19 Dec 2022 06:03:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 19 Dec 2022 06:13:52 +0000   Mon, 19 Dec 2022 06:03:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Mon, 19 Dec 2022 06:13:52 +0000   Mon, 19 Dec 2022 06:03:21 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
  InternalIP:  10.0.15.82
  Hostname:    master0
Capacity:
  cpu:                2
  ephemeral-storage:  20470Mi
  hugepages-2Mi:      0
  memory:             4015088Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  19317915617
  hugepages-2Mi:      0
  memory:             3912688Ki
  pods:               110
System Info:
  Machine ID:                 f8b760a7c2274e0cb62621465dbcab92
  System UUID:                ec21d23a-a384-2b77-91df-2f108bd6b565
  Boot ID:                    12f267e0-d0f3-4193-b84a-d7dbcfd74b2b
  Kernel Version:             5.15.0-1026-aws
  OS Image:                   Ubuntu 22.04.1 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.6.8
  Kubelet Version:            v1.26.0
  Kube-Proxy Version:         v1.26.0
Non-terminated Pods:          (4 in total)
  Namespace    Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------    ----                              ------------  ----------  ---------------  -------------  ---
  kube-system  etcd-master0                      100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         9m12s
  kube-system  kube-apiserver-master0            250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
  kube-system  kube-controller-manager-master0   200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m7s
  kube-system  kube-scheduler-master0            100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m16s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                650m (32%)  0 (0%)
  memory             100Mi (2%)  0 (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type     Reason                   Age    From             Message
  ----     ------                   ----   ----             -------
  Normal   Starting                 10m    kubelet          Starting kubelet.
  Warning  InvalidDiskCapacity      10m    kubelet          invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  10m    kubelet          Node master0 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    10m    kubelet          Node master0 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     10m    kubelet          Node master0 status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  10m    kubelet          Updated Node Allocatable limit across pods
  Normal   RegisteredNode           9m37s  node-controller  Node master0 event: Registered Node master0 in Controller
  Normal   RegisteredNode           7m10s  node-controller  Node master0 event: Registered Node master0 in Controller
  Normal   RegisteredNode           4m57s  node-controller  Node master0 event: Registered Node master0 in Controller
  Normal   RegisteredNode           3m11s  node-controller  Node master0 event: Registered Node master0 in Controller
  Normal   RegisteredNode           25s    node-controller  Node master0 event: Registered Node master0 in Controller

ubuntu@ip-10-0-15-82:~$ kubectl get po -A
NAMESPACE     NAME                              READY   STATUS             RESTARTS         AGE
kube-system   coredns-787d4945fb-bkhkm          0/1     Pending            0                6m20s
kube-system   coredns-787d4945fb-d4t28          0/1     Pending            0                6m20s
kube-system   etcd-master0                      1/1     Running            20 (78s ago)     5m56s
kube-system   kube-apiserver-master0            1/1     Running            21 (2m22s ago)   7m19s
kube-system   kube-controller-manager-master0   0/1     Running            25 (66s ago)     5m51s
kube-system   kube-scheduler-master0            0/1     CrashLoopBackOff   25 (62s ago)     6m

ubuntu@ip-10-0-15-82:~$ kubectl logs -f kube-apiserver-master0 -n kube-system
I1219 06:08:44.052941 1 server.go:555] external host was not specified, using 10.0.15.82
I1219 06:08:44.053880 1 server.go:163] Version: v1.26.0
I1219 06:08:44.053954 1 server.go:165] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1219 06:08:44.561040 1 shared_informer.go:273] Waiting for caches to sync for node_authorizer
I1219 06:08:44.562267 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1219 06:08:44.562350 1 plugins.go:161] Loaded 12 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
W1219 06:08:44.613792 1 genericapiserver.go:660] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
I1219 06:08:44.615115 1 instance.go:277] Using reconciler: lease
I1219 06:08:44.882566 1 instance.go:621] API group "internal.apiserver.k8s.io" is not enabled, skipping.
I1219 06:08:45.267941 1 instance.go:621] API group "resource.k8s.io" is not enabled, skipping.
W1219 06:08:45.370729 1 genericapiserver.go:660] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
W1219 06:08:45.370756 1 genericapiserver.go:660] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
W1219 06:08:45.372993 1 genericapiserver.go:660] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
W1219 06:08:45.377856 1 genericapiserver.go:660] Skipping API autoscaling/v2beta1 because it has no resources.
W1219 06:08:45.377876 1 genericapiserver.go:660] Skipping API autoscaling/v2beta2 because it has no resources.
W1219 06:08:45.381127 1 genericapiserver.go:660] Skipping API batch/v1beta1 because it has no resources.
W1219 06:08:45.383665 1 genericapiserver.go:660] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
W1219 06:08:45.385890 1 genericapiserver.go:660] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
W1219 06:08:45.385952 1 genericapiserver.go:660] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
W1219 06:08:45.391568 1 genericapiserver.go:660] Skipping API networking.k8s.io/v1beta1 because it has no resources.
W1219 06:08:45.391585 1 genericapiserver.go:660] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
W1219 06:08:45.393562 1 genericapiserver.go:660] Skipping API node.k8s.io/v1beta1 because it has no resources.
W1219 06:08:45.393581 1 genericapiserver.go:660] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1219 06:08:45.393641 1 genericapiserver.go:660] Skipping API policy/v1beta1 because it has no resources.
W1219 06:08:45.399482 1 genericapiserver.go:660] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
W1219 06:08:45.399502 1 genericapiserver.go:660] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1219 06:08:45.401515 1 genericapiserver.go:660] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
W1219 06:08:45.401537 1 genericapiserver.go:660] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1219 06:08:45.407674 1 genericapiserver.go:660] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1219 06:08:45.413355 1 genericapiserver.go:660] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
W1219 06:08:45.413374 1 genericapiserver.go:660] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W1219 06:08:45.419343 1 genericapiserver.go:660] Skipping API apps/v1beta2 because it has no resources.
W1219 06:08:45.419362 1 genericapiserver.go:660] Skipping API apps/v1beta1 because it has no resources.
W1219 06:08:45.421932 1 genericapiserver.go:660] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
W1219 06:08:45.421951 1 genericapiserver.go:660] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
W1219 06:08:45.424241 1 genericapiserver.go:660] Skipping API events.k8s.io/v1beta1 because it has no resources.
W1219 06:08:45.479788 1 genericapiserver.go:660] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
I1219 06:08:46.357006 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
I1219 06:08:46.357217 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
I1219 06:08:46.357675 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/pki/apiserver.crt::/etc/kubernetes/pki/apiserver.key"
I1219 06:08:46.358125 1 secure_serving.go:210] Serving securely on [::]:6443
I1219 06:08:46.358242 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1219 06:08:46.363285 1 gc_controller.go:78] Starting apiserver lease garbage collector
I1219 06:08:46.363570 1 controller.go:80] Starting OpenAPI V3 AggregationController
I1219 06:08:46.363829 1 controller.go:121] Starting legacy_token_tracking_controller
I1219 06:08:46.363850 1 shared_informer.go:273] Waiting for caches to sync for configmaps
I1219 06:08:46.363877 1 apf_controller.go:361] Starting API Priority and Fairness config controller
I1219 06:08:46.363922 1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/etc/kubernetes/pki/front-proxy-client.crt::/etc/kubernetes/pki/front-proxy-client.key"
I1219 06:08:46.364009 1 available_controller.go:494] Starting AvailableConditionController
I1219 06:08:46.364019 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1219 06:08:46.358328 1 autoregister_controller.go:141] Starting autoregister controller
I1219 06:08:46.364040 1 cache.go:32] Waiting for caches to sync for autoregister controller
I1219 06:08:46.366773 1 controller.go:83] Starting OpenAPI AggregationController
I1219 06:08:46.367148 1 customresource_discovery_controller.go:288] Starting DiscoveryController
I1219 06:08:46.367616 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1219 06:08:46.367725 1 shared_informer.go:273] Waiting for caches to sync for cluster_authentication_trust_controller
I1219 06:08:46.367881 1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I1219 06:08:46.367970 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1219 06:08:46.368112 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1219 06:08:46.368191 1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
I1219 06:08:46.383719 1 controller.go:85] Starting OpenAPI controller
I1219 06:08:46.383786 1 controller.go:85] Starting OpenAPI V3 controller
I1219 06:08:46.383812 1 naming_controller.go:291] Starting NamingConditionController
I1219 06:08:46.383830 1 establishing_controller.go:76] Starting EstablishingController
I1219 06:08:46.383852 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I1219 06:08:46.383871 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1219 06:08:46.383893 1 crd_finalizer.go:266] Starting CRDFinalizer
I1219 06:08:46.383978 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
I1219 06:08:46.384084 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
I1219 06:08:46.463884 1 shared_informer.go:280] Caches are synced for configmaps
I1219 06:08:46.463927 1 apf_controller.go:366] Running API Priority and Fairness config worker
I1219 06:08:46.463935 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I1219 06:08:46.464063 1 cache.go:39] Caches are synced for autoregister controller
I1219 06:08:46.465684 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1219 06:08:46.469795 1 shared_informer.go:280] Caches are synced for crd-autoregister
I1219 06:08:46.470150 1 shared_informer.go:280] Caches are synced for node_authorizer
I1219 06:08:46.470302 1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
I1219 06:08:46.470438 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1219 06:08:46.479224 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I1219 06:08:47.060404 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1219 06:08:47.370998 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W1219 06:09:28.894719 1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {"Addr": "127.0.0.1:2379","ServerName": "127.0.0.1","Attributes": null,"BalancerAttributes": null,"Type": 0,"Metadata": null}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W1219 06:09:28.895017 1 logging.go:59] [core] [Channel #13 SubChannel #14] grpc: addrConn.createTransport failed to connect to {"Addr": "127.0.0.1:2379","ServerName": "127.0.0.1","Attributes": null,"BalancerAttributes": null,"Type": 0,"Metadata": null}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
=== omitted ===
W1219 06:12:22.066087 1 logging.go:59] [core] [Channel #16 SubChannel #17] grpc: addrConn.createTransport failed to connect to {"Addr": "127.0.0.1:2379","ServerName": "127.0.0.1","Attributes": null,"BalancerAttributes": null,"Type": 0,"Metadata": null}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
{"level":"warn","ts":"2022-12-19T06:12:22.345Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2022-12-19T06:12:24.346Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b41c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2022-12-19T06:12:26.352Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2022-12-19T06:12:27.457Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00354fc00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
E1219 06:12:27.458799 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
E1219 06:12:27.458820 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E1219 06:12:27.458843 1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 6.269µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>
E1219 06:12:27.460034 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
I1219 06:12:27.461932 1 trace.go:219] Trace[630402872]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:9448a7a5-4c6b-490f-9aff-cd8384091228,client:10.0.15.82,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master0,user-agent:kubelet/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:PUT (19-Dec-2022 06:12:17.458) (total time: 10003ms):
Trace[630402872]: ["GuaranteedUpdate etcd3" audit-id:9448a7a5-4c6b-490f-9aff-cd8384091228,key:/leases/kube-node-lease/master0,type:*coordination.Lease,resource:leases.coordination.k8s.io 10003ms (06:12:17.458)
Trace[630402872]: ---"Txn call failed" err:context deadline exceeded 9998ms (06:12:27.458)]
Trace[630402872]: [10.003519094s] [10.003519094s] END
E1219 06:12:27.462368 1 timeout.go:142] post-timeout activity - time-elapsed: 3.532362ms, PUT "/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master0" result: <nil>
{"level":"warn","ts":"2022-12-19T06:12:28.352Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b41c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2022-12-19T06:12:30.242Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2022-12-19T06:12:30.359Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b41c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2022-12-19T06:12:32.365Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2022-12-19T06:12:34.366Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b41c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2022-12-19T06:12:34.905Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001c45180/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
E1219 06:12:34.905188 1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
E1219 06:12:34.905331 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
E1219 06:12:34.906483 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E1219 06:12:34.907611 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
I1219 06:12:34.909171 1 trace.go:219] Trace[1232755934]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:efcbbe67-217b-4534-8361-f0ca8603169e,client:10.0.15.82,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/etcd-master0,user-agent:kubelet/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:11:34.904) (total time: 60004ms):
Trace[1232755934]: [1m0.004852843s] [1m0.004852843s] END
E1219 06:12:34.909377 1 timeout.go:142] post-timeout activity - time-elapsed: 3.983518ms, GET "/api/v1/namespaces/kube-system/pods/etcd-master0" result: <nil>
{"level":"warn","ts":"2022-12-19T06:12:36.372Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2022-12-19T06:12:37.458Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00354fc00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
E1219 06:12:37.459896 1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
E1219 06:12:37.460058 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
E1219 06:12:37.461117 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E1219 06:12:37.462667 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
I1219 06:12:37.464323 1 trace.go:219] Trace[688853594]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:c49906de-6377-43e9-86c6-8f053f5ea689,client:10.0.15.82,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master0,user-agent:kubelet/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:12:27.458) (total time: 10005ms):
Trace[688853594]: [10.005594573s] [10.005594573s] END
E1219 06:12:37.464689 1 timeout.go:142] post-timeout activity - time-elapsed: 5.065927ms, GET "/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master0" result: <nil>
{"level":"warn","ts":"2022-12-19T06:12:37.984Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001ba8000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
E1219 06:12:37.984376 1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
E1219 06:12:37.984522 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
E1219 06:12:37.985741 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
I1219 06:12:37.987578 1 controller.go:615] quota admission added evaluator for: namespaces
E1219 06:12:37.988356 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
I1219 06:12:37.990053 1 trace.go:219] Trace[931836157]: "Get" accept:application/vnd.kubernetes.protobuf, /,audit-id:90475625-91a7-4e3d-b74c-4c8971819dd4,client:::1,protocol:HTTP/2.0,resource:namespaces,scope:resource,url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:11:37.983) (total time: 60006ms):
Trace[931836157]: [1m0.006350485s] [1m0.006350485s] END
E1219 06:12:37.990484 1 timeout.go:142] post-timeout activity - time-elapsed: 4.870058ms, GET "/api/v1/namespaces/default" result: <nil>
{"level":"warn","ts":"2022-12-19T06:12:38.373Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b41c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
I1219 06:12:39.988922 1 trace.go:219] Trace[448655361]: "List" accept:application/vnd.kubernetes.protobuf, /,audit-id:8b16c9b2-4f85-4e5d-918a-2d28acd753bb,client:::1,protocol:HTTP/2.0,resource:services,scope:cluster,url:/api/v1/services,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:LIST (19-Dec-2022 06:12:37.659) (total time: 2329ms):
Trace[448655361]: ["List(recursive=true) etcd3" audit-id:8b16c9b2-4f85-4e5d-918a-2d28acd753bb,key:/services/specs,resourceVersion:,resourceVersionMatch:,limit:0,continue: 2329ms (06:12:37.659)]
Trace[448655361]: [2.329166967s] [2.329166967s] END
{"level":"warn","ts":"2022-12-19T06:12:40.242Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
I1219 06:12:40.249474 1 trace.go:219] Trace[239754632]: "List" accept:application/vnd.kubernetes.protobuf, /,audit-id:30b9e937-c36a-4398-9054-4a1cb1bd5edf,client:::1,protocol:HTTP/2.0,resource:resourcequotas,scope:namespace,url:/api/v1/namespaces/default/resourcequotas,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:LIST (19-Dec-2022 06:12:37.988) (total time: 2261ms):
Trace[239754632]: ["List(recursive=true) etcd3" audit-id:30b9e937-c36a-4398-9054-4a1cb1bd5edf,key:/resourcequotas/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: 2261ms (06:12:37.988)]
Trace[239754632]: [2.261402138s] [2.261402138s] END
{"level":"warn","ts":"2022-12-19T06:12:40.380Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b41c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2022-12-19T06:12:42.386Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
I1219 06:12:42.442272 1 trace.go:219] Trace[1256675541]: "Get" accept:application/vnd.kubernetes.protobuf, /,audit-id:0176b32b-9911-4efd-a652-a65e9b8e5358,client:::1,protocol:HTTP/2.0,resource:namespaces,scope:resource,url:/api/v1/namespaces/kube-system,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:11:57.961) (total time: 44480ms):
Trace[1256675541]: ---"About to write a response" 44480ms (06:12:42.442)
Trace[1256675541]: [44.480780934s] [44.480780934s] END
I1219 06:12:42.446847 1 trace.go:219] Trace[1993246150]: "Create" accept:application/vnd.kubernetes.protobuf, /,audit-id:037244a1-0427-4f7b-a27f-a28053080851,client:::1,protocol:HTTP/2.0,resource:namespaces,scope:resource,url:/api/v1/namespaces,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:POST (19-Dec-2022 06:12:37.987) (total time: 4459ms):
Trace[1993246150]: ["Create etcd3" audit-id:037244a1-0427-4f7b-a27f-a28053080851,key:/namespaces/default,type:*core.Namespace,resource:namespaces 2195ms (06:12:40.251)
Trace[1993246150]: ---"Txn call succeeded" 2194ms (06:12:42.445)]
Trace[1993246150]: [4.459769012s] [4.459769012s] END
I1219 06:12:42.674053 1 trace.go:219] Trace[1794029875]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:e430c809-7f0f-466e-b055-2b6b9141ff8c,client:10.0.15.82,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-controller-manager-master0,user-agent:kubelet/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:12:34.909) (total time: 7765ms):
Trace[1794029875]: ---"About to write a response" 7764ms (06:12:42.673)
Trace[1794029875]: [7.765007745s] [7.765007745s] END
{"level":"warn","ts":"2022-12-19T06:12:44.393Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0047b4000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2022-12-19T06:12:44.971Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00354fc00/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
I1219 06:12:44.971449 1 trace.go:219] Trace[994491080]: "Update" accept:application/vnd.kubernetes.protobuf, /,audit-id:409a08f2-ec78-4882-9bfa-9ce30a084b98,client:::1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-apiserver-sbw72mnicesx7ail7r675e52gy,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:PUT (19-Dec-2022 06:12:10.970) (total time: 34001ms):
Trace[994491080]: ["GuaranteedUpdate etcd3" audit-id:409a08f2-ec78-4882-9bfa-9ce30a084b98,key:/leases/kube-system/kube-apiserver-sbw72mnicesx7ail7r675e52gy,type:*coordination.Lease,resource:leases.coordination.k8s.io 34000ms (06:12:10.970)
Trace[994491080]: ---"Txn call failed" err:context deadline exceeded 34000ms (06:12:44.971)]
Trace[994491080]: [34.001140432s] [34.001140432s] END
E1219 06:12:44.971767 1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 10.899µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>
E1219 06:12:44.972574 1 controller.go:189] failed to update lease, error: Timeout: request did not complete within requested timeout - context deadline exceeded
I1219 06:12:46.648431 1 trace.go:219] Trace[1569528607]: "Update" accept:application/vnd.kubernetes.protobuf, /,audit-id:66c304d4-07a7-4651-a080-b0a6fe1514d1,client:::1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-apiserver-sbw72mnicesx7ail7r675e52gy,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:PUT (19-Dec-2022 06:12:44.973) (total time: 1675ms):
Trace[1569528607]: ["GuaranteedUpdate etcd3" audit-id:66c304d4-07a7-4651-a080-b0a6fe1514d1,key:/leases/kube-system/kube-apiserver-sbw72mnicesx7ail7r675e52gy,type:*coordination.Lease,resource:leases.coordination.k8s.io 1675ms (06:12:44.973)
Trace[1569528607]: ---"Txn call completed" 1674ms (06:12:46.648)]
Trace[1569528607]: [1.675226852s] [1.675226852s] END
I1219 06:12:46.649989 1 trace.go:219] Trace[424403]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:7ce2d09e-ef67-46b0-9359-d7bb18552cd1,client:10.0.15.82,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master0,user-agent:kubelet/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:12:37.660) (total time: 8989ms):
Trace[424403]: ---"About to write a response" 8989ms (06:12:46.649)
Trace[424403]: [8.989433007s] [8.989433007s] END
I1219 06:12:49.083394 1 trace.go:219] Trace[50133606]: "Get" accept:application/vnd.kubernetes.protobuf, /,audit-id:790109d6-02cb-46d3-b31f-b1823eea9276,client:::1,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.26.0 (linux/amd64) kubernetes/b46a3f8,verb:GET (19-Dec-2022 06:12:42.453) (total time: 6630ms):
Trace[50133606]: ---"About to write a response" 6630ms (06:12:49.083)
Trace[50133606]: [6.630185906s] [6.630185906s] END
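Not a definitive diagnosis, but one frequently reported cause of exactly this pattern on Kubernetes 1.26 with containerd (etcd dying first, then every control-plane pod restarting with "connection refused" on 127.0.0.1:2379) is a cgroup-driver mismatch: the daemon.json change above only configures Docker, while kubelet here talks to containerd directly, and the packaged containerd config leaves runc's SystemdCgroup at false. A sketch of the commonly suggested fix, assuming a stock containerd install:

    # regenerate a complete default config (the packaged one ships with cri disabled)
    containerd config default | sudo tee /etc/containerd/config.toml
    # switch runc to the systemd cgroup driver so it matches the kubelet, i.e. set
    #   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    #     SystemdCgroup = true
    sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
    sudo systemctl restart containerd
    sudo kubeadm reset -f   # then run kubeadm init again

(The Ready=False / "cni plugin not initialized" condition, by contrast, is expected until a CNI plugin is installed and is not by itself the crash-loop cause.)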
-
Resolved · [Renewal] Building the NodeBird SNS with React
Hello.
Hello — this isn't related to the lecture, but I didn't know where else to ask, so I'm leaving it here. I bought the existing Node.js textbook (노드 교과서) as an ebook and have been enjoying it. When the 3rd edition comes out, will a new ebook be released as well? If so, I'd also like to know the expected ebook release date. Thank you!!
-
Unresolved · Learning React.js through Projects
About title_like in search
With title_like search, is it possible to change the color of just the matching part of each result? Something like nfl → inflearn, where the part of the result that matches what was typed is highlighted. Right now it's bold, but could it be a color instead...?
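Yes — the usual approach is to wrap the matching substring in its own element and style that element however you like. A minimal sketch (the component and prop names are made up for illustration, and a real version should escape regex metacharacters in the query):

    function Highlight({ text, query }) {
      if (!query) return text;
      // split on the query while keeping the matches, case-insensitively
      const parts = text.split(new RegExp(`(${query})`, "gi"));
      return parts.map((part, i) =>
        part.toLowerCase() === query.toLowerCase()
          ? <span key={i} style={{ color: "crimson" }}>{part}</span>
          : part
      );
    }

    // usage: <Highlight text="inflearn" query="nfl" />
    // renders "inflearn" with the "nfl" part colored instead of bold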
-
Unresolved · Easy-Start Kubernetes (v1.30) - {{ x86-64, arm64 }}
vagrant up
[Question]
When I run vagrant up, ssh timeouts keep occurring. After a timeout I reinstall, everything completes, and then when I go into one of the k8s nodes and run kubectl get nodes, I get "command not found". Does this mean the installation didn't complete properly? I've tried my work PC, my work Mac, and my personal PC, and it fails to install on all of them. At this rate I can't follow the course.
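An ssh timeout during vagrant up usually means the provisioning scripts never ran to completion, which would also explain kubectl being missing. A generic way to check (commands not specific to this course's Vagrantfile; <node-name> is whatever the Vagrantfile defines):

    vagrant ssh <node-name>
    command -v kubectl        # no output => kubectl was never installed
    systemctl status kubelet  # unit missing or failed => provisioning stopped early
    exit
    vagrant provision <node-name>   # re-run the provisioners, or: vagrant destroy -f && vagrant up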
-
Unresolved · [Code Factory] [Beginner] Flutter 3.0 App Development - Escape Beginner Status Today with 10 Projects!
Question about the Run console window
Hello, I'm taking the course using Android Studio, and the Run console output is cluttered as shown above. I'd like to see it cleanly, but I haven't been able to find the right setting. Is there a way to filter it so I only see the output I need?
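One workaround while the IDE's Run console offers no filter: launch the app from a terminal instead and filter the text yourself. The pattern below is only an example to adapt — on Android, Dart-side prints typically arrive tagged I/flutter:

    flutter run | grep --line-buffered "flutter"   # keep only lines mentioning flutter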
-
Unresolved · Introduction to Spring - Spring Boot, Web MVC, and DB Access Technology through Code
hello-mvc error
[Question]
Error: I made the html and the controller's return value identical, but an error still comes up.
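Hard to pin down without the stack trace, but for reference, this lecture's example hinges on the controller's return value matching a template file name. Roughly this shape (a sketch; names assumed to match the lecture code):

    import org.springframework.stereotype.Controller;
    import org.springframework.ui.Model;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RequestParam;

    @Controller
    public class HelloController {
        @GetMapping("hello-mvc")
        public String helloMvc(@RequestParam("name") String name, Model model) {
            model.addAttribute("name", name);
            // must match src/main/resources/templates/hello-template.html
            return "hello-template";
        }
    }

A classic error at this step is calling /hello-mvc without the ?name=spring query string: @RequestParam is required by default, so the request fails before the template is ever rendered.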
-
Resolved · Spring MVC Part 1 - Core Backend Web Development Technology
Requesting help resolving "Port 8080 was already in use"
Description: Web server failed to start. Port 8080 was already in use.

(1) After creating and saving a static HTML page and running the app, the error above appeared. So I ran sudo kill in the terminal, and IntelliJ itself shut down as well.
(2) After restarting IntelliJ, I added server.port: 8081 to application.yml and ran it -> the project ran without problems, but the static html page connects normally at 8080, while connecting at 8081 shows "This site can't be reached".

I couldn't understand it from this point on, and couldn't find an answer by googling or in the FAQ, so I'm asking:
(1) What does it mean that killing the PID on 8080 with sudo kill -9 also shut down IntelliJ?
(2) What do I need to do so the project runs without adding 8081 to application.yml?
(3) With 8081 added, is it normal that the static html opens on 8080? If so, please explain why.
(For reference, before this happened, opening a static html directly in the browser from IntelliJ used port 63342, so I changed the Built-in Server Port in IntelliJ preferences from 63342 to 8080.)
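Before killing anything on 8080, it's worth checking what actually owns the port — given the last paragraph, moving IntelliJ's Built-in Server Port to 8080 means the PID listening on 8080 can be IntelliJ itself, which would explain (1). A generic check on macOS/Linux:

    sudo lsof -i :8080   # the COMMAND/PID columns show who is listening (e.g. idea, java)
    kill <PID>           # only for a stray app process; use -9 as a last resort

Restoring the Built-in Server Port to its 63342 default would then free 8080 for the Spring app without the server.port: 8081 workaround.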
-
Unresolved · Intermediate Vue.js Course - Vue.js, ES6, and Vuex through Building a Web App
Requesting GitHub access~
Inflearn ID: chillycorn
Inflearn email: chillycorn@g.skku.edu
GitHub ID: happycrab@naver.com
GitHub username: jjanghee
-
Unresolved · Python Web Crawling & Automation to Do My Work for Me (feat. stock & real-estate data / Instagram)
Is a refund possible?
Hello, instructor. A long time has passed since I paid for the course, but I haven't been taking it. I don't meet the refund criteria, but would a refund still be possible?
-
Unresolved
I want to store a value generated by an external program in a website cookie.
Hello — I've been struggling with this on my own and can't get anywhere, so I'm posting a question. The current setup is as in the picture below.
Front: Vue.js / Back: Node.js / external program the user runs: Python

Flow:
1. The user downloads the external program from the website.
2. After the external program runs, the value it generates is stored in a website cookie.
3. The website renders based on that cookie.

What I want to know is whether this flow is possible, and if it is, what the options are. I've been stuck on it for three days straight... I'd appreciate help from anyone who knows this well...
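For what it's worth, a desktop program cannot write another origin's browser cookies directly; the usual patterns are (a) the program sends the value to the Node.js backend, which then sets the cookie on the site's next response, or (b) the program serves the value on localhost and the Vue page fetches it and stores it itself. A minimal sketch of (b) in Python — the port, allowed origin, and value are all assumptions:

    # the external program exposes its generated value on localhost;
    # the web page fetches it and then writes document.cookie itself
    from http.server import BaseHTTPRequestHandler, HTTPServer

    GENERATED_VALUE = "abc123"  # whatever the program actually produces

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            # CORS: allow the website's origin to read this response
            self.send_header("Access-Control-Allow-Origin", "https://your-site.example")
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(GENERATED_VALUE.encode())

    HTTPServer(("127.0.0.1", 18080), Handler).serve_forever()

On the Vue side, fetch("http://127.0.0.1:18080") followed by setting document.cookie (or forwarding the value to the backend) completes the flow.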
-
Unresolved · Hands-On! Website Building! Step by Step! ('크루알라모드' responsive web build)
Question about the image part in sectors
A question about the sector section: when I give the image below a border-radius, it only takes effect at the top, not at the bottom. When hover scales it up, the bottom border-radius appears too, but as soon as the hover ends, the bottom radius stops applying. What could the reason be, and how can I fix it? Giving height: 100% didn't help either.
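Without seeing the markup this is a guess, but the classic cause is a child image overflowing the rounded container: the bottom corners are painted over by the image until the hover transform forces a repaint. Clipping the container usually fixes it (class names and radius value assumed):

    .sector-item {
      border-radius: 12px;
      overflow: hidden;     /* clip the image to the rounded corners */
    }
    .sector-item img {
      display: block;       /* remove the inline-image baseline gap */
      width: 100%;
      height: 100%;
      object-fit: cover;
    }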
-
Unresolved · Building a Map Service with the Corona Map Developer 1
The reason for declaring with var
Hello! While taking the course, I wanted to know why var is used as the variable declaration keyword, so I'm leaving a question. If there's no particular reason, I'd like to use let or const instead — is there a special reason for using var?
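For reference, the practical difference behind the question — var is function-scoped and hoisted, while let/const are block-scoped:

    for (var i = 0; i < 3; i++) {}
    console.log(i);        // 3 — var leaks out of the block

    for (let j = 0; j < 3; j++) {}
    console.log(typeof j); // "undefined" — j no longer exists here

Older lecture code often uses var simply because it was written for (or to run on) pre-ES6 environments; in cases like the above, let and const behave the same or more safely.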
-
Unresolved · Java ORM Standard JPA Programming - Basics
Question about first-level cache capacity and INSERT INTO
I'm currently receiving data over an API — roughly 70,000 records. But only 21,222 INSERT queries go out from JPA, so only 21,222 rows land in the DB. My personal guess is that the first-level cache can hold at most 21,222 entries. If that's the case, where do I configure the first-level cache capacity? Would setting it in the YML work? I'd also like to know what other possibilities there might be.
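For context, Hibernate's first-level cache (the persistence context) has no fixed entry limit, so a hard cap of exactly 21,222 is unlikely to come from there; duplicate identifiers in the incoming data, or a transaction that ended early, are more common culprits worth ruling out. If keeping the context small is the goal, the usual pattern is a periodic flush-and-clear — a sketch assuming an injected EntityManager em and a list items (placeholder names):

    for (int i = 0; i < items.size(); i++) {
        em.persist(items.get(i));
        if (i > 0 && i % 1000 == 0) { // batch size assumed
            em.flush();  // push the pending INSERTs to the DB
            em.clear();  // detach everything so the context stays small
        }
    }
    em.flush();
    em.clear();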
-
Unresolved · Preparing for the AWS Certified Advanced Networking - Specialty Certification
About routing table precedence
Hello! I have a question about the AWS route learning lecture. Based on the lecture, I understood it as below — is this correct?

Since Direct Connect takes routing precedence over a BGP VPN, if both Direct Connect and a BGP VPN are configured toward on-premises, packets flow over Direct Connect by default, and when Direct Connect has a problem, packets automatically fail over to the BGP VPN.

Does the AWS router(?) redirect to the BGP VPN automatically like this when Direct Connect fails, with no extra configuration? It doesn't seem like the router would announce that Direct Connect has failed... I'm curious how a BGP VPN is actually used as a Direct Connect backup. Thank you!
-
Unresolved · 10-Week C++ Coding Test | Algorithm Coding Test
Question about 1-O
Suppose n is 3. We first enter the while loop, cnt becomes 11 through the else branch, and then cnt %= n makes cnt = 2, right? But then the next cnt is cnt = (2 * 10) + 1, which is 21, I think... I don't understand this part.
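The step the question is missing: cnt %= n is safe because only the remainder modulo n matters — if cnt ≡ r (mod n), then cnt*10 + 1 ≡ r*10 + 1 (mod n), so continuing from 2 instead of 11 tracks exactly the same divisibility. Assuming this is the usual "1, 11, 111, ..." divisibility problem the described code implements (where n is coprime to 10, so an answer exists), a standalone sketch:

    #include <iostream>

    int main() {
        int n;
        std::cin >> n;
        int r = 1 % n;  // remainder of the one-digit repunit "1"
        int len = 1;    // how many 1s so far
        while (r != 0) {
            r = (r * 10 + 1) % n;  // same remainder as appending another 1
            ++len;
        }
        std::cout << len << '\n';  // length of the shortest repunit divisible by n
        return 0;
    }

For n = 3: r goes 1 -> 2 -> 0, so the answer is 3 (111 is divisible by 3), exactly as if the full values 1, 11, 111 had been kept.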
-
Unresolved · Core Principles of Spring - Basics
DL (dependency lookup) vs. DI (dependency injection)
[Question]
Referring to other answers here, I read: "When you DL a singleton bean through a Provider, the bean is looked up rather than created; a prototype bean is created anew on every lookup — think of DL as fetching a bean from the container." This made me wonder about the case of DL-ing a singleton bean: the process feels no different from DI (dependency injection). (For prototypes I do see a difference, since a new instance is created.) Since I learned that DI for singleton beans also finds the bean with the matching name and injects it, looking one up seemed no different from injecting it. Is there actually a difference between the two?
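The mechanical difference is who initiates, and when: with DI the container pushes the dependency in once, at wiring time; with DL the client pulls it from the container at the moment of use. For a singleton both hand back the same instance, so what changes is the timing of the lookup (and the flexibility of holding a lookup handle), not the object received. A sketch with Spring's ObjectProvider — SomeBean is a placeholder name:

    // DI: the container injects the dependency once, when ClientA is created
    @Component
    class ClientA {
        private final SomeBean bean;  // same singleton instance for ClientA's lifetime
        ClientA(SomeBean bean) { this.bean = bean; }
    }

    // DL: the client asks the container each time it needs the bean
    @Component
    class ClientB {
        private final ObjectProvider<SomeBean> provider;
        ClientB(ObjectProvider<SomeBean> provider) { this.provider = provider; }
        SomeBean use() {
            return provider.getObject(); // singleton: same instance every call
        }                                // prototype: a new instance every call
    }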
-
Resolved · [Master] Conquer Premiere Pro with Solid Basics
Requesting the course materials
Hello~ I've purchased the course. I'm leaving this message because I'd like to receive the course materials. Please send them to cardpos@naver.com as soon as you can.