-
[Unresolved] 실전! 스프링 부트와 JPA 활용2 - API 개발과 성능 최적화
A question about OSIV~
I searched for this question but couldn't find it, so I'm asking here! Is there any way to turn OSIV off for only a specific service while keeping the connection? In an earlier lecture, hibernate.default_batch_fetch_size set the size globally while @BatchSize set it individually; in the same way, I'm curious whether OSIV can be switched on/off individually, or only globally.
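For context, in Spring Boot OSIV is a single application-wide switch, not a per-service one; there is no per-service equivalent of @BatchSize. A minimal sketch of the global setting (the usual approach is to turn it off globally and do all lazy loading inside @Transactional service methods):

```properties
# application.properties -- OSIV can only be toggled for the whole application
spring.jpa.open-in-view=false
```

With this off, any lazy loading must happen inside a transaction (typically a @Transactional service method), since the persistence context no longer stays open for the duration of the request.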
-
[Unresolved] 실전! 스프링 부트와 JPA 활용1 - 웹 애플리케이션 개발
UpdateItemDTO
Around the 23-minute mark you said it's better design to create UpdateItemDTO in the service layer and have the controller pass a single DTO, rather than passing String name, int price, ... individually. But then the controller has to know about the service layer's UpdateItemDTO in order to send it. Is it really fine to define UpdateItemDTO in the service layer? I thought different layers weren't supposed to know about each other, so I'm confused and wanted to ask.
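A plain-Java sketch of the dependency direction in question (class shapes assumed, not the lecture's exact code): the controller depending on a service-layer DTO is fine, because dependencies are allowed to point downward (controller -> service -> repository); what layering forbids is the service knowing about controller-layer types.

```java
// Service layer owns the DTO it wants to receive.
class ItemService {
    static class UpdateItemDTO {
        final String name;
        final int price;
        final int stockQuantity;
        UpdateItemDTO(String name, int price, int stockQuantity) {
            this.name = name;
            this.price = price;
            this.stockQuantity = stockQuantity;
        }
    }

    String updateItem(Long itemId, UpdateItemDTO dto) {
        // stand-in for: load the entity by id, then change its state
        return dto.name;
    }
}

// Controller assembles the service-layer DTO; the dependency points downward.
class ItemController {
    private final ItemService itemService = new ItemService();

    String update(Long itemId, String name, int price, int stockQuantity) {
        return itemService.updateItem(itemId,
                new ItemService.UpdateItemDTO(name, price, stockQuantity));
    }
}
```

The reverse (ItemService taking a controller-layer form object) would be the layering violation, since the lower layer would then depend on the layer above it.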
-
[Unresolved] Spring Cloud로 개발하는 마이크로서비스 애플리케이션(MSA)
Source Connector error
I sent the following payload with Postman as a POST request (I confirmed the connection in the h2 console window), and a subsequent GET request returned my-source-connect.

{
    "name": "my-source-connect",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:mariadb://localhost:3306/mydb",
        "connection.user": "root",
        "connection.password": "1234",
        "mode": "incrementing",
        "incrementing.column.name": "id",
        "table.whitelist": "users",
        "topic.prefix": "my_topic_",
        "tasks.max": "1"
    }
}

This is the log that then appeared in the connect console:

[2022-08-08 23:05:06,233] INFO JdbcSourceConnectorConfig values:
    batch.max.rows = 100
    catalog.pattern = null
    connection.attempts = 3
    connection.backoff.ms = 10000
    connection.password = [hidden]
    connection.url = jdbc:mariadb://localhost:3306/mydb
    connection.user = root
    db.timezone = UTC
    dialect.name =
    incrementing.column.name = id
    mode = incrementing
    numeric.mapping = null
    numeric.precision.mapping = false
    poll.interval.ms = 5000
    query =
    query.retry.attempts = -1
    query.suffix =
    quote.sql.identifiers = ALWAYS
    schema.pattern = null
    table.blacklist = []
    table.monitoring.startup.polling.limit.ms = 10000
    table.poll.interval.ms = 60000
    table.types = [TABLE]
    table.whitelist = [users]
    timestamp.column.name = []
    timestamp.delay.interval.ms = 0
    timestamp.granularity = connect_logical
    timestamp.initial = null
    topic.prefix = my_topic_
    transaction.isolation.mode = DEFAULT
    validate.non.null = true
 (io.confluent.connect.jdbc.source.JdbcSourceConnectorConfig:361)
[2022-08-08 23:05:06,241] INFO AbstractConfig values: (org.apache.kafka.common.config.AbstractConfig:361)
[2022-08-08 23:05:06,270] INFO [Worker clientId=connect-1, groupId=connect-cluster] Connector my-source-connect config updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1534)
[2022-08-08 23:05:06,275] INFO [Worker clientId=connect-1, groupId=connect-cluster] Rebalance started (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:225)
[2022-08-08 23:05:06,275] INFO [Worker clientId=connect-1, groupId=connect-cluster] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:540)
[2022-08-08 23:05:06,280] INFO [Worker clientId=connect-1, groupId=connect-cluster] Successfully joined group with generation Generation{generationId=48, memberId='connect-1-74e3b810-49fe-4cd5-90cf-0c9408ba73ab', protocol='sessioned'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:596)
[2022-08-08 23:05:06,297] INFO [Worker clientId=connect-1, groupId=connect-cluster] Successfully synced group in generation Generation{generationId=48, memberId='connect-1-74e3b810-49fe-4cd5-90cf-0c9408ba73ab', protocol='sessioned'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:756)
[2022-08-08 23:05:06,298] INFO [Worker clientId=connect-1, groupId=connect-cluster] Joined group at generation 48 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-74e3b810-49fe-4cd5-90cf-0c9408ba73ab', leaderUrl='http://127.0.0.1:8083/', offset=44, connectorIds=[my-source-connect], taskIds=[], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1689)
[2022-08-08 23:05:06,303] INFO [Worker clientId=connect-1, groupId=connect-cluster] Starting connectors and tasks using config offset 44 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1216)
[2022-08-08 23:05:06,308] INFO [Worker clientId=connect-1, groupId=connect-cluster] Starting connector my-source-connect (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1299)
[2022-08-08 23:05:06,313] INFO Creating connector my-source-connect of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:274)
[2022-08-08 23:05:06,316] INFO SourceConnectorConfig values:
    config.action.reload = restart
    connector.class = io.confluent.connect.jdbc.JdbcSourceConnector
    errors.log.enable = false
    errors.log.include.messages = false
    errors.retry.delay.max.ms = 60000
    errors.retry.timeout = 0
    errors.tolerance = none
    header.converter = null
    key.converter = null
    name = my-source-connect
    predicates = []
    tasks.max = 1
    topic.creation.groups = []
    transforms = []
    value.converter = null
 (org.apache.kafka.connect.runtime.SourceConnectorConfig:361)
[2022-08-08 23:05:06,317] INFO EnrichedConnectorConfig values: (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:361)
[2022-08-08 23:05:06,339] INFO Instantiated connector my-source-connect with version 10.5.1 of type class io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:284)
[2022-08-08 23:05:06,339] INFO Finished creating connector my-source-connect (org.apache.kafka.connect.runtime.Worker:310)
[2022-08-08 23:05:06,341] INFO Starting JDBC Source Connector (io.confluent.connect.jdbc.JdbcSourceConnector:71)
[2022-08-08 23:05:06,344] INFO [Worker clientId=connect-1, groupId=connect-cluster] Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1244)
[2022-08-08 23:05:06,358] INFO JdbcSourceConnectorConfig values: (io.confluent.connect.jdbc.source.JdbcSourceConnectorConfig:361)
[2022-08-08 23:05:06,369] INFO Attempting to open connection #1 to MySql (io.confluent.connect.jdbc.util.CachedConnectionProvider:79)
[2022-08-08 23:05:06,446] INFO Starting thread to monitor tables. (io.confluent.connect.jdbc.source.TableMonitorThread:82)
[2022-08-08 23:05:06,494] INFO SourceConnectorConfig values: (org.apache.kafka.connect.runtime.SourceConnectorConfig:361)
[2022-08-08 23:05:06,498] INFO EnrichedConnectorConfig values: (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:361)
[2022-08-08 23:05:06,530] ERROR Encountered an unrecoverable error while reading tables from the database (io.confluent.connect.jdbc.source.TableMonitorThread:224)
org.apache.kafka.connect.errors.ConnectException: The connector uses the unqualified table name as the topic name and has detected duplicate unqualified table names. This could lead to mixed data types in the topic and downstream processing errors. To prevent such processing errors, the JDBC Source connector fails to start when it detects duplicate table name configurations. Update the connector's 'table.whitelist' config to include exactly one table in each of the tables listed below. [["mydb"."users", "performance_schema"."users"]]
    at io.confluent.connect.jdbc.source.TableMonitorThread.tables(TableMonitorThread.java:152)
    at io.confluent.connect.jdbc.JdbcSourceConnector.taskConfigs(JdbcSourceConnector.java:164)
    at org.apache.kafka.connect.runtime.Worker.connectorTaskConfigs(Worker.java:359)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnector(DistributedHerder.java:1428)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnectorTasksWithRetry(DistributedHerder.java:1366)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1000(DistributedHerder.java:128)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder$12.call(DistributedHerder.java:1318)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder$12.call(DistributedHerder.java:1312)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:371)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:295)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
[2022-08-08 23:05:06,538] ERROR WorkerConnector{id=my-source-connect} Connector raised an error (org.apache.kafka.connect.runtime.WorkerConnector:506)
org.apache.kafka.connect.errors.ConnectException: Encountered an unrecoverable error while reading tables from the database
    at io.confluent.connect.jdbc.source.TableMonitorThread.fail(TableMonitorThread.java:226)
    at io.confluent.connect.jdbc.source.TableMonitorThread.tables(TableMonitorThread.java:153)
    at io.confluent.connect.jdbc.JdbcSourceConnector.taskConfigs(JdbcSourceConnector.java:164)
    at org.apache.kafka.connect.runtime.Worker.connectorTaskConfigs(Worker.java:359)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnector(DistributedHerder.java:1428)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnectorTasksWithRetry(DistributedHerder.java:1366)
Caused by: org.apache.kafka.connect.errors.ConnectException: The connector uses the unqualified table name as the topic name and has detected duplicate unqualified table names. This could lead to mixed data types in the topic and downstream processing errors. To prevent such processing errors, the JDBC Source connector fails to start when it detects duplicate table name configurations. Update the connector's 'table.whitelist' config to include exactly one table in each of the tables listed below. [["mydb"."users", "performance_schema"."users"]]
    at io.confluent.connect.jdbc.source.TableMonitorThread.tables(TableMonitorThread.java:152)
    ... 14 more
[2022-08-08 23:05:06,542] ERROR [Worker clientId=connect-1, groupId=connect-cluster] Failed to reconfigure connector's tasks, retrying after backoff: (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1377)
org.apache.kafka.connect.errors.ConnectException: Encountered an unrecoverable error while reading tables from the database
    at io.confluent.connect.jdbc.source.TableMonitorThread.fail(TableMonitorThread.java:226)
    at io.confluent.connect.jdbc.source.TableMonitorThread.tables(TableMonitorThread.java:153)
    at io.confluent.connect.jdbc.JdbcSourceConnector.taskConfigs(JdbcSourceConnector.java:164)
    at org.apache.kafka.connect.runtime.Worker.connectorTaskConfigs(Worker.java:359)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnector(DistributedHerder.java:1428)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnectorTasksWithRetry(DistributedHerder.java:1366)
Caused by: org.apache.kafka.connect.errors.ConnectException: The connector uses the unqualified table name as the topic name and has detected duplicate unqualified table names. This could lead to mixed data types in the topic and downstream processing errors. To prevent such processing errors, the JDBC Source connector fails to start when it detects duplicate table name configurations. Update the connector's 'table.whitelist' config to include exactly one table in each of the tables listed below. [["mydb"."users", "performance_schema"."users"]]
    at io.confluent.connect.jdbc.source.TableMonitorThread.tables(TableMonitorThread.java:152)
    ... 14 more
[2022-08-08 23:05:06,546] INFO [Worker clientId=connect-1, groupId=connect-cluster] Skipping reconfiguration of connector my-source-connect since it is not running (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1415)
[2022-08-08 23:05:06,795] INFO [Worker clientId=connect-1, groupId=connect-cluster] Skipping reconfiguration of connector my-source-connect since it is not running (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1415)

I've kept googling, but I can't figure out what's wrong. :(
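For what it's worth, the ERROR in the log says the unqualified whitelist entry users matches two tables, "mydb"."users" and "performance_schema"."users". One commonly suggested fix (an assumption based on the catalog.pattern option visible in the config dump, not verified against this exact setup) is to restrict the connector to the mydb catalog so only one users table matches:

```json
{
    "name": "my-source-connect",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:mariadb://localhost:3306/mydb",
        "connection.user": "root",
        "connection.password": "1234",
        "catalog.pattern": "mydb",
        "mode": "incrementing",
        "incrementing.column.name": "id",
        "table.whitelist": "users",
        "topic.prefix": "my_topic_",
        "tasks.max": "1"
    }
}
```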
-
[Resolved] 자바 ORM 표준 JPA 프로그래밍 - 기본편
Question about updating a foreign key
Hello. A question about associations came up for me. Around the 24-minute mark you said the foreign key gets updated. Is it actually acceptable in practice for a foreign key to be updated? Since it references a PK, I thought that conceptually it was data that should never be updated, but in this situation (a member changing teams) updating it seems right, so I started wondering whether I've been misunderstanding. Thank you.
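A plain-Java sketch of the mapping discussed in the lecture (JPA annotations omitted, names assumed): what changes is the FK value, not the constraint. When the owning side's team reference is reassigned inside a transaction, Hibernate emits something like update Member set team_id=? where id=?, and after the update the FK still points at a valid Team PK, so referential integrity is preserved throughout.

```java
class Team {
    final String name;
    Team(String name) { this.name = name; }
}

class Member {
    // Owning side of the association: this field maps to the TEAM_ID FK column.
    private Team team;

    // Reassigning the reference is the ordinary "change teams" operation;
    // with JPA it results in a row UPDATE of the FK column at flush time.
    void changeTeam(Team team) { this.team = team; }

    Team getTeam() { return team; }
}
```

So the FK column being updatable is normal; what the constraint forbids is setting it to a value that does not exist as a PK in the referenced table.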
-
[Unresolved] Spring Cloud로 개발하는 마이크로서비스 애플리케이션(MSA)
Sharing the Docker-based setup I used with fellow students!
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.2.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker:
    image: confluentinc/cp-server:7.2.1
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'

  schema-registry:
    image: confluentinc/cp-schema-registry:7.2.1
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker:29092'
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081

  connect:
    image: cnfldemos/cp-server-connect-datagen:0.5.3-7.1.0
    hostname: connect
    container_name: connect
    depends_on:
      - broker
      - schema-registry
    ports:
      - "8083:8083"
    volumes:
      - ./<local directory containing the JDBC jar>:/etc/kafka-connect/jars # needed for the JDBC connection
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'broker:29092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      # CLASSPATH required due to CC-2422
      CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-7.2.1.jar
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
      CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR

  control-center:
    image: confluentinc/cp-enterprise-control-center:7.2.1
    hostname: control-center
    container_name: control-center
    depends_on:
      - broker
      - schema-registry
      - connect
      - ksqldb-server
    ports:
      - "9021:9021"
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker:29092'
      CONTROL_CENTER_CONNECT_CONNECT-DEFAULT_CLUSTER: 'connect:8083'
      CONTROL_CENTER_KSQL_KSQLDB1_URL: "http://ksqldb-server:8088"
      CONTROL_CENTER_KSQL_KSQLDB1_ADVERTISED_URL: "http://localhost:8088"
      CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      CONTROL_CENTER_REPLICATION_FACTOR: 1
      CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
      CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
      CONFLUENT_METRICS_TOPIC_REPLICATION: 1
      PORT: 9021

  ksqldb-server:
    image: confluentinc/cp-ksqldb-server:7.2.1
    hostname: ksqldb-server
    container_name: ksqldb-server
    depends_on:
      - broker
      - connect
    ports:
      - "8088:8088"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      KSQL_BOOTSTRAP_SERVERS: "broker:29092"
      KSQL_HOST_NAME: ksqldb-server
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      KSQL_KSQL_CONNECT_URL: "http://connect:8083"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_REPLICATION_FACTOR: 1
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: 'true'
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: 'true'

  ksqldb-cli:
    image: confluentinc/cp-ksqldb-cli:7.2.1
    container_name: ksqldb-cli
    depends_on:
      - broker
      - connect
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true

  ksql-datagen:
    image: confluentinc/ksqldb-examples:7.2.1
    hostname: ksql-datagen
    container_name: ksql-datagen
    depends_on:
      - ksqldb-server
      - broker
      - schema-registry
      - connect
    command: "bash -c 'echo Waiting for Kafka to be ready... && \
              cub kafka-ready -b broker:29092 1 40 && \
              echo Waiting for Confluent Schema Registry to be ready... && \
              cub sr-ready schema-registry 8081 40 && \
              echo Waiting a few seconds for topic creation to finish... && \
              sleep 11 && \
              tail -f /dev/null'"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      STREAMS_BOOTSTRAP_SERVERS: broker:29092
      STREAMS_SCHEMA_REGISTRY_HOST: schema-registry
      STREAMS_SCHEMA_REGISTRY_PORT: 8081

  rest-proxy:
    image: confluentinc/cp-kafka-rest:7.2.1
    depends_on:
      - broker
      - schema-registry
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'

This is the docker-compose.yml file!
-
[Unresolved] 호돌맨의 요절복통 개발쇼 (SpringBoot, Vue.JS, AWS)
A JPA question. It's basic, so apologies in advance...
I'm really enjoying the lectures. (Deep bow.) I plan to write a review once I've finished everything, but even past the halfway point I'm very satisfied; it feels like having a mentor next to me while I develop. When I reached the post-editing section, I poked around the overall structure and design, and this line kept nagging at me. Wondering "why is this final?", I removed the final keyword from:

private final PostRepository postRepository;

Then errors poured out. The message seems to say postRepository is null because it wasn't injected:

Request processing failed; nested exception is java.lang.NullPointerException: Cannot invoke "com.in2l.in2Leisure.api.repository.PostRepository.save(Object)" because "this.postRepository" is null
org.springframework.web.util.NestedServletException: Request processing failed; nested exception is java.lang.NullPointerException: Cannot invoke "com.in2l.in2Leisure.api.repository.PostRepository.save(Object)" because "this.postRepository" is null
    at app//org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014)

Lamenting my lack of fundamentals once again: the final I know (and have used) is simply "for things I don't want reassigned." I don't understand why postRepository, which extends JpaRepository, needs final, or why injection fails without it.... (Or is it not an injection problem at all? Surely not....) It's such a fundamental, basic question that I'd also be hugely grateful for a pointer to any blog post or lecture that explains it well.
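The likely mechanism, sketched in plain Java (assuming the class uses Lombok's @RequiredArgsConstructor, as is common in this course's style): Lombok only includes final (and @NonNull) fields in the constructor it generates. Remove final and the field gets no constructor parameter, so Spring never injects the bean and the field stays null, which matches the NullPointerException above.

```java
class PostRepository {
    // stand-in for JpaRepository.save
    Object save(Object post) { return post; }
}

class PostService {
    private final PostRepository postRepository;

    // This is what @RequiredArgsConstructor generates from the *final* field;
    // Spring finds this constructor and passes in the PostRepository bean.
    // Without `final`, Lombok generates a constructor with no parameters for
    // this field, so nothing is injected and postRepository stays null.
    PostService(PostRepository postRepository) {
        this.postRepository = postRepository;
    }

    Object write(Object post) {
        return postRepository.save(post);
    }
}
```

So final here is not just "don't reassign"; it is the signal Lombok uses to decide which fields become constructor parameters, and constructor parameters are what Spring injects.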
-
[Resolved] 자바 ORM 표준 JPA 프로그래밍 - 기본편
Creating a project with Maven doesn't turn out like it does in the lecture
Hello. I wanted to create a fresh project and run it, so I made one with Maven just as shown in the lecture, but my screen differs from the instructor's. It requires me to enter an Archetype and version before I can proceed; what should I put in there? Just in case, I tried selecting jpa-maven-archetype, but it errors out as shown below. I tried Java 8 as well and got the same symptom!
-
[Unresolved] 실전! 스프링 부트와 JPA 활용1 - 웹 애플리케이션 개발
When writing entities: deciding when to make the constructor protected
When creating an entity, when should I use @NoArgsConstructor(access = AccessLevel.PROTECTED) and when should I leave it off? I'm curious about the criteria.
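A plain-Java sketch of what that annotation produces and why it's used (Lombok and JPA annotations omitted; names assumed): JPA requires every entity to have a no-arg constructor (public or protected) so it can instantiate entities via reflection. Making it protected satisfies JPA while preventing application code from calling new Member() directly and bypassing the intended creation path (a static factory, builder, or all-args constructor).

```java
class Member {
    private String name;

    // What @NoArgsConstructor(access = AccessLevel.PROTECTED) generates.
    // Protected is enough for JPA's reflective instantiation, but keeps
    // other packages from constructing a half-initialized entity.
    protected Member() {}

    // The intended creation path: all required state is set here.
    static Member create(String name) {
        Member member = new Member();
        member.name = name;
        return member;
    }

    String getName() { return name; }
}
```

As a rule of thumb, it's useful on JPA entities whenever you provide a richer creation path you want callers to go through; if the entity is genuinely fine to construct empty and populate via setters, a public no-arg constructor loses nothing.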
-
[Unresolved] 실전! 스프링 부트와 JPA 활용1 - 웹 애플리케이션 개발
gradlew clean build error
Around 20:08 in the lecture, the gradlew clean build step fails with an error. Googling suggested a Java version problem, so I checked that Settings and Project Structure are all set to Java 11.0.15.1, but it still fails. Please help. I've attached screenshots of the Java version check in the Windows cmd, Settings > Gradle, Settings > Java Compiler, and Project Structure > Platform Settings > SDKs. Here is my build.gradle:

plugins {
    id 'org.springframework.boot' version '2.6.10'
    id 'io.spring.dependency-management' version '1.0.12.RELEASE'
    id 'java'
}

group = 'jpabook'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = '11'

configurations {
    compileOnly {
        extendsFrom annotationProcessor
    }
}

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-data-jpa'
    implementation 'org.springframework.boot:spring-boot-starter-thymeleaf'
    implementation 'org.springframework.boot:spring-boot-starter-validation'
    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation 'org.springframework.boot:spring-boot-devtools'
    compileOnly 'org.projectlombok:lombok'
    runtimeOnly 'com.h2database:h2'
    annotationProcessor 'org.projectlombok:lombok'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
    // add JUnit4
    testImplementation("org.junit.vintage:junit-vintage-engine") {
        exclude group: "org.hamcrest", module: "hamcrest-core"
    }
}

tasks.named('test') {
    useJUnitPlatform()
}

And this is the error message from running gradlew clean build --stacktrace in cmd:

> Task :compileJava FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':compileJava'.
> invalid source release: 11

* Try:
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.

* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':compileJava'.
    at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.lambda$executeIfValid$1(ExecuteActionsTaskExecuter.java:142)
    at org.gradle.internal.Try$Failure.ifSuccessfulOrElse(Try.java:282)
    at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:140)
    at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:128)
    at org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:77)
    at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46)
    at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:51)
    at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
    at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:56)
    at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
    at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77)
    at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55)
    at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)
    at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
    at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199)
    at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
    at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
    at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
    at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
    at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
    at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73)
    at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52)
    at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:69)
    at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:327)
    at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:314)
    at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:307)
    at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:293)
    at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:417)
    at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:339)
    at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
Caused by: java.lang.IllegalArgumentException: invalid source release: 11
    at com.sun.tools.javac.main.OptionHelper$GrumpyHelper.error(OptionHelper.java:103)
    at com.sun.tools.javac.main.Option$11.process(Option.java:204)
    at com.sun.tools.javac.api.JavacTool.processOptions(JavacTool.java:217)
    at com.sun.tools.javac.api.JavacTool.getTask(JavacTool.java:156)
    at com.sun.tools.javac.api.JavacTool.getTask(JavacTool.java:107)
    at com.sun.tools.javac.api.JavacTool.getTask(JavacTool.java:64)
    at org.gradle.api.internal.tasks.compile.JdkTools$DefaultIncrementalAwareCompiler.getTask(JdkTools.java:139)
    at org.gradle.api.internal.tasks.compile.JdkJavaCompiler.createCompileTask(JdkJavaCompiler.java:70)
    at org.gradle.api.internal.tasks.compile.JdkJavaCompiler.execute(JdkJavaCompiler.java:53)
    at org.gradle.api.internal.tasks.compile.JdkJavaCompiler.execute(JdkJavaCompiler.java:39)
    at org.gradle.api.internal.tasks.compile.NormalizingJavaCompiler.delegateAndHandleErrors(NormalizingJavaCompiler.java:97)
    at org.gradle.api.internal.tasks.compile.NormalizingJavaCompiler.execute(NormalizingJavaCompiler.java:51)
    at org.gradle.api.internal.tasks.compile.NormalizingJavaCompiler.execute(NormalizingJavaCompiler.java:37)
    at org.gradle.api.internal.tasks.compile.AnnotationProcessorDiscoveringCompiler.execute(AnnotationProcessorDiscoveringCompiler.java:51)
    at org.gradle.api.internal.tasks.compile.AnnotationProcessorDiscoveringCompiler.execute(AnnotationProcessorDiscoveringCompiler.java:37)
    at org.gradle.api.internal.tasks.compile.ModuleApplicationNameWritingCompiler.execute(ModuleApplicationNameWritingCompiler.java:46)
    at org.gradle.api.internal.tasks.compile.ModuleApplicationNameWritingCompiler.execute(ModuleApplicationNameWritingCompiler.java:36)
    at org.gradle.jvm.toolchain.internal.DefaultToolchainJavaCompiler.execute(DefaultToolchainJavaCompiler.java:57)
    at org.gradle.api.tasks.compile.JavaCompile.lambda$createToolchainCompiler$1(JavaCompile.java:232)
    at org.gradle.api.internal.tasks.compile.CleaningJavaCompiler.execute(CleaningJavaCompiler.java:53)
    at org.gradle.api.internal.tasks.compile.incremental.IncrementalCompilerFactory.lambda$createRebuildAllCompiler$0(IncrementalCompilerFactory.java:52)
    at org.gradle.api.internal.tasks.compile.incremental.SelectiveCompiler.execute(SelectiveCompiler.java:67)
    at org.gradle.api.internal.tasks.compile.incremental.SelectiveCompiler.execute(SelectiveCompiler.java:41)
    at org.gradle.api.internal.tasks.compile.incremental.IncrementalResultStoringCompiler.execute(IncrementalResultStoringCompiler.java:66)
    at org.gradle.api.internal.tasks.compile.incremental.IncrementalResultStoringCompiler.execute(IncrementalResultStoringCompiler.java:52)
    at org.gradle.api.internal.tasks.compile.CompileJavaBuildOperationReportingCompiler$2.call(CompileJavaBuildOperationReportingCompiler.java:59)
    at org.gradle.api.internal.tasks.compile.CompileJavaBuildOperationReportingCompiler$2.call(CompileJavaBuildOperationReportingCompiler.java:51)
    at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
    at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199)
    at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
    at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
    at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
    at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
    at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
    at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73)
    at org.gradle.api.internal.tasks.compile.CompileJavaBuildOperationReportingCompiler.execute(CompileJavaBuildOperationReportingCompiler.java:51)
    at org.gradle.api.tasks.compile.JavaCompile.performCompilation(JavaCompile.java:279)
    at org.gradle.api.tasks.compile.JavaCompile.performIncrementalCompilation(JavaCompile.java:165)
    at org.gradle.api.tasks.compile.JavaCompile.compile(JavaCompile.java:146)
    at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:125)
    at org.gradle.api.internal.project.taskfactory.IncrementalInputsTaskAction.doExecute(IncrementalInputsTaskAction.java:32)
    at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:51)
    at org.gradle.api.internal.project.taskfactory.AbstractIncrementalTaskAction.execute(AbstractIncrementalTaskAction.java:25)
    at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:29)
    at org.gradle.api.internal.tasks.execution.TaskExecution$3.run(TaskExecution.java:236)
    at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:29)
    at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:26)
    at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
    at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
    at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
    at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
    at org.gradle.internal.operations.DefaultBuildOperationRunner.run(DefaultBuildOperationRunner.java:47)
    at org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:68)
    at org.gradle.api.internal.tasks.execution.TaskExecution.executeAction(TaskExecution.java:221)
    at org.gradle.api.internal.tasks.execution.TaskExecution.executeActions(TaskExecution.java:204)
    at org.gradle.api.internal.tasks.execution.TaskExecution.executeWithPreviousOutputFiles(TaskExecution.java:187)
    at org.gradle.api.internal.tasks.execution.TaskExecution.execute(TaskExecution.java:165)
    at org.gradle.internal.execution.steps.ExecuteStep.executeInternal(ExecuteStep.java:89)
    at org.gradle.internal.execution.steps.ExecuteStep.access$000(ExecuteStep.java:40)
    at org.gradle.internal.execution.steps.ExecuteStep$1.call(ExecuteStep.java:53)
    at org.gradle.internal.execution.steps.ExecuteStep$1.call(ExecuteStep.java:50)
    at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
    at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199)
    at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
    at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
    at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
    at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
    at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
    at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73)
    at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:50)
    at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:40)
    at org.gradle.internal.execution.steps.RemovePreviousOutputsStep.execute(RemovePreviousOutputsStep.java:68)
    at org.gradle.internal.execution.steps.RemovePreviousOutputsStep.execute(RemovePreviousOutputsStep.java:38)
    at org.gradle.internal.execution.steps.CancelExecutionStep.execute(CancelExecutionStep.java:41)
    at org.gradle.internal.execution.steps.TimeoutStep.executeWithoutTimeout(TimeoutStep.java:74)
    at org.gradle.internal.execution.steps.TimeoutStep.execute(TimeoutStep.java:55)
    at org.gradle.internal.execution.steps.CreateOutputsStep.execute(CreateOutputsStep.java:51)
    at org.gradle.internal.execution.steps.CreateOutputsStep.execute(CreateOutputsStep.java:29)
org.gradle.internal.execution.steps.CaptureStateAfterExecutionStep.executeDelegateBroadcastingChanges(CaptureStateAfterExecutionStep.java:124) at org.gradle.internal.execution.steps.CaptureStateAfterExecutionStep.execute(CaptureStateAfterExecutionStep.java:80) at org.gradle.internal.execution.steps.CaptureStateAfterExecutionStep.execute(CaptureStateAfterExecutionStep.java:58) at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:48) at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:36) at org.gradle.internal.execution.steps.BuildCacheStep.executeWithoutCache(BuildCacheStep.java:181) at org.gradle.internal.execution.steps.BuildCacheStep.lambda$execute$1(BuildCacheStep.java:71) at org.gradle.internal.Either$Right.fold(Either.java:175) at org.gradle.internal.execution.caching.CachingState.fold(CachingState.java:59) at org.gradle.internal.execution.steps.BuildCacheStep.execute(BuildCacheStep.java:69) at org.gradle.internal.execution.steps.BuildCacheStep.execute(BuildCacheStep.java:47) at org.gradle.internal.execution.steps.StoreExecutionStateStep.execute(StoreExecutionStateStep.java:36) at org.gradle.internal.execution.steps.StoreExecutionStateStep.execute(StoreExecutionStateStep.java:25) at org.gradle.internal.execution.steps.RecordOutputsStep.execute(RecordOutputsStep.java:36) at org.gradle.internal.execution.steps.RecordOutputsStep.execute(RecordOutputsStep.java:22) at org.gradle.internal.execution.steps.SkipUpToDateStep.executeBecause(SkipUpToDateStep.java:110) at org.gradle.internal.execution.steps.SkipUpToDateStep.lambda$execute$2(SkipUpToDateStep.java:56) at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:56) at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:38) at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:73) at 
org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:44) at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:37) at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:27) at org.gradle.internal.execution.steps.ResolveCachingStateStep.execute(ResolveCachingStateStep.java:89) at org.gradle.internal.execution.steps.ResolveCachingStateStep.execute(ResolveCachingStateStep.java:50) at org.gradle.internal.execution.steps.ValidateStep.execute(ValidateStep.java:114) at org.gradle.internal.execution.steps.ValidateStep.execute(ValidateStep.java:57) at org.gradle.internal.execution.steps.CaptureStateBeforeExecutionStep.execute(CaptureStateBeforeExecutionStep.java:76) at org.gradle.internal.execution.steps.CaptureStateBeforeExecutionStep.execute(CaptureStateBeforeExecutionStep.java:50) at org.gradle.internal.execution.steps.SkipEmptyWorkStep.executeWithNoEmptySources(SkipEmptyWorkStep.java:254) at org.gradle.internal.execution.steps.SkipEmptyWorkStep.executeWithNoEmptySources(SkipEmptyWorkStep.java:209) at org.gradle.internal.execution.steps.SkipEmptyWorkStep.execute(SkipEmptyWorkStep.java:88) at org.gradle.internal.execution.steps.SkipEmptyWorkStep.execute(SkipEmptyWorkStep.java:56) at org.gradle.internal.execution.steps.RemoveUntrackedExecutionStateStep.execute(RemoveUntrackedExecutionStateStep.java:32) at org.gradle.internal.execution.steps.RemoveUntrackedExecutionStateStep.execute(RemoveUntrackedExecutionStateStep.java:21) at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsStartedStep.execute(MarkSnapshottingInputsStartedStep.java:38) at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:43) at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:31) at 
org.gradle.internal.execution.steps.AssignWorkspaceStep.lambda$execute$0(AssignWorkspaceStep.java:40) at org.gradle.api.internal.tasks.execution.TaskExecution$4.withWorkspace(TaskExecution.java:281) at org.gradle.internal.execution.steps.AssignWorkspaceStep.execute(AssignWorkspaceStep.java:40) at org.gradle.internal.execution.steps.AssignWorkspaceStep.execute(AssignWorkspaceStep.java:30) at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:37) at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:27) at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:44) at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:33) at org.gradle.internal.execution.impl.DefaultExecutionEngine$1.execute(DefaultExecutionEngine.java:76) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:139) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:128) at org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:77) at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46) at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:51) at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57) at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:56) at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36) at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77) at 
org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55) at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52) at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204) at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53) at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73) at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52) at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:69) at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:327) at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:314) at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:307) at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:293) at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:417) at 
org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:339) at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64) at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48) * Get more help at https://help.gradle.org
-
[Resolved] 자바 ORM 표준 JPA 프로그래밍 - 기본편
GenerationType.SEQUENCE allocationSize question
Suppose the database sequence's increment is 50. As I understand it, on the very first run the sequence is called twice, returning 1 and 51. In that case, even if allocationSize = 1, can I assume that on the first run JPA collects inserts for IDs 1 through 51 before sending them? After that initial 1–51 range is used up, it seems IDs would come back as 52, 102, 152, ... and be sent one insert at a time. In other words, if allocationSize is smaller than the sequence increment, then on the first run JPA can use IDs from 1 up to 1 + [sequence increment], so it can hand out more values than allocationSize — is that a correct understanding?
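For reference, the two values being compared here are configured like this in the mapping. Hibernate's sequence optimizer hands out allocationSize IDs in memory per sequence call, so the annotation's allocationSize should normally match the database sequence's INCREMENT BY. A minimal sketch with hypothetical names (not from the lecture):

```java
@Entity
public class Member {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "member_seq_gen")
    @SequenceGenerator(
            name = "member_seq_gen",
            sequenceName = "member_seq", // DB sequence, assumed created with INCREMENT BY 50
            allocationSize = 50)         // IDs handed out in memory per sequence call
    private Long id;
}
```

If allocationSize and the sequence increment disagree, the generator and the database stop agreeing on which ranges are free, which is one reason the first-run behavior in the question looks surprising.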
-
[Unresolved] 실전! 스프링 부트와 JPA 활용1 - 웹 애플리케이션 개발
How to write a creation method without setters
Hello, instructor and supporters. I finished all the JPA courses through the practical ones and am working on a personal project. In the lectures you advised avoiding setters — is it okay if I write the creation method without setters the way I did in the code below? --------- On second thought, this is effectively no different from using a setter. Is there a way to write a creation method without using setters at all?

@Id @GeneratedValue
@Column(name = "fileId")
private Long id;
private String fileNm;
private String path;
private Long size;
private String extension;
private String fileType;

@JoinColumn(name = "restaurantId")
@OneToOne(fetch = FetchType.LAZY)
private Restaurant restaurant;

@JoinColumn(name = "menuId")
@OneToOne(fetch = FetchType.LAZY)
private Menu menu;

// creation method
public static FileEntity createFile(FileEntity fileInfo) {
    FileEntity file = new FileEntity();
    file.setFile(fileInfo.getFileNm(), fileInfo.getPath(), fileInfo.getSize(),
                 fileInfo.getExtension(), fileInfo.getFileType());
    return file;
}

public void setFile(String fileNm, String path, Long size, String extension, String fileType) {
    this.fileNm = fileNm;
    this.path = path;
    this.size = size;
    this.extension = extension;
    this.fileType = fileType;
}
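One common answer to this kind of question: give the entity a private all-args constructor and have the static factory call it directly, so fields are assigned without any setter ever being exposed. Below is a minimal sketch using the field names from the question (JPA annotations omitted so the snippet stands alone; this is one possible pattern, not necessarily the lecture's):

```java
public class FileEntity {
    private String fileNm;
    private String path;
    private Long size;

    // JPA requires a no-arg constructor; protected keeps it out of client code
    protected FileEntity() {}

    // private: only the static factory below can build a fully initialized instance
    private FileEntity(String fileNm, String path, Long size) {
        this.fileNm = fileNm;
        this.path = path;
        this.size = size;
    }

    // creation method with no setter involved
    public static FileEntity createFile(String fileNm, String path, Long size) {
        return new FileEntity(fileNm, path, size);
    }

    public String getFileNm() { return fileNm; }
    public String getPath() { return path; }
    public Long getSize() { return size; }
}
```

Since the constructor is private and no setters exist, later state changes can only happen through intention-revealing business methods you add deliberately.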
-
[Unresolved] 자바 ORM 표준 JPA 프로그래밍 - 기본편
Error in the persistence.createEntityManagerFactory part
[Question template] 1. Is this question related to the lecture content? Yes 2. Is it absent from Inflearn's question board and the FAQ? Yes 3. Have you read the "how to ask a good question" manual? Yes [Question] I followed the lecture exactly, but this error occurs when I run it. I searched Google but could not find a solution, so I am asking here.
-
[Unresolved] 실전! 스프링 부트와 JPA 활용2 - API 개발과 성능 최적화
How to use DTOs
[Question template] 1. Is this question related to the lecture content? Yes 2. Is it absent from Inflearn's question board and the FAQ? Yes 3. Have you read the "how to ask a good question" manual? Yes [Question] A question came up while taking the course. You explain that, when exchanging data, using a DTO is better than exposing the entity directly, and you create the classes you need inside the controller. In real work, or in general, is it better to create separate DTO classes, or to define them inside the controller as needed, the way the lecture does?
-
[Unresolved] 실전! 스프링 부트와 JPA 활용1 - 웹 애플리케이션 개발
A question about ItemServiceTest!!
I wrote the ItemServiceTest code the same way as the sign-up test in MemberServiceTest, but I get the following error... I don't understand why they behave differently, so I'm asking.
-
[Unresolved] 자바 ORM 표준 JPA 프로그래밍 - 기본편
A question about DB design when using JPA!
Hello, instructor. As I understand it, most projects design the DB tables first. When using JPA, do you design the tables first and then write the entities to match, or do you design around the entities first? Thank you!
-
[Unresolved] 자바 ORM 표준 JPA 프로그래밍 - 기본편
DDD and entity mapping
Hello. When you map entities with JPA, do you map the DDD-style entity — that is, the domain object — directly? Or do you create JPA entities separate from the domain objects and map those? In the examples you map the domain object directly, but I'm curious whether practice differs.
-
[Resolved] 자바 ORM 표준 JPA 프로그래밍 - 기본편
registerFunction() cannot be used
1. Is this question related to the lecture content? (Yes) 2. Is it absent from Inflearn's question board and the FAQ? (Yes) 3. Have you read the "how to ask a good question" manual? (Yes) [Question] I cannot use registerFunction(). For reference, I'm on Hibernate 6.0.0.Alpha7. Just in case, I switched to 5.6.10.Final and then it worked. I've been using 6.0.0.Alpha7 throughout my JPA study and this is the first problem I've hit. What I'm wondering is: is this no longer how JPQL functions are registered these days? Looking at the H2Dialect class across versions, this part has changed a lot. So do people in practice stick to Final releases only? The lecture is from 2019, before this alpha existed, so I'm confused. I figured the latest version would be best. Also, is an alpha version an official release, or a version still under test?
-
[Unresolved] 실전! Querydsl
Returning a @OneToMany DTO with a projection in Querydsl
I learned about returning results with projections. Usually, with a bidirectional mapping, I only used fetchJoin() when I wanted to join and return a 1:N relation together with its parent. When the relationship is 1:N in the database but the mapping is unidirectional, is there a way to return a DTO using a Querydsl projection? Examples like https://bbuljj.github.io/querydsl/2021/05/17/jpa-querydsl-projection-list.html cover the bidirectional 1:N case, but I can't find one for the unidirectional case... I've tried several approaches and keep getting stuck. Do I need a library like jOOQ, or is there no way to solve this with Querydsl? For example, can a structure like TeamDto { String teamName; List<MemberDTO> members; } be produced directly with Querydsl?
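As far as I know, Querydsl can build this shape without a bidirectional mapping by using its GroupBy result transformer on a flat join: group the rows by the parent key and collect the child columns into a nested list projection. A hedged sketch only — it assumes hypothetical Q-types (member, team), a unidirectional Member-to-Team association, and the TeamDto/MemberDTO constructors from the question:

```java
import static com.querydsl.core.group.GroupBy.groupBy;
import static com.querydsl.core.group.GroupBy.list;

import com.querydsl.core.types.Projections;

List<TeamDto> teams = queryFactory
        .from(member)
        .join(member.team, team)
        .transform(groupBy(team.id).list(      // one TeamDto per distinct team
                Projections.constructor(TeamDto.class,
                        team.name,
                        list(Projections.constructor(MemberDTO.class, // children collected per team
                                member.username)))));
```

This runs as a single flat query and does the 1:N grouping in memory, so it needs neither fetchJoin() nor a bidirectional mapping; whether it fits depends on result-set size, since transform with groupBy cannot be paged safely on the parent side.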
-
[Unresolved] Spring Cloud로 개발하는 마이크로서비스 애플리케이션(MSA)
Spring Boot 2.7.x: this yml configuration works
Create bootstrap.yml exactly as the lecture shows, then add the snippet below to application.yml. A commenter also shared a way to wire up the path directly, but that didn't work for me, so I solved it this way.

spring:
  config:
    import:
      - classpath:/bootstrap.yml
-
[Unresolved] Spring Cloud로 개발하는 마이크로서비스 애플리케이션(MSA)
The Environment option does not appear in the Run/Debug Configuration
Hello. Perhaps because my IntelliJ or JDK version is different, the Environment option does not appear in the run/debug configuration screen below. I'm doing the Eureka exercise, and when running several services from one project I want to change the port via a VM option, but that option is missing, hence this question. My IntelliJ version is IntelliJ IDEA 2022.2 (Ultimate Edition) / Build #IU-222.3345.118, built on July 26, 2022. I couldn't find an answer by Googling~! Update: after clicking around I found it — under "Modify options" there is a separate "VM options" item... Is there perhaps an option to change the layout of the run/debug configuration window to match what's shown in the lecture?