- [Unresolved] Spring Cloud로 개발하는 마이크로서비스 애플리케이션(MSA)
Environment variables passed with docker run -e are not applied
Hello. I finished testing locally and ran into an issue while provisioning with Docker.

    # Docker image build
    docker build --tag msa-api-gateway:1.0 .

    # Docker container run
    docker run -d -p 8000:8000 --network msa-network \
      -e "spring.cloud.config.uri=http://config-server:8888" \
      -e "spring.rabbitmq.host=rabbitmq" \
      -e "eureka.client.serviceUrl.defaultZone=http://service-discovery:8761/" \
      --name api-gateway \
      msa-api-gateway:1.0

After building and running the api-gateway image with the commands above, the log showed a connection-failed issue, so I looked into the cause: the value passed with -e, eureka.client.serviceUrl.defaultZone=http://service-discovery:8761/, is not applied; instead the localhost:8761/eureka defined in the existing application.yml is what gets called.

(When I edit defaultZone directly in application.yml, it works fine. To double-check, I also set service-discovery in application.yml and localhost via -e, intending to trigger the error on purpose, but again the -e value was ignored and the app ran normally.)

For reference, application.yml is kept in a separate GitHub repo and fetched through the Config Server. Is there some kind of precedence at play? Any help would be appreciated.

(Added) The same thing happens with spring.rabbitmq.host: application.yml has 127.0.0.1, and running with -e spring.rabbitmq.host=rabbitmq is not reflected either. It only works after I edit application.yml to rabbitmq directly.

This is api-gateway-server.yml, managed in the Config GitHub repo:

    server:
      port: 8000

    eureka:
      client:
        register-with-eureka: true
        fetch-registry: true
        service-url:
          defaultZone: http://service-discovery:8761/eureka # http://localhost:8761/eureka

    spring:
      application:
        name: api-gateway-server
      rabbitmq: # RabbitMQ for Spring Cloud Bus
        host: rabbitmq # 127.0.0.1
        port: 5672 # rabbitmq admin port is 15672
        username: guest
        password: guest
      cloud:
        gateway:
          default-filters:
            - name: GlobalFilter
              args:
                baseMessage: Spring Cloud Gateway Global Filter
                preLogger: true
                postLogger: true
          routes:
            - id: user-service
              uri: lb://USER-SERVICE
              predicates:
                - Path=/user-service/login # login
                - Method=POST
              filters:
                - RemoveRequestHeader=Cookie
                - RewritePath=/user-service/(?<segment>.*), /$\{segment}
            - id: user-service
              uri: lb://USER-SERVICE
              predicates:
                - Path=/user-service/users # sign-up
                - Method=POST
              filters:
                - RemoveRequestHeader=Cookie
                - RewritePath=/user-service/(?<segment>.*), /$\{segment}
            - id: user-service
              uri: lb://USER-SERVICE
              predicates:
                - Path=/user-service/actuator/**
                - Method=GET, POST
              filters:
                - RemoveRequestHeader=Cookie
                - RewritePath=/user-service/(?<segment>.*), /$\{segment}
            - id: user-service
              uri: lb://USER-SERVICE
              predicates:
                - Path=/user-service/**
                - Method=GET
              filters:
                - RemoveRequestHeader=Cookie
                - RewritePath=/user-service/(?<segment>.*), /$\{segment}
                - AuthorizationHeaderFilter
            - id: order-service
              uri: lb://ORDER-SERVICE
              predicates:
                - Path=/order-service/**
                - Method=GET,POST
              filters:
                - RemoveRequestHeader=Cookie
                - RewritePath=/order-service/(?<segment>.*), /$\{segment}
            - id: catalog-service
              uri: lb://CATALOG-SERVICE
              predicates:
                - Path=/catalog-service/**
                - Method=GET,POST
              filters:
                - RemoveRequestHeader=Cookie
                - RewritePath=/catalog-service/(?<segment>.*), /$\{segment}

    management: # Spring Boot Actuator options (monitoring for the running application)
      endpoints:
        web:
          exposure:
            include: refresh, health, beans, httptrace, info, busrefresh

    token: # User-service Spring Security auth token
      expiration_time: 86400000
      secret: userTokenForMSASkeletonTemplateHS256LongerThan256Bytes

Just in case, here is the Docker error log.
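(Before the log, a side note: a quick way to sanity-check whether the -e values even reach the container, and the relaxed-binding upper-case form of the same keys. This is a rough, untested sketch; the upper-case names are derived mechanically and the docker lines are commented out because they need the running container.)

```shell
# Spring Boot's relaxed binding maps a dotted property key to an upper-case,
# underscore-separated environment variable name. Dotted keys passed via -e are
# sometimes not picked up as environment properties, so the upper-case form is
# worth trying. (Mechanical conversion; camelCase segments are simply upcased.)
to_env_name() { printf '%s\n' "$1" | tr 'a-z.-' 'A-Z__'; }

to_env_name "eureka.client.serviceUrl.defaultZone"   # EUREKA_CLIENT_SERVICEURL_DEFAULTZONE
to_env_name "spring.rabbitmq.host"                   # SPRING_RABBITMQ_HOST

# Not executed here -- requires the container:
#   docker run -d -p 8000:8000 --network msa-network \
#     -e "EUREKA_CLIENT_SERVICEURL_DEFAULTZONE=http://service-discovery:8761/" \
#     ... msa-api-gateway:1.0
#   docker exec api-gateway env | grep EUREKA   # confirm the value is inside the container
```

If `docker exec api-gateway env` shows the value but the app still calls localhost, the variable is reaching the container and the shadowing is happening inside Spring's property resolution rather than in Docker.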
____ _ __ _ _ /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ \\/ ___)| |_)| | | | | || (_| | ) ) ) ) ' |____| .__|_| |_|_| |_\__, | / / / / =========|_|==============|___/=/_/_/_/ :: Spring Boot :: (v2.7.5) 2022-12-28 20:50:19.284 INFO 1 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Fetching config from server at : http://config-server:8888 2022-12-28 20:50:19.697 INFO 1 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Located environment: name=api-gateway-server, profiles=[default], label=null, version=32b9a3dfd58bb8bb5ef5e7a412b2433fc2d94791, state=null 2022-12-28 20:50:19.699 INFO 1 --- [ main] b.c.PropertySourceBootstrapConfiguration : Located property source: [BootstrapPropertySource {name='bootstrapProperties-configClient'}, BootstrapPropertySource {name='bootstrapProperties-https://github.daumkakao.com/ch-svc-dev2/template-cloud-config/api-gateway-server/api-gateway-server.yml'}] 2022-12-28 20:50:19.708 INFO 1 --- [ main] c.a.ApiGatewayApplication : No active profile set, falling back to 1 default profile: "default" 2022-12-28 20:50:20.482 INFO 1 --- [ main] faultConfiguringBeanFactoryPostProcessor : No bean named 'errorChannel' has been explicitly defined. Therefore, a default PublishSubscribeChannel will be created. 2022-12-28 20:50:20.492 INFO 1 --- [ main] faultConfiguringBeanFactoryPostProcessor : No bean named 'integrationHeaderChannelRegistry' has been explicitly defined. Therefore, a default DefaultHeaderChannelRegistry will be created. 
2022-12-28 20:50:20.571 INFO 1 --- [ main] o.s.cloud.context.scope.GenericScope : BeanFactory id=5f2180db-75bc-34c4-8dcd-bd49c1d7cf8d 2022-12-28 20:50:20.633 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.integration.config.IntegrationManagementConfiguration' of type [org.springframework.integration.config.IntegrationManagementConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-28 20:50:20.637 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'integrationChannelResolver' of type [org.springframework.integration.support.channel.BeanFactoryChannelResolver] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-28 20:50:20.646 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.client.loadbalancer.reactive.LoadBalancerBeanPostProcessorAutoConfiguration' of type [org.springframework.cloud.client.loadbalancer.reactive.LoadBalancerBeanPostProcessorAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-28 20:50:20.647 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.client.loadbalancer.reactive.LoadBalancerBeanPostProcessorAutoConfiguration$ReactorDeferringLoadBalancerFilterConfig' of type [org.springframework.cloud.client.loadbalancer.reactive.LoadBalancerBeanPostProcessorAutoConfiguration$ReactorDeferringLoadBalancerFilterConfig] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-28 20:50:20.648 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'reactorDeferringLoadBalancerExchangeFilterFunction' of type [org.springframework.cloud.client.loadbalancer.reactive.DeferringLoadBalancerExchangeFilterFunction] is not eligible for 
getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-28 20:50:21.347 INFO 1 --- [ main] o.s.c.g.r.RouteDefinitionRouteLocator : Loaded RoutePredicateFactory [After] 2022-12-28 20:50:21.347 INFO 1 --- [ main] o.s.c.g.r.RouteDefinitionRouteLocator : Loaded RoutePredicateFactory [Before] 2022-12-28 20:50:21.347 INFO 1 --- [ main] o.s.c.g.r.RouteDefinitionRouteLocator : Loaded RoutePredicateFactory [Between] 2022-12-28 20:50:21.347 INFO 1 --- [ main] o.s.c.g.r.RouteDefinitionRouteLocator : Loaded RoutePredicateFactory [Cookie] 2022-12-28 20:50:21.347 INFO 1 --- [ main] o.s.c.g.r.RouteDefinitionRouteLocator : Loaded RoutePredicateFactory [Header] 2022-12-28 20:50:21.348 INFO 1 --- [ main] o.s.c.g.r.RouteDefinitionRouteLocator : Loaded RoutePredicateFactory [Host] 2022-12-28 20:50:21.351 INFO 1 --- [ main] o.s.c.g.r.RouteDefinitionRouteLocator : Loaded RoutePredicateFactory [Method] 2022-12-28 20:50:21.352 INFO 1 --- [ main] o.s.c.g.r.RouteDefinitionRouteLocator : Loaded RoutePredicateFactory [Path] 2022-12-28 20:50:21.352 INFO 1 --- [ main] o.s.c.g.r.RouteDefinitionRouteLocator : Loaded RoutePredicateFactory [Query] 2022-12-28 20:50:21.352 INFO 1 --- [ main] o.s.c.g.r.RouteDefinitionRouteLocator : Loaded RoutePredicateFactory [ReadBody] 2022-12-28 20:50:21.352 INFO 1 --- [ main] o.s.c.g.r.RouteDefinitionRouteLocator : Loaded RoutePredicateFactory [RemoteAddr] 2022-12-28 20:50:21.352 INFO 1 --- [ main] o.s.c.g.r.RouteDefinitionRouteLocator : Loaded RoutePredicateFactory [XForwardedRemoteAddr] 2022-12-28 20:50:21.352 INFO 1 --- [ main] o.s.c.g.r.RouteDefinitionRouteLocator : Loaded RoutePredicateFactory [Weight] 2022-12-28 20:50:21.358 INFO 1 --- [ main] o.s.c.g.r.RouteDefinitionRouteLocator : Loaded RoutePredicateFactory [CloudFoundryRouteService] 2022-12-28 20:50:21.824 INFO 1 --- [ main] o.s.b.a.e.web.EndpointLinksResolver : Exposing 6 endpoint(s) beneath base path '/actuator' 2022-12-28 20:50:22.059 INFO 1 --- [ main] 
o.s.c.s.m.DirectWithAttributesChannel : Channel 'application-1.springCloudBusInput' has 1 subscriber(s). 2022-12-28 20:50:22.142 INFO 1 --- [ main] DiscoveryClientOptionalArgsConfiguration : Eureka HTTP Client uses RestTemplate. 2022-12-28 20:50:22.221 WARN 1 --- [ main] iguration$LoadBalancerCaffeineWarnLogger : Spring Cloud LoadBalancer is currently working with the default cache. While this cache implementation is useful for development and tests, it's recommended to use Caffeine cache in production.You can switch to using Caffeine cache, by adding it and org.springframework.cache.caffeine.CaffeineCacheManager to the classpath. 2022-12-28 20:50:22.274 INFO 1 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel 2022-12-28 20:50:22.275 INFO 1 --- [ main] o.s.i.channel.PublishSubscribeChannel : Channel 'application-1.errorChannel' has 1 subscriber(s). 2022-12-28 20:50:22.275 INFO 1 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started bean '_org.springframework.integration.errorLogger' 2022-12-28 20:50:22.284 INFO 1 --- [ main] o.s.c.n.eureka.InstanceInfoFactory : Setting initial instance status as: STARTING 2022-12-28 20:50:22.353 INFO 1 --- [ main] com.netflix.discovery.DiscoveryClient : Initializing Eureka in region us-east-1 2022-12-28 20:50:22.358 INFO 1 --- [ main] c.n.d.s.r.aws.ConfigClusterResolver : Resolving eureka endpoints via configuration 2022-12-28 20:50:22.388 INFO 1 --- [ main] com.netflix.discovery.DiscoveryClient : Disable delta property : false 2022-12-28 20:50:22.388 INFO 1 --- [ main] com.netflix.discovery.DiscoveryClient : Single vip registry refresh property : null 2022-12-28 20:50:22.388 INFO 1 --- [ main] com.netflix.discovery.DiscoveryClient : Force full registry fetch : false 2022-12-28 20:50:22.388 INFO 1 --- [ main] com.netflix.discovery.DiscoveryClient : Application is null : false 2022-12-28 20:50:22.388 INFO 1 
--- [ main] com.netflix.discovery.DiscoveryClient : Registered Applications size is zero : true 2022-12-28 20:50:22.388 INFO 1 --- [ main] com.netflix.discovery.DiscoveryClient : Application version is -1: true 2022-12-28 20:50:22.388 INFO 1 --- [ main] com.netflix.discovery.DiscoveryClient : Getting all instance registry info from the eureka server 2022-12-28 20:50:22.542 INFO 1 --- [ main] c.n.d.s.t.d.RedirectingEurekaHttpClient : Request execution error. endpoint=DefaultEndpoint{ serviceUrl='http://localhost:8761/eureka/}, exception=I/O error on GET request for "http://localhost:8761/eureka/apps/": Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused; nested exception is org.apache.http.conn.HttpHostConnectException: Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused stacktrace=org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://localhost:8761/eureka/apps/": Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused; nested exception is org.apache.http.conn.HttpHostConnectException: Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:785) at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:711) at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:602) at org.springframework.cloud.netflix.eureka.http.RestTemplateEurekaHttpClient.getApplicationsInternal(RestTemplateEurekaHttpClient.java:145) at org.springframework.cloud.netflix.eureka.http.RestTemplateEurekaHttpClient.getApplications(RestTemplateEurekaHttpClient.java:135) at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator$6.execute(EurekaHttpClientDecorator.java:137) at com.netflix.discovery.shared.transport.decorator.RedirectingEurekaHttpClient.executeOnNewServer(RedirectingEurekaHttpClient.java:121) at 
com.netflix.discovery.shared.transport.decorator.RedirectingEurekaHttpClient.execute(RedirectingEurekaHttpClient.java:80) at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.getApplications(EurekaHttpClientDecorator.java:134) at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator$6.execute(EurekaHttpClientDecorator.java:137) at com.netflix.discovery.shared.transport.decorator.RetryableEurekaHttpClient.execute(RetryableEurekaHttpClient.java:120) at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.getApplications(EurekaHttpClientDecorator.java:134) at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator$6.execute(EurekaHttpClientDecorator.java:137) at com.netflix.discovery.shared.transport.decorator.SessionedEurekaHttpClient.execute(SessionedEurekaHttpClient.java:77) at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.getApplications(EurekaHttpClientDecorator.java:134) at com.netflix.discovery.DiscoveryClient.getAndStoreFullRegistry(DiscoveryClient.java:1101) at com.netflix.discovery.DiscoveryClient.fetchRegistry(DiscoveryClient.java:1014) at com.netflix.discovery.DiscoveryClient.<init>(DiscoveryClient.java:441) at com.netflix.discovery.DiscoveryClient.<init>(DiscoveryClient.java:283) at com.netflix.discovery.DiscoveryClient.<init>(DiscoveryClient.java:279) at org.springframework.cloud.netflix.eureka.CloudEurekaClient.<init>(CloudEurekaClient.java:66) at org.springframework.cloud.netflix.eureka.EurekaClientAutoConfiguration$RefreshableEurekaClientConfiguration.eurekaClient(EurekaClientAutoConfiguration.java:295) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:78) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154) at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:653) at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:638) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1352) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1195) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542) at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$1(AbstractBeanFactory.java:374) at org.springframework.cloud.context.scope.GenericScope$BeanLifecycleWrapper.getBean(GenericScope.java:376) at org.springframework.cloud.context.scope.GenericScope.get(GenericScope.java:179) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:371) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) at org.springframework.aop.target.SimpleBeanTargetSource.getTarget(SimpleBeanTargetSource.java:35) at org.springframework.cloud.netflix.eureka.serviceregistry.EurekaRegistration.getTargetObject(EurekaRegistration.java:127) at org.springframework.cloud.netflix.eureka.serviceregistry.EurekaRegistration.getEurekaClient(EurekaRegistration.java:115) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:78) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.springframework.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:282) at org.springframework.cloud.context.scope.GenericScope$LockedScopedProxyFactoryBean.invoke(GenericScope.java:485) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:763) at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:708) at org.springframework.cloud.netflix.eureka.serviceregistry.EurekaRegistration$$EnhancerBySpringCGLIB$$e271b02f.getEurekaClient(<generated>) at org.springframework.cloud.netflix.eureka.serviceregistry.EurekaServiceRegistry.maybeInitializeClient(EurekaServiceRegistry.java:54) at org.springframework.cloud.netflix.eureka.serviceregistry.EurekaServiceRegistry.register(EurekaServiceRegistry.java:38) at org.springframework.cloud.netflix.eureka.serviceregistry.EurekaAutoServiceRegistration.start(EurekaAutoServiceRegistration.java:83) at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:178) at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:54) at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:356) at java.base/java.lang.Iterable.forEach(Iterable.java:75) at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:155) at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:123) at 
org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:935) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:586) at org.springframework.boot.web.reactive.context.ReactiveWebServerApplicationContext.refresh(ReactiveWebServerApplicationContext.java:66) at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:734) at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:408) at org.springframework.boot.SpringApplication.run(SpringApplication.java:308) at org.springframework.boot.SpringApplication.run(SpringApplication.java:1306) at org.springframework.boot.SpringApplication.run(SpringApplication.java:1295) at com.apigatewayserver.ApiGatewayApplication.main(ApiGatewayApplication.java:20) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:78) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) at org.springframework.boot.loader.Launcher.launch(Launcher.java:108) at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:65) Caused by: org.apache.http.conn.HttpHostConnectException: Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56) at org.springframework.http.client.HttpComponentsClientHttpRequest.executeInternal(HttpComponentsClientHttpRequest.java:87) at org.springframework.http.client.AbstractBufferingClientHttpRequest.executeInternal(AbstractBufferingClientHttpRequest.java:48) at org.springframework.http.client.AbstractClientHttpRequest.execute(AbstractClientHttpRequest.java:66) at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:776) ... 76 more Caused by: java.net.ConnectException: Connection refused at java.base/sun.nio.ch.Net.connect0(Native Method) at java.base/sun.nio.ch.Net.connect(Net.java:576) at java.base/sun.nio.ch.Net.connect(Net.java:565) at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:588) at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:332) at java.base/java.net.Socket.connect(Socket.java:631) at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) ... 
89 more 2022-12-28 20:50:22.543 WARN 1 --- [ main] c.n.d.s.t.d.RetryableEurekaHttpClient : Request execution failed with message: I/O error on GET request for "http://localhost:8761/eureka/apps/": Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused; nested exception is org.apache.http.conn.HttpHostConnectException: Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused 2022-12-28 20:50:22.543 INFO 1 --- [ main] com.netflix.discovery.DiscoveryClient : DiscoveryClient_API-GATEWAY-SERVER/30a30ddf419f:api-gateway-server:8000 - was unable to refresh its cache! This periodic background refresh will be retried in 30 seconds. status = Cannot execute request on any known server stacktrace = com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server at com.netflix.discovery.shared.transport.decorator.RetryableEurekaHttpClient.execute(RetryableEurekaHttpClient.java:112) at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.getApplications(EurekaHttpClientDecorator.java:134) at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator$6.execute(EurekaHttpClientDecorator.java:137) at com.netflix.discovery.shared.transport.decorator.SessionedEurekaHttpClient.execute(SessionedEurekaHttpClient.java:77) at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.getApplications(EurekaHttpClientDecorator.java:134) at com.netflix.discovery.DiscoveryClient.getAndStoreFullRegistry(DiscoveryClient.java:1101) at com.netflix.discovery.DiscoveryClient.fetchRegistry(DiscoveryClient.java:1014) at com.netflix.discovery.DiscoveryClient.<init>(DiscoveryClient.java:441) at com.netflix.discovery.DiscoveryClient.<init>(DiscoveryClient.java:283) at com.netflix.discovery.DiscoveryClient.<init>(DiscoveryClient.java:279) at org.springframework.cloud.netflix.eureka.CloudEurekaClient.<init>(CloudEurekaClient.java:66) at 
org.springframework.cloud.netflix.eureka.EurekaClientAutoConfiguration$RefreshableEurekaClientConfiguration.eurekaClient(EurekaClientAutoConfiguration.java:295) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:78) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154) at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:653) at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:638) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1352) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1195) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542) at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$1(AbstractBeanFactory.java:374) at org.springframework.cloud.context.scope.GenericScope$BeanLifecycleWrapper.getBean(GenericScope.java:376) at org.springframework.cloud.context.scope.GenericScope.get(GenericScope.java:179) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:371) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) at 
org.springframework.aop.target.SimpleBeanTargetSource.getTarget(SimpleBeanTargetSource.java:35) at org.springframework.cloud.netflix.eureka.serviceregistry.EurekaRegistration.getTargetObject(EurekaRegistration.java:127) at org.springframework.cloud.netflix.eureka.serviceregistry.EurekaRegistration.getEurekaClient(EurekaRegistration.java:115) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:78) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.springframework.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:282) at org.springframework.cloud.context.scope.GenericScope$LockedScopedProxyFactoryBean.invoke(GenericScope.java:485) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:763) at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:708) at org.springframework.cloud.netflix.eureka.serviceregistry.EurekaRegistration$$EnhancerBySpringCGLIB$$e271b02f.getEurekaClient(<generated>) at org.springframework.cloud.netflix.eureka.serviceregistry.EurekaServiceRegistry.maybeInitializeClient(EurekaServiceRegistry.java:54) at org.springframework.cloud.netflix.eureka.serviceregistry.EurekaServiceRegistry.register(EurekaServiceRegistry.java:38) at org.springframework.cloud.netflix.eureka.serviceregistry.EurekaAutoServiceRegistration.start(EurekaAutoServiceRegistration.java:83) at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:178) at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:54) at 
org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:356) at java.base/java.lang.Iterable.forEach(Iterable.java:75) at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:155) at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:123) at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:935) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:586) at org.springframework.boot.web.reactive.context.ReactiveWebServerApplicationContext.refresh(ReactiveWebServerApplicationContext.java:66) at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:734) at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:408) at org.springframework.boot.SpringApplication.run(SpringApplication.java:308) at org.springframework.boot.SpringApplication.run(SpringApplication.java:1306) at org.springframework.boot.SpringApplication.run(SpringApplication.java:1295) at com.apigatewayserver.ApiGatewayApplication.main(ApiGatewayApplication.java:20) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:78) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) at org.springframework.boot.loader.Launcher.launch(Launcher.java:108) at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:65) 2022-12-28 20:50:22.543 INFO 1 --- [ main] com.netflix.discovery.DiscoveryClient : Initial 
registry fetch from primary servers failed 2022-12-28 20:50:22.544 WARN 1 --- [ main] com.netflix.discovery.DiscoveryClient : Using default backup registry implementation which does not do anything. 2022-12-28 20:50:22.544 INFO 1 --- [ main] com.netflix.discovery.DiscoveryClient : Initial registry fetch from backup servers failed 2022-12-28 20:50:22.548 INFO 1 --- [ main] com.netflix.discovery.DiscoveryClient : Starting heartbeat executor: renew interval is: 30 2022-12-28 20:50:22.555 INFO 1 --- [ main] c.n.discovery.InstanceInfoReplicator : InstanceInfoReplicator onDemand update allowed rate per min is 4 2022-12-28 20:50:22.564 INFO 1 --- [ main] com.netflix.discovery.DiscoveryClient : Discovery Client initialized at timestamp 1672260622563 with initial instances count: 0 2022-12-28 20:50:22.566 INFO 1 --- [ main] o.s.c.n.e.s.EurekaServiceRegistry : Registering application API-GATEWAY-SERVER with eureka with status UP 2022-12-28 20:50:22.568 INFO 1 --- [ main] com.netflix.discovery.DiscoveryClient : Saw local status change event StatusChangeEvent [timestamp=1672260622568, current=UP, previous=STARTING] 2022-12-28 20:50:22.569 INFO 1 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_API-GATEWAY-SERVER/30a30ddf419f:api-gateway-server:8000: registering service... 2022-12-28 20:50:22.613 INFO 1 --- [nfoReplicator-0] c.n.d.s.t.d.RedirectingEurekaHttpClient : Request execution error. 
endpoint=DefaultEndpoint{ serviceUrl='http://localhost:8761/eureka/}, exception=I/O error on POST request for "http://localhost:8761/eureka/apps/API-GATEWAY-SERVER": Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused; nested exception is org.apache.http.conn.HttpHostConnectException: Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused
stacktrace=org.springframework.web.client.ResourceAccessException: I/O error on POST request for "http://localhost:8761/eureka/apps/API-GATEWAY-SERVER": Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused; nested exception is org.apache.http.conn.HttpHostConnectException: Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused
	at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:785)
	at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:711)
	at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:602)
	at org.springframework.cloud.netflix.eureka.http.RestTemplateEurekaHttpClient.register(RestTemplateEurekaHttpClient.java:77)
	at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator$1.execute(EurekaHttpClientDecorator.java:59)
	at com.netflix.discovery.shared.transport.decorator.RedirectingEurekaHttpClient.executeOnNewServer(RedirectingEurekaHttpClient.java:121)
	at com.netflix.discovery.shared.transport.decorator.RedirectingEurekaHttpClient.execute(RedirectingEurekaHttpClient.java:80)
	at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.register(EurekaHttpClientDecorator.java:56)
	at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator$1.execute(EurekaHttpClientDecorator.java:59)
	at com.netflix.discovery.shared.transport.decorator.RetryableEurekaHttpClient.execute(RetryableEurekaHttpClient.java:120)
	at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.register(EurekaHttpClientDecorator.java:56)
	at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator$1.execute(EurekaHttpClientDecorator.java:59)
	at com.netflix.discovery.shared.transport.decorator.SessionedEurekaHttpClient.execute(SessionedEurekaHttpClient.java:77)
	at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.register(EurekaHttpClientDecorator.java:56)
	at com.netflix.discovery.DiscoveryClient.register(DiscoveryClient.java:876)
	at com.netflix.discovery.InstanceInfoReplicator.run(InstanceInfoReplicator.java:121)
	at com.netflix.discovery.InstanceInfoReplicator$1.run(InstanceInfoReplicator.java:101)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:831)
Caused by: org.apache.http.conn.HttpHostConnectException: Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused
	at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156)
	at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376)
	at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393)
	at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
	at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
	at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
	at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
	at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
	at org.springframework.http.client.HttpComponentsClientHttpRequest.executeInternal(HttpComponentsClientHttpRequest.java:87)
	at org.springframework.http.client.AbstractBufferingClientHttpRequest.executeInternal(AbstractBufferingClientHttpRequest.java:48)
	at org.springframework.http.client.AbstractClientHttpRequest.execute(AbstractClientHttpRequest.java:66)
	at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:776)
	... 22 more
Caused by: java.net.ConnectException: Connection refused
	at java.base/sun.nio.ch.Net.connect0(Native Method)
	at java.base/sun.nio.ch.Net.connect(Net.java:576)
	at java.base/sun.nio.ch.Net.connect(Net.java:565)
	at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:588)
	at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:332)
	at java.base/java.net.Socket.connect(Socket.java:631)
	at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75)
	at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
	... 35 more
2022-12-28 20:50:22.613 WARN 1 --- [nfoReplicator-0] c.n.d.s.t.d.RetryableEurekaHttpClient : Request execution failed with message: I/O error on POST request for "http://localhost:8761/eureka/apps/API-GATEWAY-SERVER": Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused; nested exception is org.apache.http.conn.HttpHostConnectException: Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused
2022-12-28 20:50:22.616 WARN 1 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_API-GATEWAY-SERVER/30a30ddf419f:api-gateway-server:8000 - registration failed Cannot execute request on any known server
com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server
	at com.netflix.discovery.shared.transport.decorator.RetryableEurekaHttpClient.execute(RetryableEurekaHttpClient.java:112) ~[eureka-client-1.10.17.jar!/:1.10.17]
	(remaining frames identical to the registration stack trace above)
2022-12-28 20:50:22.617 WARN 1 --- [nfoReplicator-0] c.n.discovery.InstanceInfoReplicator : There was a problem with the instance info replicator
com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server
	(same TransportException stack trace as directly above)
2022-12-28 20:50:22.772 INFO 1 --- [ main] c.s.b.r.p.RabbitExchangeQueueProvisioner : declaring queue for inbound: springCloudBus.anonymous.cBaG6SL1Qg64rsAotJWA4A, bound to: springCloudBus
2022-12-28 20:50:22.775 INFO 1 --- [ main] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [127.0.0.1:5672]
2022-12-28 20:50:22.781 INFO 1 --- [ main] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [127.0.0.1:5672]
2022-12-28 20:50:22.783 INFO 1 --- [ main] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [127.0.0.1:5672]
2022-12-28 20:50:22.801 INFO 1 --- [ main] o.s.c.stream.binder.BinderErrorChannel : Channel 'springCloudBus.anonymous.cBaG6SL1Qg64rsAotJWA4A.errors' has 1 subscriber(s).
2022-12-28 20:50:22.803 INFO 1 --- [ main] o.s.c.stream.binder.BinderErrorChannel : Channel 'springCloudBus.anonymous.cBaG6SL1Qg64rsAotJWA4A.errors' has 2 subscriber(s).
2022-12-28 20:50:22.804 INFO 1 --- [ main] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [127.0.0.1:5672]
2022-12-28 20:50:22.804 INFO 1 --- [ main] o.s.a.r.l.SimpleMessageListenerContainer : Broker not available; cannot force queue declarations during start: java.net.ConnectException: Connection refused
2022-12-28 20:50:22.814 INFO 1 --- [g64rsAotJWA4A-1] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [127.0.0.1:5672]
2022-12-28 20:50:22.815 INFO 1 --- [ main] o.s.i.a.i.AmqpInboundChannelAdapter : started bean 'inbound.springCloudBus.anonymous.cBaG6SL1Qg64rsAotJWA4A'
2022-12-28 20:50:22.931 INFO 1 --- [ main] o.s.b.web.embedded.netty.NettyWebServer : Netty started on port 8000
2022-12-28 20:50:22.934 INFO 1 --- [ main] .s.c.n.e.s.EurekaAutoServiceRegistration : Updating port to 8000
2022-12-28 20:50:22.969 INFO 1 --- [ main] c.a.ApiGatewayApplication : Started ApiGatewayApplication in 4.45 seconds (JVM running for 4.862)
2022-12-28 20:50:27.878 WARN 1 --- [g64rsAotJWA4A-1] o.s.a.r.l.SimpleMessageListenerContainer : Consumer raised exception, processing can restart if the connection factory supports it. Exception summary: org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused
2022-12-28 20:50:27.886 INFO 1 --- [g64rsAotJWA4A-1] o.s.a.r.l.SimpleMessageListenerContainer : Restarting Consumer@748d2277: tags=[[]], channel=null, acknowledgeMode=AUTO local queue size=0
2022-12-28 20:50:27.893 INFO 1 --- [g64rsAotJWA4A-2] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [127.0.0.1:5672]
(the same "Consumer raised exception / Restarting Consumer / Attempting to connect to: [127.0.0.1:5672]" cycle repeats roughly every five seconds, at 20:50:32, 20:50:38, 20:50:43, and 20:50:48, for Consumer@419e8443, Consumer@69ca4bfc, Consumer@3e206044, and Consumer@46628e2f)
2022-12-28 20:50:52.551 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Disable delta property : false
2022-12-28 20:50:52.551 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Single vip registry refresh property : null
2022-12-28 20:50:52.551 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Force full registry fetch : false
2022-12-28 20:50:52.551 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Application is null : false
2022-12-28 20:50:52.551 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Registered Applications size is zero : true
2022-12-28 20:50:52.552 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Application version is -1: true
2022-12-28 20:50:52.552 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Getting all instance registry info from the eureka server
2022-12-28 20:50:52.582 INFO 1 --- [tbeatExecutor-0] c.n.d.s.t.d.RedirectingEurekaHttpClient : Request execution error.
endpoint=DefaultEndpoint{ serviceUrl='http://localhost:8761/eureka/}, exception=I/O error on PUT request for "http://localhost:8761/eureka/apps/API-GATEWAY-SERVER/30a30ddf419f:api-gateway-server:8000": Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused; nested exception is org.apache.http.conn.HttpHostConnectException: Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused
stacktrace=org.springframework.web.client.ResourceAccessException: I/O error on PUT request for "http://localhost:8761/eureka/apps/API-GATEWAY-SERVER/30a30ddf419f:api-gateway-server:8000": Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused; nested exception is org.apache.http.conn.HttpHostConnectException: Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused
	(same Connection refused stack trace as the registration failure above, entering through RestTemplateEurekaHttpClient.sendHeartBeat and DiscoveryClient.renew)
2022-12-28 20:50:52.582 INFO 1 --- [freshExecutor-0] c.n.d.s.t.d.RedirectingEurekaHttpClient : Request execution error.
endpoint=DefaultEndpoint{ serviceUrl='http://localhost:8761/eureka/}, exception=I/O error on GET request for "http://localhost:8761/eureka/apps/": Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused; nested exception is org.apache.http.conn.HttpHostConnectException: Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused
stacktrace=org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://localhost:8761/eureka/apps/": Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused; nested exception is org.apache.http.conn.HttpHostConnectException: Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused
	(same stack trace again, entering through RestTemplateEurekaHttpClient.getApplications, DiscoveryClient.fetchRegistry, and DiscoveryClient.refreshRegistry)
2022-12-28 20:50:52.582 WARN 1 --- [tbeatExecutor-0] c.n.d.s.t.d.RetryableEurekaHttpClient : Request execution failed with message: I/O error on PUT request for "http://localhost:8761/eureka/apps/API-GATEWAY-SERVER/30a30ddf419f:api-gateway-server:8000": Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused; nested exception is org.apache.http.conn.HttpHostConnectException: Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused
2022-12-28 20:50:52.582 WARN 1 --- [freshExecutor-0] c.n.d.s.t.d.RetryableEurekaHttpClient : Request execution failed with message: I/O error on GET request for "http://localhost:8761/eureka/apps/": Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused; nested exception is org.apache.http.conn.HttpHostConnectException: Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused
2022-12-28 20:50:52.582 INFO 1 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_API-GATEWAY-SERVER/30a30ddf419f:api-gateway-server:8000 - was unable to refresh its cache! This periodic background refresh will be retried in 30 seconds.
status = Cannot execute request on any known server
stacktrace = com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server
	at com.netflix.discovery.shared.transport.decorator.RetryableEurekaHttpClient.execute(RetryableEurekaHttpClient.java:112)
	(remaining frames through EurekaHttpClientDecorator.getApplications, DiscoveryClient.getAndStoreFullRegistry, DiscoveryClient.fetchRegistry, DiscoveryClient.refreshRegistry, and the executor threads)
2022-12-28 20:50:52.582 ERROR 1 --- [tbeatExecutor-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_API-GATEWAY-SERVER/30a30ddf419f:api-gateway-server:8000 - was unable to send heartbeat!
com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server
	(same stack trace, entering through EurekaHttpClientDecorator.sendHeartBeat and DiscoveryClient.renew)
2022-12-28 20:50:52.624 INFO 1 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_API-GATEWAY-SERVER/30a30ddf419f:api-gateway-server:8000: registering service...
2022-12-28 20:50:52.636 INFO 1 --- [nfoReplicator-0] c.n.d.s.t.d.RedirectingEurekaHttpClient : Request execution error.
endpoint=DefaultEndpoint{ serviceUrl='http://localhost:8761/eureka/}, exception=I/O error on POST request for "http://localhost:8761/eureka/apps/API-GATEWAY-SERVER": Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused; nested exception is org.apache.http.conn.HttpHostConnectException: Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused
stacktrace=org.springframework.web.client.ResourceAccessException: I/O error on POST request for "http://localhost:8761/eureka/apps/API-GATEWAY-SERVER": Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused; nested exception is org.apache.http.conn.HttpHostConnectException: Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused
	(same registration stack trace as the first failure above)
2022-12-28 20:50:52.637 WARN 1 --- [nfoReplicator-0] c.n.d.s.t.d.RetryableEurekaHttpClient : Request execution failed with message: I/O error on POST request for "http://localhost:8761/eureka/apps/API-GATEWAY-SERVER": Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused; nested exception is org.apache.http.conn.HttpHostConnectException: Connect to localhost:8761 [localhost/127.0.0.1] failed: Connection refused
2022-12-28 20:50:52.637 WARN 1 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_API-GATEWAY-SERVER/30a30ddf419f:api-gateway-server:8000 - registration failed Cannot execute request on any known server
com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server
	(same TransportException stack trace as above; log ends mid-trace here)
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) ~[na:na] at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135) ~[na:na] at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[na:na] at java.base/java.lang.Thread.run(Thread.java:831) ~[na:na] 2022-12-28 20:50:52.638 WARN 1 --- [nfoReplicator-0] c.n.discovery.InstanceInfoReplicator : There was a problem with the instance info replicator com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server at com.netflix.discovery.shared.transport.decorator.RetryableEurekaHttpClient.execute(RetryableEurekaHttpClient.java:112) ~[eureka-client-1.10.17.jar!/:1.10.17] at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.register(EurekaHttpClientDecorator.java:56) ~[eureka-client-1.10.17.jar!/:1.10.17] at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator$1.execute(EurekaHttpClientDecorator.java:59) ~[eureka-client-1.10.17.jar!/:1.10.17] at com.netflix.discovery.shared.transport.decorator.SessionedEurekaHttpClient.execute(SessionedEurekaHttpClient.java:77) ~[eureka-client-1.10.17.jar!/:1.10.17] at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.register(EurekaHttpClientDecorator.java:56) ~[eureka-client-1.10.17.jar!/:1.10.17] at com.netflix.discovery.DiscoveryClient.register(DiscoveryClient.java:876) ~[eureka-client-1.10.17.jar!/:1.10.17] at com.netflix.discovery.InstanceInfoReplicator.run(InstanceInfoReplicator.java:121) ~[eureka-client-1.10.17.jar!/:1.10.17] at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[na:na] at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na] at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) ~[na:na] at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135) ~[na:na] at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[na:na] at java.base/java.lang.Thread.run(Thread.java:831) ~[na:na] 2022-12-28 20:50:53.205 WARN 1 --- [g64rsAotJWA4A-6] o.s.a.r.l.SimpleMessageListenerContainer : Consumer raised exception, processing can restart if the connection factory supports it. Exception summary: org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused 2022-12-28 20:50:53.206 INFO 1 --- [g64rsAotJWA4A-6] o.s.a.r.l.SimpleMessageListenerContainer : Restarting Consumer@55a62eea: tags=[[]], channel=null, acknowledgeMode=AUTO local queue size=0 2022-12-28 20:50:53.209 INFO 1 --- [g64rsAotJWA4A-7] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [127.0.0.1:5672] 2022-12-28 20:50:58.260 WARN 1 --- [g64rsAotJWA4A-7] o.s.a.r.l.SimpleMessageListenerContainer : Consumer raised exception, processing can restart if the connection factory supports it. Exception summary: org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused 2022-12-28 20:50:58.262 INFO 1 --- [g64rsAotJWA4A-7] o.s.a.r.l.SimpleMessageListenerContainer : Restarting Consumer@1f16b696: tags=[[]], channel=null, acknowledgeMode=AUTO local queue size=0 2022-12-28 20:50:58.267 INFO 1 --- [g64rsAotJWA4A-8] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [127.0.0.1:5672] 2022-12-28 20:51:03.361 WARN 1 --- [g64rsAotJWA4A-8] o.s.a.r.l.SimpleMessageListenerContainer : Consumer raised exception, processing can restart if the connection factory supports it. 
Exception summary: org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused 2022-12-28 20:51:03.362 INFO 1 --- [g64rsAotJWA4A-8] o.s.a.r.l.SimpleMessageListenerContainer : Restarting Consumer@2193ccd1: tags=[[]], channel=null, acknowledgeMode=AUTO local queue size=0 2022-12-28 20:51:03.362 INFO 1 --- [g64rsAotJWA4A-9] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [127.0.0.1:5672]
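A note on the question above (an assumption based on documented Spring Cloud behavior, not something the logs alone prove): property sources fetched from Spring Cloud Config Server are, by default, added with higher precedence than local configuration, including system properties and environment variables — which would explain why values passed with docker run -e appear to be ignored while editing the Git-managed yml works. The override behavior is controlled by flags like the sketch below; depending on the Spring Cloud version, these flags may need to live in the remote (Git) configuration rather than locally to take effect. The canonical environment-variable spelling is also uppercase with underscores (e.g. -e SPRING_RABBITMQ_HOST=rabbitmq), which is worth trying as well.

```yaml
# Hypothetical sketch: allow local system properties / env vars
# (e.g. docker run -e ...) to override properties served by
# Spring Cloud Config Server.
spring:
  cloud:
    config:
      allow-override: true               # remote source permits overriding
      override-system-properties: false  # env vars and -D flags win
```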
- [Unresolved] Spring Cloud로 개발하는 마이크로서비스 애플리케이션(MSA)
Kafka Connect won't start (from 8:41 in the video)
I'm following along exactly as shown, but it doesn't work and I can't figure out why. I was working in a separate folder I created on my D drive. When I run .\bin\windows\connect-distributed.bat .\etc\kafka\connect-distributed.properties I get the error below — what could be the cause?
Error: Could not find or load main class org.apache.kafka.connect.cli.ConnectDistributed
Caused by: java.lang.ClassNotFoundException: org.apache.kafka.connect.cli.ConnectDistributed
- [Unresolved] Spring Cloud로 개발하는 마이크로서비스 애플리케이션(MSA)
Cannot connect to the user-service h2-console
As the screenshot above shows, user-service is clearly up, so I try to connect with the configured settings, but it fails... and I don't know why. The application.yml in the user-service project seems to be configured the same way as in the lecture, but before the password even comes into play I just get a Not Found... I can't tell what I'm missing.

server:
  port: 0

spring:
  config:
    import:
      - classpath:/bootstrap.yml
  application:
    name: user-service
  rabbitmq:
    host: 127.0.0.1
    port: 5672
    username: guest
    password: guest
  h2:
    console:
      enabled: true
      settings:
        web-allow-others: true
      path: /h2-console
  datasource:
    driver-class-name: org.h2.Driver
    url: jdbc:h2:mem:testdb
    username: sa
    password:
  jpa:
    database-platform: org.hibernate.dialect.H2Dialect
    open-in-view: false
    hibernate:
      ddl-auto: create-drop
    properties:
      hibernate:
        # show_sql: true
        format_sql: true

eureka:
  instance:
    instance-id: ${spring.application.name}:${spring.application.instance_id:${random.value}}
  client:
    register-with-eureka: true
    fetch-registry: true
    service-url:
      defaultZone: http://127.0.0.1:8761/eureka

greeting:
  message: Welcome to the Simple E-commerce.

logging:
  level:
    com.example.userservice.client: DEBUG

management:
  endpoints:
    web:
      exposure:
        include: refresh, health, beans, busrefresh

#token:
#  expiration_time: 86400000
#  secret: user_token
- [Unresolved] Spring Cloud로 개발하는 마이크로서비스 애플리케이션(MSA)
org.eclipse.jgit.api.errors.TransportException cannot open git-upload-pack
While building the config server with Docker, I created a bridge network with docker network create. Not knowing that Docker uses the 172.x range internally by default, I had configured my gateway and subnet in the 192.168.x.x range. Because of that, when the server tried to fetch the config yml files I had pushed to GitHub, the following message kept appearing: org.eclipse.jgit.api.errors.TransportException ... cannot open git-upload-pack. Looking closely, right below it was Caused by: java.net.UnknownHostException: github.com — it could not resolve the host, which made me realize my network settings were blocking outbound traffic. After switching back to the 172 range, as in the lecture, everything worked normally. I struggled with this for several days, so if anyone else sees this error, take a close look at your network settings.
- [Unresolved] Spring Cloud로 개발하는 마이크로서비스 애플리케이션(MSA)
Gateway error
Hello. I registered two first-service instances with Eureka on random ports. When I send an API request with Postman, I get Connection refused — where should I look? The gateway's application.yml is shown above. The Eureka yml is shown above. And this is First-service.yml.
- [Unresolved] Spring Cloud로 개발하는 마이크로서비스 애플리케이션(MSA)
RandomPort
Even though I configured a random port, the port stays fixed at 0. I started one instance with the Run button in IntelliJ, and the other by running ./gradlew build and then java -jar first.jar in the IntelliJ terminal. I also added an instance-id. Just in case, I even restarted my laptop, but Eureka still shows the port fixed at 0. The image above is the Eureka server's application.yml.
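For reference, the client-side configuration typically shown in this course for random ports looks like the sketch below (names assumed from the lecture material, not verified against the asker's project). With server.port: 0, Eureka's dashboard can still list the port as 0; the unique instance-id is what keeps multiple random-port instances from overwriting each other in the registry:

```yaml
# Hypothetical sketch: random-port client registration.
server:
  port: 0   # ask the OS for a free port at startup

eureka:
  instance:
    # without a unique instance-id, multiple random-port instances
    # register under the same host:app:0 key and replace each other
    instance-id: ${spring.application.name}:${spring.application.instance_id:${random.value}}
  client:
    register-with-eureka: true
    fetch-registry: true
    service-url:
      defaultZone: http://127.0.0.1:8761/eureka
```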
- [Unresolved] Spring Cloud로 개발하는 마이크로서비스 애플리케이션(MSA)
Eureka LoadBalancing
Hello, I have three questions. 1) If two servers are registered in Eureka as shown, does Eureka perform the load balancing? Is this flow correct: Client -> sends request to Gateway -> Gateway asks Eureka for server info -> when multiple IPs are registered under one application name, Eureka load-balances and returns server info to the Gateway -> Gateway sends the request to the server it got from Eureka? 2) Both 8081 and 9091 are registered as shown above, but calling through the Gateway only ever reaches 8081. I started 8081 with the Run button in IntelliJ and 9091 through Edit Configurations with -Dserver.port=9091. The gateway's application.yml and first-service's application.yml are as shown. Logs appear for 8081, but nothing ever shows up for 9091. 3) Could it be that only 8081 responds because application.yml has the port set to 8081?
- [Unresolved] Spring Cloud로 개발하는 마이크로서비스 애플리케이션(MSA)
The filter is not working
- I used the code exactly as in the lecture, but no log appears. Stepping through with the debugger shows the request never passes through the filter — how can I fix this?
- Requests go through normally and the responses come back correctly for both first-service and second-service.
- [Unresolved] Spring Cloud로 개발하는 마이크로서비스 애플리케이션(MSA)
RouteLocatorBuilder
Hello. IntelliJ marks RouteLocatorBuilder with a red underline — is it safe to ignore this and continue?
- [Unresolved] Spring Cloud로 개발하는 마이크로서비스 애플리케이션(MSA)
Why does starting on port 9003 also touch port 9001?
I ran the following command in the terminal: mvn spring-boot:run -Dspring-boot.run.jvmArguments='-Dserver.prot=9003’ But port 9001 gets started as well, so the run fails with a duplicate-port error. Do I need to change the default port 9001?
- [Unresolved] Spring Cloud로 개발하는 마이크로서비스 애플리케이션(MSA)
Sharing for those using Gradle in the Zuul lecture
To use Zuul, the Spring Boot version reportedly has to be below 2.4, so I used 2.3.9.RELEASE (versions below 2.4 carry the RELEASE suffix). Also, with Spring Boot below 2.4 you cannot use Spring Cloud 2020.0.0 (it only supports 2.4.x and 2.5.x), so I applied the Hoxton release train instead. The build file below worked for me.

plugins {
    id 'java'
    id 'org.springframework.boot' version '2.3.9.RELEASE'
    id 'io.spring.dependency-management' version '1.0.11.RELEASE'
}

group = 'com.example'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = '11'

configurations {
    compileOnly {
        extendsFrom annotationProcessor
    }
}

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation 'org.springframework.cloud:spring-cloud-starter-netflix-eureka-client'
    compileOnly 'org.projectlombok:lombok'
    annotationProcessor 'org.projectlombok:lombok'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
}

dependencyManagement {
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:Hoxton.RELEASE"
    }
}

tasks.named('test') {
    useJUnitPlatform()
}
- [Unresolved] Spring Cloud로 개발하는 마이크로서비스 애플리케이션(MSA)
Error when using a Kafka sink connector
Hello. An error occurred while following the Kafka lectures, so I'm asking here. org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:206)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:132)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:501)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:478)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:328)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:232)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.base/java.lang.Thread.run(Thread.java:833)\nCaused by: org.apache.kafka.connect.errors.DataException: Converting byte[] to Kafka Connect data failed due to serialization error: \n\tat org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:366)\n\tat org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:87)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertValue(WorkerSinkTask.java:545)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:501)\n\tat 
org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)\n\t... 13 more\nCaused by: org.apache.kafka.common.errors.SerializationException: com.fasterxml.jackson.core.JsonParseException: Unexpected character (':' (code 58)): was expecting comma to separate Object entries\n at [Source: (byte[])\"{\"schema\":{\"type\":\"struct\",\"fields\":[{\"type\":\"int32\",\"optional\":false,\"field\":\"id\"},{\"type\":\"string\",\"optional\":true,\"field\":\"user_id\"},{\"type\":\"string\",\"optional\":true,\"field\":\"pwd\"},{\"type\":\"string\",\"optional\":true,\"field\":\"name\"},{\"type\":\"int64\",\"optional\":true,\"name\":\"org.apache.kafka.connect.data.Timestamp\",\"version\":1,\"field\":\"created_at\"}],\"optional\":false,\"name\":\"users\"},\"payload\":{\"id”:4,”user_id\":\"user4”,”pwd\":\"1234\",\"name\":\"username4”,”created_at\":1671277849000}}\"; line: 1, column: 433]\nCaused by: com.fasterxml.jackson.core.JsonParseException: Unexpected character (':' (code 58)): was expecting comma to separate Object entries\n at [Source: (byte[])\"{\"schema\":{\"type\":\"struct\",\"fields\":[{\"type\":\"int32\",\"optional\":false,\"field\":\"id\"},{\"type\":\"string\",\"optional\":true,\"field\":\"user_id\"},{\"type\":\"string\",\"optional\":true,\"field\":\"pwd\"},{\"type\":\"string\",\"optional\":true,\"field\":\"name\"},{\"type\":\"int64\",\"optional\":true,\"name\":\"org.apache.kafka.connect.data.Timestamp\",\"version\":1,\"field\":\"created_at\"}],\"optional\":false,\"name\":\"users\"},\"payload\":{\"id”:4,”user_id\":\"user4”,”pwd\":\"1234\",\"name\":\"username4”,”created_at\":1671277849000}}\"; line: 1, column: 433]\n\tat com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1840)\n\tat 
com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:712)\n\tat com.fasterxml.jackson.core.base.ParserMinimalBase._reportUnexpectedChar(ParserMinimalBase.java:637)\n\tat com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextFieldName(UTF8StreamJsonParser.java:1011)\n\tat com.fasterxml.jackson.databind.deser.std.BaseNodeDeserializer.deserializeObject(JsonNodeDeserializer.java:250)\n\tat com.fasterxml.jackson.databind.deser.std.BaseNodeDeserializer.deserializeObject(JsonNodeDeserializer.java:258)\n\tat com.fasterxml.jackson.databind.deser.std.JsonNodeDeserializer.deserialize(JsonNodeDeserializer.java:68)\n\tat com.fasterxml.jackson.databind.deser.std.JsonNodeDeserializer.deserialize(JsonNodeDeserializer.java:15)\n\tat com.fasterxml.jackson.databind.ObjectMapper._readTreeAndClose(ObjectMapper.java:4270)\n\tat com.fasterxml.jackson.databind.ObjectMapper.readTree(ObjectMapper.java:2734)\n\tat org.apache.kafka.connect.json.JsonDeserializer.deserialize(JsonDeserializer.java:64)\n\tat org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:364)\n\tat org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:87)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertValue(WorkerSinkTask.java:545)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:501)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:132)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:501)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:478)\n\tat 
org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:328)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:232)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.base/java.lang.Thread.run(Thread.java:833)\n That is the error. It occurred while I was producing messages directly from the terminal; it looks like the payload data was sent malformed, which caused this error. When I went back to inserting into the original users table instead, the previously failed task was still there, so records were still not inserted into the my_topic_users table. Is there no way to delete a failed task, or to skip past it?
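Looking at the payload quoted in the Caused by line (e.g. \"user4”,”pwd\"), the JSON contains typographic ("curly") quotes, which commonly happens when a message is pasted from a word processor into the console producer; that alone makes the record unparseable. A minimal Python illustration of the same JSON failure (not the connector's actual code path):

```python
import json

# A trimmed-down version of the logged payload: curly quotes sneak in
# around some keys, merging keys and values into odd strings until the
# parser meets a ':' where it expects ',' -- the same
# "was expecting comma to separate Object entries" failure as in the log.
bad = '{"id”:4,”user_id":"user4”,”pwd":"1234"}'   # curly quotes inside
good = '{"id":4,"user_id":"user4","pwd":"1234"}'  # straight quotes only

try:
    json.loads(bad)
except json.JSONDecodeError as exc:
    print("parse failed:", exc.msg)

print(json.loads(good)["user_id"])  # parses cleanly
```

If this is the cause here too, re-producing the record with straight quotes would be the first step; for the stuck task, Kafka Connect's error-handling settings (errors.tolerance=all, optionally with a dead-letter queue) are the documented way to let a sink task skip unparseable records instead of failing permanently.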
- [Unresolved] Spring Cloud로 개발하는 마이크로서비스 애플리케이션(MSA)
Questions about authentication
Hello, and thank you for the great lecture. First question: I want to add a refresh-token feature to the JWT login authentication. The flow I have in mind: when the token in the AUTHORIZATION header has expired, the Gateway filter sends a token-renewal request to user-service and automatically returns the renewed access token and refresh token to the client. Is this the right flow for applying refresh tokens? Second question: when the access token expires and the Gateway filter requests a renewal from user-service, Spring Cloud Gateway being asynchronous I use WebClient, but if I call .block() to obtain the returned token value I get an error saying blocking calls are not allowed on that thread. In that case, how should the refresh-token renewal request to the auth server be made? Thank you.
- [Unresolved] Spring Cloud로 개발하는 마이크로서비스 애플리케이션(MSA)
A question about the application architecture
Hello. I started this course because I wanted to understand the MSA my company uses. In an earlier lecture I believe the instructor said something like "Eureka handles load balancing, and the API Gateway handles routing," but in this lecture the API Gateway is labeled "load balancing / service routing." As far as I know, load distribution is just another name for load balancing — have I been mistaken? Thank you.
- [Unresolved] Spring Cloud로 개발하는 마이크로서비스 애플리케이션(MSA)
Where can I download the lecture PPT materials?
Hello. I looked for the lecture PPT materials but could not find them. Are they currently unavailable for download?
- [Unresolved] Spring Cloud로 개발하는 마이크로서비스 애플리케이션(MSA)
Modules and deployment
Hello. These questions came up while going through the early part of the course. 1) In the lectures you created several separate projects — is it also fine to use IntelliJ IDEA's feature for adding multiple modules inside a single project? 2) Assuming deployment to AWS, does each service get its own EC2 instance? 3) Even accounting for resource usage, it seems the cost would grow — how does this work out in practice?
- [Unresolved] Spring Cloud로 개발하는 마이크로서비스 애플리케이션(MSA)
Spring Cloud Stream
Hello, I'm enjoying the course! Thanks to it I've adapted to Spring Cloud with ease :) My question takes a slightly different approach from the lectures, but I'm asking in case you can help — Googling didn't give me a clear answer. You handled microservice synchronization with Apache Kafka and a Kafka sink connector; could all of that be replaced with Spring Cloud Stream? Inter-service communication was fairly easy to set up with the functional programming model and StreamBridge supported since Spring Cloud Stream 2.0/3.0, but can the Kafka sink connector's role — synchronizing writes to a single DB — also be replaced by Spring Cloud Stream or another Spring Cloud-based service? Thank you.
- [Unresolved] Spring Cloud로 개발하는 마이크로서비스 애플리케이션(MSA)
netty-resolver-dns-native-macos error
Hello! I keep hitting the same error while coding and am stuck on how to fix it. The error: 2022-12-05 15:01:05.478 ERROR 1563 --- [ctor-http-nio-3] i.n.r.d.DnsServerAddressStreamProviders : Unable to load io.netty.resolver.dns.macos.MacOSDnsServerAddressStreamProvider, fallback to system defaults. This may result in incorrect DNS resolutions on MacOS. Check whether you have a dependency on 'io.netty:netty-resolver-dns-native-macos'. Use DEBUG level to see the full stack: java.lang.UnsatisfiedLinkError: failed to load the required native library. Following https://github.com/netty/netty/issues/11020 I added that dependency to pom.xml, but the same error keeps occurring. Is there anything else I can try? Laptop: MacBook Air (M1), netty: 4.1.85.Final, spring-boot: 2.7.6
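Not from the lecture, but a commonly reported resolution on Apple Silicon machines is that this dependency needs the osx-aarch_64 classifier — without it, Maven pulls the x86_64 native library and the UnsatisfiedLinkError persists. A sketch of the pom.xml fragment (version/scope may need adjusting to the project):

```xml
<!-- Sketch: Apple Silicon (M1/M2) needs the aarch_64 native classifier -->
<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-resolver-dns-native-macos</artifactId>
    <classifier>osx-aarch_64</classifier>
    <scope>runtime</scope>
</dependency>
```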
- [Unresolved] Spring Cloud로 개발하는 마이크로서비스 애플리케이션(MSA)
When a request hits the API configured as fallbackUri, can the response to the client be customized?
When a request is routed to the API configured as the fallbackUri, can the response sent back to the client be customized? Would delivering it with a redirect code be possible?
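For context, a fallback URI is normally attached to a route through the gateway's CircuitBreaker filter, and a forward: fallback lands on an ordinary controller inside the gateway application, so that handler is free to build any custom body or status code. A hypothetical sketch (route id and names assumed, not taken from the course config):

```yaml
# Hypothetical sketch: circuit-breaker fallback on a gateway route.
spring:
  cloud:
    gateway:
      routes:
        - id: user-service
          uri: lb://USER-SERVICE
          predicates:
            - Path=/user-service/**
          filters:
            - name: CircuitBreaker
              args:
                name: userServiceCB
                # forward: keeps the request inside the gateway app,
                # where a /fallback handler can return a custom response
                fallbackUri: forward:/fallback/user-service
```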
- [Unresolved] Spring Cloud로 개발하는 마이크로서비스 애플리케이션(MSA)
Confluent problem (using MySQL)
localhost:8083/connectors를 쳤을 경우에 발생합니다..\bin\windows\connect-distributed.bat .\etc\kafka\connect-distributed.properties[2022-12-03 17:23:18,033] INFO WorkerInfo values: jvm.args = -Xmx256M, -XX:+UseG1GC, -XX:MaxGCPauseMillis=20, -XX:InitiatingHeapOccupancyPercent=35, -XX:+ExplicitGCInvokesConcurrent, -Djava.awt.headless=true, -Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.ssl=false, -Dkafka.logs.dir=C:\confluent-7.2.2/logs, -Dlog4j.configuration=file:C:\confluent-7.2.2/etc/kafka/connect-log4j.properties jvm.spec = Oracle Corporation, OpenJDK 64-Bit Server VM, 18.0.1.1, 18.0.1.1+2-6 jvm.classpath = C:\confluent-7.2.2\share\java\kafka\activation-1.1.1.jar;C:\confluent-7.2.2\share\java\kafka\aopalliance-repackaged-2.6.1.jar;C:\confluent-7.2.2\share\java\kafka\argparse4j-0.7.0.jar;C:\confluent-7.2.2\share\java\kafka\audience-annotations-0.5.0.jar;C:\confluent-7.2.2\share\java\kafka\commons-cli-1.4.jar;C:\confluent-7.2.2\share\java\kafka\commons-lang3-3.8.1.jar;C:\confluent-7.2.2\share\java\kafka\connect-api-7.2.2-ccs.jar;C:\confluent-7.2.2\share\java\kafka\connect-basic-auth-extension-7.2.2-ccs.jar;C:\confluent-7.2.2\share\java\kafka\connect-json-7.2.2-ccs.jar;C:\confluent-7.2.2\share\java\kafka\connect-mirror-7.2.2-ccs.jar;C:\confluent-7.2.2\share\java\kafka\connect-mirror-client-7.2.2-ccs.jar;C:\confluent-7.2.2\share\java\kafka\connect-runtime-7.2.2-ccs.jar;C:\confluent-7.2.2\share\java\kafka\connect-transforms-7.2.2-ccs.jar;C:\confluent-7.2.2\share\java\kafka\hk2-api-2.6.1.jar;C:\confluent-7.2.2\share\java\kafka\hk2-locator-2.6.1.jar;C:\confluent-7.2.2\share\java\kafka\hk2-utils-2.6.1.jar;C:\confluent-7.2.2\share\java\kafka\jackson-annotations-2.13.3.jar;C:\confluent-7.2.2\share\java\kafka\jackson-core-2.13.3.jar;C:\confluent-7.2.2\share\java\kafka\jackson-databind-2.13.3.jar;C:\confluent-7.2.2\share\java\kafka\jackson-dataformat-csv-2.13.3.jar;C:\confluent-7.2.2\share\java\kafka\jackson-datatype-jdk
8-2.13.3.jar;C:\confluent-7.2.2\share\java\kafka\jackson-jaxrs-base-2.13.3.jar;C:\confluent-7.2.2\share\java\kafka\jackson-jaxrs-json-provider-2.13.3.jar;C:\confluent-7.2.2\share\java\kafka\jackson-module-jaxb-annotations-2.13.3.jar;C:\confluent-7.2.2\share\java\kafka\jackson-module-scala_2.13-2.13.3.jar;C:\confluent-7.2.2\share\java\kafka\jakarta.activation-api-1.2.2.jar;C:\confluent-7.2.2\share\java\kafka\jakarta.annotation-api-1.3.5.jar;C:\confluent-7.2.2\share\java\kafka\jakarta.inject-2.6.1.jar;C:\confluent-7.2.2\share\java\kafka\jakarta.validation-api-2.0.2.jar;C:\confluent-7.2.2\share\java\kafka\jakarta.ws.rs-api-2.1.6.jar;C:\confluent-7.2.2\share\java\kafka\jakarta.xml.bind-api-2.3.3.jar;C:\confluent-7.2.2\share\java\kafka\javassist-3.27.0-GA.jar;C:\confluent-7.2.2\share\java\kafka\javax.servlet-api-3.1.0.jar;C:\confluent-7.2.2\share\java\kafka\javax.ws.rs-api-2.1.1.jar;C:\confluent-7.2.2\share\java\kafka\jaxb-api-2.3.0.jar;C:\confluent-7.2.2\share\java\kafka\jersey-client-2.34.jar;C:\confluent-7.2.2\share\java\kafka\jersey-common-2.34.jar;C:\confluent-7.2.2\share\java\kafka\jersey-container-servlet-2.34.jar;C:\confluent-7.2.2\share\java\kafka\jersey-container-servlet-core-2.34.jar;C:\confluent-7.2.2\share\java\kafka\jersey-hk2-2.34.jar;C:\confluent-7.2.2\share\java\kafka\jersey-server-2.34.jar;C:\confluent-7.2.2\share\java\kafka\jetty-client-9.4.48.v20220622.jar;C:\confluent-7.2.2\share\java\kafka\jetty-continuation-9.4.48.v20220622.jar;C:\confluent-7.2.2\share\java\kafka\jetty-http-9.4.48.v20220622.jar;C:\confluent-7.2.2\share\java\kafka\jetty-io-9.4.48.v20220622.jar;C:\confluent-7.2.2\share\java\kafka\jetty-security-9.4.48.v20220622.jar;C:\confluent-7.2.2\share\java\kafka\jetty-server-9.4.48.v20220622.jar;C:\confluent-7.2.2\share\java\kafka\jetty-servlet-9.4.48.v20220622.jar;C:\confluent-7.2.2\share\java\kafka\jetty-servlets-9.4.48.v20220622.jar;C:\confluent-7.2.2\share\java\kafka\jetty-util-9.4.48.v20220622.jar;C:\confluent-7.2.2\share\java\kafka\jetty-u
til-ajax-9.4.48.v20220622.jar;C:\confluent-7.2.2\share\java\kafka\jline-3.21.0.jar;C:\confluent-7.2.2\share\java\kafka\jopt-simple-5.0.4.jar;C:\confluent-7.2.2\share\java\kafka\jose4j-0.7.9.jar;C:\confluent-7.2.2\share\java\kafka\kafka-clients-7.2.2-ccs.jar;C:\confluent-7.2.2\share\java\kafka\kafka-log4j-appender-7.2.2-ccs.jar;C:\confluent-7.2.2\share\java\kafka\kafka-metadata-7.2.2-ccs.jar;C:\confluent-7.2.2\share\java\kafka\kafka-raft-7.2.2-ccs.jar;C:\confluent-7.2.2\share\java\kafka\kafka-server-common-7.2.2-ccs.jar;C:\confluent-7.2.2\share\java\kafka\kafka-shell-7.2.2-ccs.jar;C:\confluent-7.2.2\share\java\kafka\kafka-storage-7.2.2-ccs.jar;C:\confluent-7.2.2\share\java\kafka\kafka-storage-api-7.2.2-ccs.jar;C:\confluent-7.2.2\share\java\kafka\kafka-streams-7.2.2-ccs.jar;C:\confluent-7.2.2\share\java\kafka\kafka-streams-examples-7.2.2-ccs.jar;C:\confluent-7.2.2\share\java\kafka\kafka-streams-scala_2.13-7.2.2-ccs.jar;C:\confluent-7.2.2\share\java\kafka\kafka-streams-test-utils-7.2.2-ccs.jar;C:\confluent-7.2.2\share\java\kafka\kafka-tools-7.2.2-ccs.jar;C:\confluent-7.2.2\share\java\kafka\kafka.jar;C:\confluent-7.2.2\share\java\kafka\kafka_2.13-7.2.2-ccs.jar;C:\confluent-7.2.2\share\java\kafka\lz4-java-1.8.0.jar;C:\confluent-7.2.2\share\java\kafka\maven-artifact-3.8.4.jar;C:\confluent-7.2.2\share\java\kafka\metrics-core-2.2.0.jar;C:\confluent-7.2.2\share\java\kafka\metrics-core-4.1.12.1.jar;C:\confluent-7.2.2\share\java\kafka\mysql-connector-java-8.0.28.jar;C:\confluent-7.2.2\share\java\kafka\netty-buffer-4.1.79.Final.jar;C:\confluent-7.2.2\share\java\kafka\netty-codec-4.1.79.Final.jar;C:\confluent-7.2.2\share\java\kafka\netty-common-4.1.79.Final.jar;C:\confluent-7.2.2\share\java\kafka\netty-handler-4.1.79.Final.jar;C:\confluent-7.2.2\share\java\kafka\netty-resolver-4.1.79.Final.jar;C:\confluent-7.2.2\share\java\kafka\netty-transport-4.1.79.Final.jar;C:\confluent-7.2.2\share\java\kafka\netty-transport-classes-epoll-4.1.79.Final.jar;C:\confluent-7.2.2\share\java\kafka\
netty-transport-native-epoll-4.1.79.Final.jar;C:\confluent-7.2.2\share\java\kafka\netty-transport-native-unix-common-4.1.79.Final.jar;C:\confluent-7.2.2\share\java\kafka\osgi-resource-locator-1.0.3.jar;C:\confluent-7.2.2\share\java\kafka\paranamer-2.8.jar;C:\confluent-7.2.2\share\java\kafka\plexus-utils-3.3.0.jar;C:\confluent-7.2.2\share\java\kafka\reflections-0.9.12.jar;C:\confluent-7.2.2\share\java\kafka\reload4j-1.2.19.jar;C:\confluent-7.2.2\share\java\kafka\rocksdbjni-6.29.4.1.jar;C:\confluent-7.2.2\share\java\kafka\scala-collection-compat_2.13-2.6.0.jar;C:\confluent-7.2.2\share\java\kafka\scala-java8-compat_2.13-1.0.2.jar;C:\confluent-7.2.2\share\java\kafka\scala-library-2.13.8.jar;C:\confluent-7.2.2\share\java\kafka\scala-logging_2.13-3.9.4.jar;C:\confluent-7.2.2\share\java\kafka\scala-reflect-2.13.8.jar;C:\confluent-7.2.2\share\java\kafka\slf4j-api-1.7.36.jar;C:\confluent-7.2.2\share\java\kafka\slf4j-reload4j-1.7.36.jar;C:\confluent-7.2.2\share\java\kafka\snappy-java-1.1.8.4.jar;C:\confluent-7.2.2\share\java\kafka\trogdor-7.2.2-ccs.jar;C:\confluent-7.2.2\share\java\kafka\zookeeper-3.6.3.jar;C:\confluent-7.2.2\share\java\kafka\zookeeper-jute-3.6.3.jar;C:\confluent-7.2.2\share\java\kafka\zstd-jni-1.5.2-1.jar os.spec = Windows 11, amd64, 10.0 os.vcpus = 8 (org.apache.kafka.connect.runtime.WorkerInfo:71)[2022-12-03 17:23:18,033] INFO Scanning for plugin classes. This might take a moment ... 
(org.apache.kafka.connect.cli.ConnectDistributed:92)
[2022-12-03 17:23:18,050] INFO Loading plugin from: C:\confluentinc-kafka-connect-jdbc-10.6.0\lib\checker-qual-3.5.0.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:277)
[2022-12-03 17:23:18,114] INFO Registered loader: PluginClassLoader{pluginLocation=file:/C:/confluentinc-kafka-connect-jdbc-10.6.0/lib/checker-qual-3.5.0.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:299)
[2022-12-03 17:23:18,114] INFO Added plugin 'org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:18,114] INFO Added plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:18,114] INFO Added plugin 'org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:18,130] INFO Loading plugin from: C:\confluentinc-kafka-connect-jdbc-10.6.0\lib\common-utils-6.0.0.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:277)
[2022-12-03 17:23:18,210] INFO Registered loader: PluginClassLoader{pluginLocation=file:/C:/confluentinc-kafka-connect-jdbc-10.6.0/lib/common-utils-6.0.0.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:299)
[2022-12-03 17:23:18,210] INFO Loading plugin from: C:\confluentinc-kafka-connect-jdbc-10.6.0\lib\jtds-1.3.1.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:277)
[2022-12-03 17:23:18,242] INFO Registered loader: PluginClassLoader{pluginLocation=file:/C:/confluentinc-kafka-connect-jdbc-10.6.0/lib/jtds-1.3.1.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:299)
[2022-12-03 17:23:18,242] INFO Loading plugin from: C:\confluentinc-kafka-connect-jdbc-10.6.0\lib\kafka-connect-jdbc-10.6.0.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:277)
[2022-12-03 17:23:18,274] INFO Registered loader: PluginClassLoader{pluginLocation=file:/C:/confluentinc-kafka-connect-jdbc-10.6.0/lib/kafka-connect-jdbc-10.6.0.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:299)
[2022-12-03 17:23:18,274] INFO Added plugin 'io.confluent.connect.jdbc.JdbcSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:18,274] INFO Added plugin 'io.confluent.connect.jdbc.JdbcSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:18,274] INFO Loading plugin from: C:\confluentinc-kafka-connect-jdbc-10.6.0\lib\mssql-jdbc-8.4.1.jre8.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:277)
[2022-12-03 17:23:18,420] INFO Registered loader: PluginClassLoader{pluginLocation=file:/C:/confluentinc-kafka-connect-jdbc-10.6.0/lib/mssql-jdbc-8.4.1.jre8.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:299)
[2022-12-03 17:23:18,440] INFO Loading plugin from: C:\confluentinc-kafka-connect-jdbc-10.6.0\lib\mysql-connector-java-8.0.28.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:277)
[2022-12-03 17:23:18,547] INFO Registered loader: PluginClassLoader{pluginLocation=file:/C:/confluentinc-kafka-connect-jdbc-10.6.0/lib/mysql-connector-java-8.0.28.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:299)
[2022-12-03 17:23:18,547] INFO Loading plugin from: C:\confluentinc-kafka-connect-jdbc-10.6.0\lib\ojdbc8-19.7.0.0.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:277)
[2022-12-03 17:23:18,721] INFO Registered loader: PluginClassLoader{pluginLocation=file:/C:/confluentinc-kafka-connect-jdbc-10.6.0/lib/ojdbc8-19.7.0.0.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:299)
[2022-12-03 17:23:18,738] INFO Loading plugin from: C:\confluentinc-kafka-connect-jdbc-10.6.0\lib\ons-19.7.0.0.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:277)
[2022-12-03 17:23:18,754] INFO Registered loader: PluginClassLoader{pluginLocation=file:/C:/confluentinc-kafka-connect-jdbc-10.6.0/lib/ons-19.7.0.0.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:299)
[2022-12-03 17:23:18,754] INFO Loading plugin from: C:\confluentinc-kafka-connect-jdbc-10.6.0\lib\oraclepki-19.7.0.0.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:277)
[2022-12-03 17:23:18,786] INFO Registered loader: PluginClassLoader{pluginLocation=file:/C:/confluentinc-kafka-connect-jdbc-10.6.0/lib/oraclepki-19.7.0.0.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:299)
[2022-12-03 17:23:18,786] INFO Loading plugin from: C:\confluentinc-kafka-connect-jdbc-10.6.0\lib\orai18n-19.7.0.0.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:277)
[2022-12-03 17:23:18,802] INFO Registered loader: PluginClassLoader{pluginLocation=file:/C:/confluentinc-kafka-connect-jdbc-10.6.0/lib/orai18n-19.7.0.0.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:299)
[2022-12-03 17:23:18,802] INFO Loading plugin from: C:\confluentinc-kafka-connect-jdbc-10.6.0\lib\osdt_cert-19.7.0.0.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:277)
[2022-12-03 17:23:18,837] INFO Registered loader: PluginClassLoader{pluginLocation=file:/C:/confluentinc-kafka-connect-jdbc-10.6.0/lib/osdt_cert-19.7.0.0.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:299)
[2022-12-03 17:23:18,837] INFO Loading plugin from: C:\confluentinc-kafka-connect-jdbc-10.6.0\lib\osdt_core-19.7.0.0.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:277)
[2022-12-03 17:23:18,869] INFO Registered loader: PluginClassLoader{pluginLocation=file:/C:/confluentinc-kafka-connect-jdbc-10.6.0/lib/osdt_core-19.7.0.0.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:299)
[2022-12-03 17:23:18,869] INFO Loading plugin from: C:\confluentinc-kafka-connect-jdbc-10.6.0\lib\postgresql-42.3.3.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:277)
[2022-12-03 17:23:18,901] INFO Registered loader: PluginClassLoader{pluginLocation=file:/C:/confluentinc-kafka-connect-jdbc-10.6.0/lib/postgresql-42.3.3.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:299)
[2022-12-03 17:23:18,917] INFO Loading plugin from: C:\confluentinc-kafka-connect-jdbc-10.6.0\lib\simplefan-19.7.0.0.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:277)
[2022-12-03 17:23:19,060] INFO Registered loader: PluginClassLoader{pluginLocation=file:/C:/confluentinc-kafka-connect-jdbc-10.6.0/lib/simplefan-19.7.0.0.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:299)
[2022-12-03 17:23:19,060] INFO Loading plugin from: C:\confluentinc-kafka-connect-jdbc-10.6.0\lib\slf4j-api-1.7.36.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:277)
[2022-12-03 17:23:19,186] INFO Registered loader: PluginClassLoader{pluginLocation=file:/C:/confluentinc-kafka-connect-jdbc-10.6.0/lib/slf4j-api-1.7.36.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:299)
[2022-12-03 17:23:19,186] INFO Loading plugin from: C:\confluentinc-kafka-connect-jdbc-10.6.0\lib\sqlite-jdbc-3.25.2.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:277)
[2022-12-03 17:23:19,203] INFO Registered loader: PluginClassLoader{pluginLocation=file:/C:/confluentinc-kafka-connect-jdbc-10.6.0/lib/sqlite-jdbc-3.25.2.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:299)
[2022-12-03 17:23:19,203] INFO Loading plugin from: C:\confluentinc-kafka-connect-jdbc-10.6.0\lib\ucp-19.7.0.0.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:277)
[2022-12-03 17:23:19,270] INFO Registered loader: PluginClassLoader{pluginLocation=file:/C:/confluentinc-kafka-connect-jdbc-10.6.0/lib/ucp-19.7.0.0.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:299)
[2022-12-03 17:23:19,270] INFO Loading plugin from: C:\confluentinc-kafka-connect-jdbc-10.6.0\lib\xdb-19.7.0.0.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:277)
[2022-12-03 17:23:19,303] INFO Registered loader: PluginClassLoader{pluginLocation=file:/C:/confluentinc-kafka-connect-jdbc-10.6.0/lib/xdb-19.7.0.0.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:299)
[2022-12-03 17:23:19,303] INFO Loading plugin from: C:\confluentinc-kafka-connect-jdbc-10.6.0\lib\xmlparserv2-19.7.0.0.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:277)
[2022-12-03 17:23:19,390] INFO Registered loader: PluginClassLoader{pluginLocation=file:/C:/confluentinc-kafka-connect-jdbc-10.6.0/lib/xmlparserv2-19.7.0.0.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:299)
[2022-12-03 17:23:20,049] INFO Registered loader: jdk.internal.loader.ClassLoaders$AppClassLoader@27c170f0 (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:299)
[2022-12-03 17:23:20,049] INFO Added plugin 'org.apache.kafka.connect.tools.VerifiableSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,049] INFO Added plugin 'org.apache.kafka.connect.tools.MockSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,049] INFO Added plugin 'org.apache.kafka.connect.tools.VerifiableSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,049] INFO Added plugin 'org.apache.kafka.connect.mirror.MirrorSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,049] INFO Added plugin 'org.apache.kafka.connect.mirror.MirrorHeartbeatConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,049] INFO Added plugin 'org.apache.kafka.connect.tools.MockSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,049] INFO Added plugin 'org.apache.kafka.connect.tools.SchemaSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,049] INFO Added plugin 'org.apache.kafka.connect.mirror.MirrorCheckpointConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,049] INFO Added plugin 'org.apache.kafka.connect.converters.FloatConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,049] INFO Added plugin 'org.apache.kafka.connect.converters.DoubleConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,049] INFO Added plugin 'org.apache.kafka.connect.converters.ByteArrayConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,049] INFO Added plugin 'org.apache.kafka.connect.converters.LongConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,049] INFO Added plugin 'org.apache.kafka.connect.converters.IntegerConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,049] INFO Added plugin 'org.apache.kafka.connect.json.JsonConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,049] INFO Added plugin 'org.apache.kafka.connect.storage.StringConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,049] INFO Added plugin 'org.apache.kafka.connect.converters.ShortConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,049] INFO Added plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,049] INFO Added plugin 'org.apache.kafka.connect.transforms.ReplaceField$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,064] INFO Added plugin 'org.apache.kafka.connect.transforms.SetSchemaMetadata$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,064] INFO Added plugin 'org.apache.kafka.connect.transforms.ReplaceField$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,065] INFO Added plugin 'org.apache.kafka.connect.transforms.Filter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,065] INFO Added plugin 'org.apache.kafka.connect.transforms.HeaderFrom$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,065] INFO Added plugin 'org.apache.kafka.connect.transforms.InsertField$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,065] INFO Added plugin 'org.apache.kafka.connect.transforms.TimestampConverter$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,065] INFO Added plugin 'org.apache.kafka.connect.transforms.MaskField$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,065] INFO Added plugin 'org.apache.kafka.connect.transforms.TimestampRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,065] INFO Added plugin 'org.apache.kafka.connect.transforms.RegexRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,065] INFO Added plugin 'org.apache.kafka.connect.transforms.HoistField$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,065] INFO Added plugin 'org.apache.kafka.connect.transforms.ValueToKey' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,065] INFO Added plugin 'org.apache.kafka.connect.transforms.MaskField$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,065] INFO Added plugin 'org.apache.kafka.connect.transforms.DropHeaders' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,065] INFO Added plugin 'org.apache.kafka.connect.transforms.Cast$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,065] INFO Added plugin 'org.apache.kafka.connect.transforms.Cast$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,065] INFO Added plugin 'org.apache.kafka.connect.runtime.PredicatedTransformation' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,065] INFO Added plugin 'org.apache.kafka.connect.transforms.ExtractField$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,065] INFO Added plugin 'org.apache.kafka.connect.transforms.Flatten$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,065] INFO Added plugin 'org.apache.kafka.connect.transforms.InsertHeader' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,069] INFO Added plugin 'org.apache.kafka.connect.transforms.InsertField$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,069] INFO Added plugin 'org.apache.kafka.connect.transforms.Flatten$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,069] INFO Added plugin 'org.apache.kafka.connect.transforms.HeaderFrom$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,069] INFO Added plugin 'org.apache.kafka.connect.transforms.SetSchemaMetadata$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,069] INFO Added plugin 'org.apache.kafka.connect.transforms.ExtractField$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,069] INFO Added plugin 'org.apache.kafka.connect.transforms.TimestampConverter$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,069] INFO Added plugin 'org.apache.kafka.connect.transforms.HoistField$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,069] INFO Added plugin 'org.apache.kafka.connect.transforms.predicates.HasHeaderKey' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,069] INFO Added plugin 'org.apache.kafka.connect.transforms.predicates.RecordIsTombstone' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,069] INFO Added plugin 'org.apache.kafka.connect.transforms.predicates.TopicNameMatches' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,069] INFO Added plugin 'org.apache.kafka.common.config.provider.FileConfigProvider' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,069] INFO Added plugin 'org.apache.kafka.common.config.provider.DirectoryConfigProvider' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,069] INFO Added plugin 'org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:230)
[2022-12-03 17:23:20,069] INFO Added aliases 'JdbcSinkConnector' and 'JdbcSink' to plugin 'io.confluent.connect.jdbc.JdbcSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'JdbcSourceConnector' and 'JdbcSource' to plugin 'io.confluent.connect.jdbc.JdbcSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'MirrorCheckpointConnector' and 'MirrorCheckpoint' to plugin 'org.apache.kafka.connect.mirror.MirrorCheckpointConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'MirrorHeartbeatConnector' and 'MirrorHeartbeat' to plugin 'org.apache.kafka.connect.mirror.MirrorHeartbeatConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'MirrorSourceConnector' and 'MirrorSource' to plugin 'org.apache.kafka.connect.mirror.MirrorSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'MockSinkConnector' and 'MockSink' to plugin 'org.apache.kafka.connect.tools.MockSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'MockSourceConnector' and 'MockSource' to plugin 'org.apache.kafka.connect.tools.MockSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'SchemaSourceConnector' and 'SchemaSource' to plugin 'org.apache.kafka.connect.tools.SchemaSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'VerifiableSinkConnector' and 'VerifiableSink' to plugin 'org.apache.kafka.connect.tools.VerifiableSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'VerifiableSourceConnector' and 'VerifiableSource' to plugin 'org.apache.kafka.connect.tools.VerifiableSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'ByteArrayConverter' and 'ByteArray' to plugin 'org.apache.kafka.connect.converters.ByteArrayConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'DoubleConverter' and 'Double' to plugin 'org.apache.kafka.connect.converters.DoubleConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'FloatConverter' and 'Float' to plugin 'org.apache.kafka.connect.converters.FloatConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'IntegerConverter' and 'Integer' to plugin 'org.apache.kafka.connect.converters.IntegerConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'LongConverter' and 'Long' to plugin 'org.apache.kafka.connect.converters.LongConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'ShortConverter' and 'Short' to plugin 'org.apache.kafka.connect.converters.ShortConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'JsonConverter' and 'Json' to plugin 'org.apache.kafka.connect.json.JsonConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'StringConverter' and 'String' to plugin 'org.apache.kafka.connect.storage.StringConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'ByteArrayConverter' and 'ByteArray' to plugin 'org.apache.kafka.connect.converters.ByteArrayConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'DoubleConverter' and 'Double' to plugin 'org.apache.kafka.connect.converters.DoubleConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'FloatConverter' and 'Float' to plugin 'org.apache.kafka.connect.converters.FloatConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'IntegerConverter' and 'Integer' to plugin 'org.apache.kafka.connect.converters.IntegerConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'LongConverter' and 'Long' to plugin 'org.apache.kafka.connect.converters.LongConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'ShortConverter' and 'Short' to plugin 'org.apache.kafka.connect.converters.ShortConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'JsonConverter' and 'Json' to plugin 'org.apache.kafka.connect.json.JsonConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'SimpleHeaderConverter' and 'Simple' to plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'StringConverter' and 'String' to plugin 'org.apache.kafka.connect.storage.StringConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'PredicatedTransformation' and 'Predicated' to plugin 'org.apache.kafka.connect.runtime.PredicatedTransformation' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added alias 'DropHeaders' to plugin 'org.apache.kafka.connect.transforms.DropHeaders' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:473)
[2022-12-03 17:23:20,069] INFO Added alias 'Filter' to plugin 'org.apache.kafka.connect.transforms.Filter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:473)
[2022-12-03 17:23:20,069] INFO Added alias 'InsertHeader' to plugin 'org.apache.kafka.connect.transforms.InsertHeader' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:473)
[2022-12-03 17:23:20,069] INFO Added alias 'RegexRouter' to plugin 'org.apache.kafka.connect.transforms.RegexRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:473)
[2022-12-03 17:23:20,069] INFO Added alias 'TimestampRouter' to plugin 'org.apache.kafka.connect.transforms.TimestampRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:473)
[2022-12-03 17:23:20,069] INFO Added alias 'ValueToKey' to plugin 'org.apache.kafka.connect.transforms.ValueToKey' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:473)
[2022-12-03 17:23:20,069] INFO Added alias 'HasHeaderKey' to plugin 'org.apache.kafka.connect.transforms.predicates.HasHeaderKey' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:473)
[2022-12-03 17:23:20,069] INFO Added alias 'RecordIsTombstone' to plugin 'org.apache.kafka.connect.transforms.predicates.RecordIsTombstone' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:473)
[2022-12-03 17:23:20,069] INFO Added alias 'TopicNameMatches' to plugin 'org.apache.kafka.connect.transforms.predicates.TopicNameMatches' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:473)
[2022-12-03 17:23:20,069] INFO Added alias 'BasicAuthSecurityRestExtension' to plugin 'org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:473)
[2022-12-03 17:23:20,069] INFO Added aliases 'AllConnectorClientConfigOverridePolicy' and 'All' to plugin 'org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'NoneConnectorClientConfigOverridePolicy' and 'None' to plugin 'org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,069] INFO Added aliases 'PrincipalConnectorClientConfigOverridePolicy' and 'Principal' to plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:476)
[2022-12-03 17:23:20,103] INFO DistributedConfig values:
  access.control.allow.methods =
  access.control.allow.origin =
  admin.listeners = null
  bootstrap.servers = [localhost:9092]
  client.dns.lookup = use_all_dns_ips
  client.id =
  config.providers = []
  config.storage.replication.factor = 1
  config.storage.topic = connect-configs
  connect.protocol = sessioned
  connections.max.idle.ms = 540000
  connector.client.config.override.policy = All
  group.id = connect-cluster
  header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter
  heartbeat.interval.ms = 3000
  inter.worker.key.generation.algorithm = HmacSHA256
  inter.worker.key.size = null
  inter.worker.key.ttl.ms = 3600000
  inter.worker.signature.algorithm = HmacSHA256
  inter.worker.verification.algorithms = [HmacSHA256]
  key.converter = class org.apache.kafka.connect.json.JsonConverter
  listeners = [http://:8083]
  metadata.max.age.ms = 300000
  metric.reporters = []
  metrics.num.samples = 2
  metrics.recording.level = INFO
  metrics.sample.window.ms = 30000
  offset.flush.interval.ms = 10000
  offset.flush.timeout.ms = 5000
  offset.storage.partitions = 25
  offset.storage.replication.factor = 1
  offset.storage.topic = connect-offsets
  plugin.path = [C:\confluentinc-kafka-connect-jdbc-10.6.0\lib]
  rebalance.timeout.ms = 60000
  receive.buffer.bytes = 32768
  reconnect.backoff.max.ms = 1000
  reconnect.backoff.ms = 50
  request.timeout.ms = 40000
  response.http.headers.config =
  rest.advertised.host.name = null
  rest.advertised.listener = null
  rest.advertised.port = null
  rest.extension.classes = []
  retry.backoff.ms = 100
  sasl.client.callback.handler.class = null
  sasl.jaas.config = null
  sasl.kerberos.kinit.cmd = /usr/bin/kinit
  sasl.kerberos.min.time.before.relogin = 60000
  sasl.kerberos.service.name = null
  sasl.kerberos.ticket.renew.jitter = 0.05
  sasl.kerberos.ticket.renew.window.factor = 0.8
  sasl.login.callback.handler.class = null
  sasl.login.class = null
  sasl.login.connect.timeout.ms = null
  sasl.login.read.timeout.ms = null
  sasl.login.refresh.buffer.seconds = 300
  sasl.login.refresh.min.period.seconds = 60
  sasl.login.refresh.window.factor = 0.8
  sasl.login.refresh.window.jitter = 0.05
  sasl.login.retry.backoff.max.ms = 10000
  sasl.login.retry.backoff.ms = 100
  sasl.mechanism = GSSAPI
  sasl.oauthbearer.clock.skew.seconds = 30
  sasl.oauthbearer.expected.audience = null
  sasl.oauthbearer.expected.issuer = null
  sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
  sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
  sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
  sasl.oauthbearer.jwks.endpoint.url = null
  sasl.oauthbearer.scope.claim.name = scope
  sasl.oauthbearer.sub.claim.name = sub
  sasl.oauthbearer.token.endpoint.url = null
  scheduled.rebalance.max.delay.ms = 300000
  security.protocol = PLAINTEXT
  send.buffer.bytes = 131072
  session.timeout.ms = 10000
  socket.connection.setup.timeout.max.ms = 30000
  socket.connection.setup.timeout.ms = 10000
  ssl.cipher.suites = null
  ssl.client.auth = none
  ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
  ssl.endpoint.identification.algorithm = https
  ssl.engine.factory.class = null
  ssl.key.password = null
  ssl.keymanager.algorithm = SunX509
  ssl.keystore.certificate.chain = null
  ssl.keystore.key = null
  ssl.keystore.location = null
  ssl.keystore.password = null
  ssl.keystore.type = JKS
  ssl.protocol = TLSv1.3
  ssl.provider = null
  ssl.secure.random.implementation = null
  ssl.trustmanager.algorithm = PKIX
  ssl.truststore.certificates = null
  ssl.truststore.location = null
  ssl.truststore.password = null
  ssl.truststore.type = JKS
  status.storage.partitions = 5
  status.storage.replication.factor = 1
  status.storage.topic = connect-status
  task.shutdown.graceful.timeout.ms = 5000
  topic.creation.enable = true
  topic.tracking.allow.reset = true
  topic.tracking.enable = true
  value.converter = class org.apache.kafka.connect.json.JsonConverter
  worker.sync.timeout.ms = 3000
  worker.unsync.backoff.ms = 300000
 (org.apache.kafka.connect.runtime.distributed.DistributedConfig:376)
[2022-12-03 17:23:20,103] INFO Creating Kafka admin client (org.apache.kafka.connect.util.ConnectUtils:51)
[2022-12-03 17:23:20,119] INFO AdminClientConfig values:
  bootstrap.servers = [localhost:9092]
  client.dns.lookup = use_all_dns_ips
  client.id =
  connections.max.idle.ms = 300000
  default.api.timeout.ms = 60000
  metadata.max.age.ms = 300000
  metric.reporters = []
  metrics.num.samples = 2
  metrics.recording.level = INFO
  metrics.sample.window.ms = 30000
  receive.buffer.bytes = 65536
  reconnect.backoff.max.ms = 1000
  reconnect.backoff.ms = 50
  request.timeout.ms = 30000
  retries = 2147483647
  retry.backoff.ms = 100
  sasl.client.callback.handler.class = null
  sasl.jaas.config = null
  sasl.kerberos.kinit.cmd = /usr/bin/kinit
  sasl.kerberos.min.time.before.relogin = 60000
  sasl.kerberos.service.name = null
  sasl.kerberos.ticket.renew.jitter = 0.05
  sasl.kerberos.ticket.renew.window.factor = 0.8
  sasl.login.callback.handler.class = null
  sasl.login.class = null
  sasl.login.connect.timeout.ms = null
  sasl.login.read.timeout.ms = null
  sasl.login.refresh.buffer.seconds = 300
  sasl.login.refresh.min.period.seconds = 60
  sasl.login.refresh.window.factor = 0.8
  sasl.login.refresh.window.jitter = 0.05
  sasl.login.retry.backoff.max.ms = 10000
  sasl.login.retry.backoff.ms = 100
  sasl.mechanism = GSSAPI
  sasl.oauthbearer.clock.skew.seconds = 30
  sasl.oauthbearer.expected.audience = null
  sasl.oauthbearer.expected.issuer = null
  sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
  sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
  sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
  sasl.oauthbearer.jwks.endpoint.url = null
  sasl.oauthbearer.scope.claim.name = scope
  sasl.oauthbearer.sub.claim.name = sub
  sasl.oauthbearer.token.endpoint.url = null
  security.protocol = PLAINTEXT
  security.providers = null
  send.buffer.bytes = 131072
  socket.connection.setup.timeout.max.ms = 30000
  socket.connection.setup.timeout.ms = 10000
  ssl.cipher.suites = null
  ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
  ssl.endpoint.identification.algorithm = https
  ssl.engine.factory.class = null
  ssl.key.password = null
  ssl.keymanager.algorithm = SunX509
  ssl.keystore.certificate.chain = null
  ssl.keystore.key = null
  ssl.keystore.location = null
  ssl.keystore.password = null
  ssl.keystore.type = JKS
  ssl.protocol = TLSv1.3
  ssl.provider = null
  ssl.secure.random.implementation = null
  ssl.trustmanager.algorithm = PKIX
  ssl.truststore.certificates = null
  ssl.truststore.location = null
  ssl.truststore.password = null
  ssl.truststore.type = JKS
 (org.apache.kafka.clients.admin.AdminClientConfig:376)
[2022-12-03 17:23:20,176] WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)
[2022-12-03 17:23:20,176] WARN The configuration 'status.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)
[2022-12-03 17:23:20,176] WARN The configuration 'group.id' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)
[2022-12-03 17:23:20,176] WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)
[2022-12-03 17:23:20,176] WARN The configuration 'config.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)
[2022-12-03 17:23:20,176] WARN The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)
[2022-12-03 17:23:20,176] WARN The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)
[2022-12-03 17:23:20,176] WARN The configuration 'status.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)
[2022-12-03 17:23:20,176] WARN The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)
[2022-12-03 17:23:20,176] WARN The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)
[2022-12-03 17:23:20,176] WARN The configuration 'offset.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)
[2022-12-03 17:23:20,176] WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)
[2022-12-03 17:23:20,176] WARN The configuration 'key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)
[2022-12-03 17:23:20,182] INFO Kafka version: 7.2.2-ccs (org.apache.kafka.common.utils.AppInfoParser:119)
[2022-12-03 17:23:20,182] INFO Kafka commitId: b1f098d4b86fb0e9bc6cc86da16ad16e8cd26ebd (org.apache.kafka.common.utils.AppInfoParser:120)
[2022-12-03 17:23:20,182] INFO Kafka startTimeMs: 1670055800182 (org.apache.kafka.common.utils.AppInfoParser:121)
[2022-12-03 17:23:20,362] INFO Kafka cluster ID: PMGnJFVHRbqJ0BHd__SNpg (org.apache.kafka.connect.util.ConnectUtils:67)
[2022-12-03 17:23:20,375] INFO App info kafka.admin.client for adminclient-1 unregistered (org.apache.kafka.common.utils.AppInfoParser:83)
[2022-12-03 17:23:20,380] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:659)
[2022-12-03 17:23:20,380] INFO Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:663)
[2022-12-03 17:23:20,381] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:669)
[2022-12-03 17:23:20,382] INFO Logging initialized @5709ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log:170)
[2022-12-03 17:23:20,414] INFO Added connector for http://:8083 (org.apache.kafka.connect.runtime.rest.RestServer:117)
[2022-12-03 17:23:20,414] INFO Initializing REST server (org.apache.kafka.connect.runtime.rest.RestServer:188)
[2022-12-03 17:23:20,414] INFO jetty-9.4.48.v20220622; built: 2022-06-21T20:42:25.880Z; git: 6b67c5719d1f4371b33655ff2d047d24e171e49a; jvm 18.0.1.1+2-6 (org.eclipse.jetty.server.Server:375)
[2022-12-03 17:23:20,435] INFO Started http_8083@3eee3e2b{HTTP/1.1, (http/1.1)}{0.0.0.0:8083} (org.eclipse.jetty.server.AbstractConnector:333)
[2022-12-03 17:23:20,435] INFO Started @5758ms (org.eclipse.jetty.server.Server:415)
[2022-12-03 17:23:20,435] INFO Advertised URI: http://192.168.43.212:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:355)
[2022-12-03 17:23:20,451] INFO REST server listening at http://192.168.43.212:8083/, advertising URL http://192.168.43.212:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:203)
[2022-12-03 17:23:20,451] INFO Advertised URI: http://192.168.43.212:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:355)
[2022-12-03 17:23:20,451] INFO REST admin endpoints at http://192.168.43.212:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:204)
[2022-12-03 17:23:20,451] INFO Advertised URI: http://192.168.43.212:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:355)
[2022-12-03 17:23:20,451] INFO Creating Kafka admin client (org.apache.kafka.connect.util.ConnectUtils:51)
[2022-12-03 17:23:20,451] INFO AdminClientConfig values:
  bootstrap.servers = [localhost:9092]
  client.dns.lookup = use_all_dns_ips
  client.id =
  connections.max.idle.ms = 300000
  default.api.timeout.ms = 60000
  metadata.max.age.ms = 300000
  metric.reporters = []
  metrics.num.samples = 2
  metrics.recording.level = INFO
  metrics.sample.window.ms = 30000
  receive.buffer.bytes = 65536
  reconnect.backoff.max.ms = 1000
  reconnect.backoff.ms = 50
  request.timeout.ms = 30000
  retries = 2147483647
  retry.backoff.ms = 100
  sasl.client.callback.handler.class = null
  sasl.jaas.config = null
  sasl.kerberos.kinit.cmd = /usr/bin/kinit
  sasl.kerberos.min.time.before.relogin = 60000
  sasl.kerberos.service.name = null
  sasl.kerberos.ticket.renew.jitter = 0.05
  sasl.kerberos.ticket.renew.window.factor = 0.8
  sasl.login.callback.handler.class = null
  sasl.login.class = null
  sasl.login.connect.timeout.ms = null
  sasl.login.read.timeout.ms = null
  sasl.login.refresh.buffer.seconds = 300
  sasl.login.refresh.min.period.seconds = 60
  sasl.login.refresh.window.factor = 0.8
  sasl.login.refresh.window.jitter = 0.05
  sasl.login.retry.backoff.max.ms = 10000
  sasl.login.retry.backoff.ms = 100
  sasl.mechanism = GSSAPI
  sasl.oauthbearer.clock.skew.seconds = 30
  sasl.oauthbearer.expected.audience = null
  sasl.oauthbearer.expected.issuer = null
  sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
  sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms
= 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS (org.apache.kafka.clients.admin.AdminClientConfig:376)[2022-12-03 17:23:20,451] WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,451] WARN The configuration 'status.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,451] WARN The configuration 'group.id' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,451] WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,451] WARN The configuration 'config.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,451] WARN The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. 
(org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,451] WARN The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,451] WARN The configuration 'status.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,451] WARN The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,451] WARN The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,451] WARN The configuration 'offset.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,451] WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,451] WARN The configuration 'key.converter' was supplied but isn't a known config. 
(org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,451] INFO Kafka version: 7.2.2-ccs (org.apache.kafka.common.utils.AppInfoParser:119)[2022-12-03 17:23:20,451] INFO Kafka commitId: b1f098d4b86fb0e9bc6cc86da16ad16e8cd26ebd (org.apache.kafka.common.utils.AppInfoParser:120)[2022-12-03 17:23:20,451] INFO Kafka startTimeMs: 1670055800451 (org.apache.kafka.common.utils.AppInfoParser:121)[2022-12-03 17:23:20,475] INFO Kafka cluster ID: PMGnJFVHRbqJ0BHd__SNpg (org.apache.kafka.connect.util.ConnectUtils:67)[2022-12-03 17:23:20,480] INFO App info kafka.admin.client for adminclient-2 unregistered (org.apache.kafka.common.utils.AppInfoParser:83)[2022-12-03 17:23:20,482] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:659)[2022-12-03 17:23:20,482] INFO Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:663)[2022-12-03 17:23:20,482] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:669)[2022-12-03 17:23:20,484] INFO Setting up All Policy for ConnectorClientConfigOverride. 
This will allow all client configurations to be overridden (org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy:44)[2022-12-03 17:23:20,484] INFO Creating Kafka admin client (org.apache.kafka.connect.util.ConnectUtils:51)[2022-12-03 17:23:20,484] INFO AdminClientConfig values: bootstrap.servers = [localhost:9092] client.dns.lookup = use_all_dns_ips client.id = connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms 
= 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS (org.apache.kafka.clients.admin.AdminClientConfig:376)[2022-12-03 17:23:20,484] WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,484] WARN The configuration 'status.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,484] WARN The configuration 'group.id' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,484] WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,484] WARN The configuration 'config.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,484] WARN The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,484] WARN The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,484] WARN The configuration 'status.storage.replication.factor' was supplied but isn't a known config. 
(org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,484] WARN The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,484] WARN The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,484] WARN The configuration 'offset.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,484] WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,484] WARN The configuration 'key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,484] INFO Kafka version: 7.2.2-ccs (org.apache.kafka.common.utils.AppInfoParser:119)[2022-12-03 17:23:20,484] INFO Kafka commitId: b1f098d4b86fb0e9bc6cc86da16ad16e8cd26ebd (org.apache.kafka.common.utils.AppInfoParser:120)[2022-12-03 17:23:20,484] INFO Kafka startTimeMs: 1670055800484 (org.apache.kafka.common.utils.AppInfoParser:121)[2022-12-03 17:23:20,504] INFO Kafka cluster ID: PMGnJFVHRbqJ0BHd__SNpg (org.apache.kafka.connect.util.ConnectUtils:67)[2022-12-03 17:23:20,507] INFO App info kafka.admin.client for adminclient-3 unregistered (org.apache.kafka.common.utils.AppInfoParser:83)[2022-12-03 17:23:20,509] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:659)[2022-12-03 17:23:20,509] INFO Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:663)[2022-12-03 17:23:20,509] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:669)[2022-12-03 17:23:20,513] INFO Kafka version: 7.2.2-ccs (org.apache.kafka.common.utils.AppInfoParser:119)[2022-12-03 17:23:20,513] INFO Kafka commitId: 
b1f098d4b86fb0e9bc6cc86da16ad16e8cd26ebd (org.apache.kafka.common.utils.AppInfoParser:120)[2022-12-03 17:23:20,513] INFO Kafka startTimeMs: 1670055800513 (org.apache.kafka.common.utils.AppInfoParser:121)[2022-12-03 17:23:20,569] INFO JsonConverterConfig values: converter.type = key decimal.format = BASE64 schemas.cache.size = 1000 schemas.enable = false (org.apache.kafka.connect.json.JsonConverterConfig:376)[2022-12-03 17:23:20,586] INFO JsonConverterConfig values: converter.type = value decimal.format = BASE64 schemas.cache.size = 1000 schemas.enable = false (org.apache.kafka.connect.json.JsonConverterConfig:376)[2022-12-03 17:23:20,586] INFO Creating Kafka admin client (org.apache.kafka.connect.util.ConnectUtils:51)[2022-12-03 17:23:20,586] INFO AdminClientConfig values: bootstrap.servers = [localhost:9092] client.dns.lookup = use_all_dns_ips client.id = connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null 
sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS (org.apache.kafka.clients.admin.AdminClientConfig:376)[2022-12-03 17:23:20,586] WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,586] WARN The configuration 'status.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,586] WARN The configuration 'group.id' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,586] WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,586] WARN The configuration 'config.storage.replication.factor' was supplied but isn't a known config. 
(org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,586] WARN The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,586] WARN The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,586] WARN The configuration 'status.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,586] WARN The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,586] WARN The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,586] WARN The configuration 'offset.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,586] WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,586] WARN The configuration 'key.converter' was supplied but isn't a known config. 
(org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,586] INFO Kafka version: 7.2.2-ccs (org.apache.kafka.common.utils.AppInfoParser:119)[2022-12-03 17:23:20,586] INFO Kafka commitId: b1f098d4b86fb0e9bc6cc86da16ad16e8cd26ebd (org.apache.kafka.common.utils.AppInfoParser:120)[2022-12-03 17:23:20,586] INFO Kafka startTimeMs: 1670055800586 (org.apache.kafka.common.utils.AppInfoParser:121)[2022-12-03 17:23:20,586] INFO Kafka cluster ID: PMGnJFVHRbqJ0BHd__SNpg (org.apache.kafka.connect.util.ConnectUtils:67)[2022-12-03 17:23:20,600] INFO App info kafka.admin.client for adminclient-4 unregistered (org.apache.kafka.common.utils.AppInfoParser:83)[2022-12-03 17:23:20,603] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:659)[2022-12-03 17:23:20,603] INFO Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:663)[2022-12-03 17:23:20,603] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:669)[2022-12-03 17:23:20,605] INFO Creating Kafka admin client (org.apache.kafka.connect.util.ConnectUtils:51)[2022-12-03 17:23:20,605] INFO AdminClientConfig values: bootstrap.servers = [localhost:9092] client.dns.lookup = use_all_dns_ips client.id = connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null 
sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS (org.apache.kafka.clients.admin.AdminClientConfig:376)[2022-12-03 17:23:20,605] WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,605] WARN The configuration 'status.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,605] WARN The configuration 'group.id' was supplied but isn't a known config. 
(org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,605] WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,605] WARN The configuration 'config.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,605] WARN The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,605] WARN The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,605] WARN The configuration 'status.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,605] WARN The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,605] WARN The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,605] WARN The configuration 'offset.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,605] WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,605] WARN The configuration 'key.converter' was supplied but isn't a known config. 
(org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,605] INFO Kafka version: 7.2.2-ccs (org.apache.kafka.common.utils.AppInfoParser:119)[2022-12-03 17:23:20,605] INFO Kafka commitId: b1f098d4b86fb0e9bc6cc86da16ad16e8cd26ebd (org.apache.kafka.common.utils.AppInfoParser:120)[2022-12-03 17:23:20,605] INFO Kafka startTimeMs: 1670055800605 (org.apache.kafka.common.utils.AppInfoParser:121)[2022-12-03 17:23:20,620] INFO Kafka cluster ID: PMGnJFVHRbqJ0BHd__SNpg (org.apache.kafka.connect.util.ConnectUtils:67)[2022-12-03 17:23:20,622] INFO App info kafka.admin.client for adminclient-5 unregistered (org.apache.kafka.common.utils.AppInfoParser:83)[2022-12-03 17:23:20,623] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:659)[2022-12-03 17:23:20,623] INFO Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:663)[2022-12-03 17:23:20,624] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:669)[2022-12-03 17:23:20,624] INFO Creating Kafka admin client (org.apache.kafka.connect.util.ConnectUtils:51)[2022-12-03 17:23:20,624] INFO AdminClientConfig values: bootstrap.servers = [localhost:9092] client.dns.lookup = use_all_dns_ips client.id = connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null 
sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS (org.apache.kafka.clients.admin.AdminClientConfig:376)[2022-12-03 17:23:20,624] WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,624] WARN The configuration 'status.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,624] WARN The configuration 'group.id' was supplied but isn't a known config. 
(org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,624] WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,624] WARN The configuration 'config.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,624] WARN The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,624] WARN The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,624] WARN The configuration 'status.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,624] WARN The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,624] WARN The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,624] WARN The configuration 'offset.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,624] WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)[2022-12-03 17:23:20,624] WARN The configuration 'key.converter' was supplied but isn't a known config. 
(org.apache.kafka.clients.admin.AdminClientConfig:384)
[2022-12-03 17:23:20,624] INFO Kafka version: 7.2.2-ccs (org.apache.kafka.common.utils.AppInfoParser:119)
[2022-12-03 17:23:20,624] INFO Kafka commitId: b1f098d4b86fb0e9bc6cc86da16ad16e8cd26ebd (org.apache.kafka.common.utils.AppInfoParser:120)
[2022-12-03 17:23:20,624] INFO Kafka startTimeMs: 1670055800624 (org.apache.kafka.common.utils.AppInfoParser:121)
[2022-12-03 17:23:20,624] INFO Kafka cluster ID: PMGnJFVHRbqJ0BHd__SNpg (org.apache.kafka.connect.util.ConnectUtils:67)
[2022-12-03 17:23:20,639] INFO App info kafka.admin.client for adminclient-6 unregistered (org.apache.kafka.common.utils.AppInfoParser:83)
[2022-12-03 17:23:20,639] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:659)
[2022-12-03 17:23:20,639] INFO Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:663)
[2022-12-03 17:23:20,640] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:669)
[2022-12-03 17:23:20,642] INFO Creating Kafka admin client (org.apache.kafka.connect.util.ConnectUtils:51)
[2022-12-03 17:23:20,642] INFO AdminClientConfig values:
    bootstrap.servers = [localhost:9092]
    client.dns.lookup = use_all_dns_ips
    client.id =
    connections.max.idle.ms = 300000
    default.api.timeout.ms = 60000
    request.timeout.ms = 30000
    retries = 2147483647
    security.protocol = PLAINTEXT
    ssl.protocol = TLSv1.3
    [... remaining metrics / sasl.* / ssl.* entries elided; all null or default ...]
 (org.apache.kafka.clients.admin.AdminClientConfig:376)
[2022-12-03 17:23:20,642] WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:384)
[... the same "was supplied but isn't a known config" WARN repeated for 'status.storage.topic', 'group.id', 'plugin.path', 'config.storage.replication.factor', 'offset.flush.interval.ms', 'key.converter.schemas.enable', 'status.storage.replication.factor', 'value.converter.schemas.enable', 'offset.storage.replication.factor', 'offset.storage.topic', 'value.converter', and 'key.converter' ...]
[2022-12-03 17:23:20,642] INFO Kafka version: 7.2.2-ccs / commitId: b1f098d4b86fb0e9bc6cc86da16ad16e8cd26ebd / startTimeMs: 1670055800642 (org.apache.kafka.common.utils.AppInfoParser)
[2022-12-03 17:23:20,658] INFO Kafka cluster ID: PMGnJFVHRbqJ0BHd__SNpg (org.apache.kafka.connect.util.ConnectUtils:67)
[2022-12-03 17:23:20,661] INFO App info kafka.admin.client for adminclient-7 unregistered (org.apache.kafka.common.utils.AppInfoParser:83)
[2022-12-03 17:23:20,662] INFO Metrics scheduler closed / metrics reporters closed (org.apache.kafka.common.metrics.Metrics)
[2022-12-03 17:23:20,663] INFO Kafka Connect distributed worker initialization took 2628ms (org.apache.kafka.connect.cli.ConnectDistributed:138)
[2022-12-03 17:23:20,663] INFO Kafka Connect starting (org.apache.kafka.connect.runtime.Connect:51)
[2022-12-03 17:23:20,663] INFO Initializing REST resources (org.apache.kafka.connect.runtime.rest.RestServer:208)
[2022-12-03 17:23:20,663] INFO [Worker clientId=connect-1, groupId=connect-cluster] Herder starting (org.apache.kafka.connect.runtime.distributed.DistributedHerder:318)
[2022-12-03 17:23:20,663] INFO Worker starting (org.apache.kafka.connect.runtime.Worker:184)
[2022-12-03 17:23:20,663] INFO Starting KafkaOffsetBackingStore (org.apache.kafka.connect.storage.KafkaOffsetBackingStore:150)
[2022-12-03 17:23:20,663] INFO Starting KafkaBasedLog with topic connect-offsets (org.apache.kafka.connect.util.KafkaBasedLog:166)
[2022-12-03 17:23:20,679] INFO AdminClientConfig values: [... identical to the AdminClientConfig block above ...] (org.apache.kafka.clients.admin.AdminClientConfig:376)
[... the same set of "isn't a known config" WARNs as above, plus 'metrics.context.connect.group.id' and 'metrics.context.connect.kafka.cluster.id' ...]
[2022-12-03 17:23:20,679] INFO Kafka version: 7.2.2-ccs / commitId / startTimeMs: 1670055800679 (org.apache.kafka.common.utils.AppInfoParser)
[2022-12-03 17:23:20,691] INFO Adding admin resources to main listener (org.apache.kafka.connect.runtime.rest.RestServer:225)
[2022-12-03 17:23:20,737] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session:334)
[2022-12-03 17:23:20,737] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session:339)
[2022-12-03 17:23:20,737] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session:132)
Dec 03, 2022 5:23:20 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.RootResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.RootResource will be ignored.
[... the same Jersey WARNING repeated for ConnectorsResource, ConnectorPluginsResource, and LoggingResource ...]
Dec 03, 2022 5:23:21 PM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected: WARNING: The (sub)resource method listLoggers in org.apache.kafka.connect.runtime.rest.resources.LoggingResource contains empty path annotation. [... likewise for createConnector, listConnectors, listConnectorPlugins, and serverInfo ...]
[2022-12-03 17:23:21,074] INFO Started o.e.j.s.ServletContextHandler@75ae4a1f{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:921)
[2022-12-03 17:23:21,074] INFO REST resources initialized; server is started and ready to handle requests (org.apache.kafka.connect.runtime.rest.RestServer:303)
[2022-12-03 17:23:21,074] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:57)
[2022-12-03 17:23:21,186] INFO Created topic (name=connect-offsets, numPartitions=25, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=compact}) on brokers at localhost:9092 (org.apache.kafka.connect.util.TopicAdmin:389)
[2022-12-03 17:23:21,190] INFO ProducerConfig values:
    acks = -1
    bootstrap.servers = [localhost:9092]
    client.id = producer-1
    delivery.timeout.ms = 2147483647
    enable.idempotence = false
    key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    max.in.flight.requests.per.connection = 1
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    [... remaining producer / sasl.* / ssl.* entries elided; all null or default ...]
 (org.apache.kafka.clients.producer.ProducerConfig:376)
[... the same set of "isn't a known config" WARNs as above ...]
[2022-12-03 17:23:21,203] INFO [Producer clientId=producer-1] Cluster ID: PMGnJFVHRbqJ0BHd__SNpg (org.apache.kafka.clients.Metadata:287)
[2022-12-03 17:23:21,209] INFO Kafka version: 7.2.2-ccs / commitId / startTimeMs: 1670055801209 (org.apache.kafka.common.utils.AppInfoParser)
[2022-12-03 17:23:21,209] INFO ConsumerConfig values:
    auto.offset.reset = earliest
    bootstrap.servers = [localhost:9092]
    client.id = consumer-connect-cluster-1
    enable.auto.commit = false
    group.id = connect-cluster
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
    value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
    [... remaining consumer / sasl.* / ssl.* entries elided; all null or default ...]
 (org.apache.kafka.clients.consumer.ConsumerConfig:376)
[... the same set of "isn't a known config" WARNs as above ...]
[2022-12-03 17:23:21,225] INFO Kafka version: 7.2.2-ccs / commitId / startTimeMs: 1670055801225 (org.apache.kafka.common.utils.AppInfoParser)
[2022-12-03 17:23:21,225] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Cluster ID: PMGnJFVHRbqJ0BHd__SNpg (org.apache.kafka.clients.Metadata:287)
[2022-12-03 17:23:21,241] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Subscribed to partition(s): connect-offsets-0 through connect-offsets-24 (all 25 partitions) (org.apache.kafka.clients.consumer.KafkaConsumer:1123)
[2022-12-03 17:23:21,241] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Seeking to EARLIEST offset of partition connect-offsets-0 (org.apache.kafka.clients.consumer.internals.SubscriptionState:642)
[... the same "Seeking to EARLIEST offset" line repeated for the other 24 connect-offsets partitions ...]
[2022-12-03 17:23:21,257] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting the last seen epoch of partition connect-offsets-0 to 0 since the associated topicId changed from null to x-Icc_YwRL6lK6kjGAUbdg (org.apache.kafka.clients.Metadata:402)
[... the same "Resetting the last seen epoch" line repeated for the remaining connect-offsets partitions ...]
[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting the last seen epoch of partition
connect-offsets-8 to 0 since the associated topicId changed from null to x-Icc_YwRL6lK6kjGAUbdg (org.apache.kafka.clients.Metadata:402)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting the last seen epoch of partition connect-offsets-2 to 0 since the associated topicId changed from null to x-Icc_YwRL6lK6kjGAUbdg (org.apache.kafka.clients.Metadata:402)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting the last seen epoch of partition connect-offsets-12 to 0 since the associated topicId changed from null to x-Icc_YwRL6lK6kjGAUbdg (org.apache.kafka.clients.Metadata:402)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting the last seen epoch of partition connect-offsets-19 to 0 since the associated topicId changed from null to x-Icc_YwRL6lK6kjGAUbdg (org.apache.kafka.clients.Metadata:402)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting the last seen epoch of partition connect-offsets-14 to 0 since the associated topicId changed from null to x-Icc_YwRL6lK6kjGAUbdg (org.apache.kafka.clients.Metadata:402)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting the last seen epoch of partition connect-offsets-1 to 0 since the associated topicId changed from null to x-Icc_YwRL6lK6kjGAUbdg (org.apache.kafka.clients.Metadata:402)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting the last seen epoch of partition connect-offsets-6 to 0 since the associated topicId changed from null to x-Icc_YwRL6lK6kjGAUbdg (org.apache.kafka.clients.Metadata:402)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting the last seen epoch of partition connect-offsets-7 to 0 since 
the associated topicId changed from null to x-Icc_YwRL6lK6kjGAUbdg (org.apache.kafka.clients.Metadata:402)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting the last seen epoch of partition connect-offsets-21 to 0 since the associated topicId changed from null to x-Icc_YwRL6lK6kjGAUbdg (org.apache.kafka.clients.Metadata:402)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-10 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-8 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-14 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-12 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. 
(org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-2 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-6 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-4 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-24 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. 
(org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-18 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-16 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-22 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-20 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-9 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. 
(org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-7 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-13 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-11 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-1 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-5 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. 
(org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,278] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-3 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,293] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-23 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,294] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-17 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,294] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-15 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,294] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-21 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. 
(org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,294] INFO [Consumer clientId=consumer-connect-cluster-1, groupId=connect-cluster] Resetting offset for partition connect-offsets-19 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:399)[2022-12-03 17:23:21,294] INFO Finished reading KafkaBasedLog for topic connect-offsets (org.apache.kafka.connect.util.KafkaBasedLog:206)[2022-12-03 17:23:21,294] INFO Started KafkaBasedLog for topic connect-offsets (org.apache.kafka.connect.util.KafkaBasedLog:208)[2022-12-03 17:23:21,294] INFO Finished reading offsets topic and starting KafkaOffsetBackingStore (org.apache.kafka.connect.storage.KafkaOffsetBackingStore:152)[2022-12-03 17:23:21,294] INFO Worker started (org.apache.kafka.connect.runtime.Worker:191)[2022-12-03 17:23:21,294] INFO Starting KafkaBasedLog with topic connect-status (org.apache.kafka.connect.util.KafkaBasedLog:166)[2022-12-03 17:23:21,369] INFO Created topic (name=connect-status, numPartitions=5, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=compact}) on brokers at localhost:9092 (org.apache.kafka.connect.util.TopicAdmin:389)[2022-12-03 17:23:21,369] INFO ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [localhost:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = producer-2 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO 
metrics.sample.window.ms = 30000 partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 0 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = 
null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer (org.apache.kafka.clients.producer.ProducerConfig:376)[2022-12-03 17:23:21,372] WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:384)[2022-12-03 17:23:21,372] WARN The configuration 'metrics.context.connect.group.id' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:384)[2022-12-03 17:23:21,372] WARN The configuration 'status.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:384)[2022-12-03 17:23:21,372] WARN The configuration 'group.id' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:384)[2022-12-03 17:23:21,372] WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:384)[2022-12-03 17:23:21,372] WARN The configuration 'config.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:384)[2022-12-03 17:23:21,372] WARN The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:384)[2022-12-03 17:23:21,372] WARN The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:384)[2022-12-03 17:23:21,372] WARN The configuration 'metrics.context.connect.kafka.cluster.id' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:384)[2022-12-03 17:23:21,372] WARN The configuration 'status.storage.replication.factor' was supplied but isn't a known config. 
(org.apache.kafka.clients.producer.ProducerConfig:384)[2022-12-03 17:23:21,372] WARN The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:384)[2022-12-03 17:23:21,372] WARN The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:384)[2022-12-03 17:23:21,372] WARN The configuration 'offset.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:384)[2022-12-03 17:23:21,372] WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:384)[2022-12-03 17:23:21,372] WARN The configuration 'key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:384)[2022-12-03 17:23:21,381] INFO Kafka version: 7.2.2-ccs (org.apache.kafka.common.utils.AppInfoParser:119)[2022-12-03 17:23:21,382] INFO Kafka commitId: b1f098d4b86fb0e9bc6cc86da16ad16e8cd26ebd (org.apache.kafka.common.utils.AppInfoParser:120)[2022-12-03 17:23:21,381] INFO [Producer clientId=producer-2] Cluster ID: PMGnJFVHRbqJ0BHd__SNpg (org.apache.kafka.clients.Metadata:287)[2022-12-03 17:23:21,383] INFO Kafka startTimeMs: 1670055801381 (org.apache.kafka.common.utils.AppInfoParser:121)[2022-12-03 17:23:21,383] INFO ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.offset.reset = earliest bootstrap.servers = [localhost:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = consumer-connect-cluster-2 client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = connect-cluster group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close 
= true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 
socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig:376)[2022-12-03 17:23:21,383] WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:384)[2022-12-03 17:23:21,383] WARN The configuration 'metrics.context.connect.group.id' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:384)[2022-12-03 17:23:21,383] WARN The configuration 'status.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:384)[2022-12-03 17:23:21,383] WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:384)[2022-12-03 17:23:21,383] WARN The configuration 'config.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:384)[2022-12-03 17:23:21,383] WARN The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:384)[2022-12-03 17:23:21,383] WARN The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. 
(org.apache.kafka.clients.consumer.ConsumerConfig:384)[2022-12-03 17:23:21,383] WARN The configuration 'metrics.context.connect.kafka.cluster.id' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:384)[2022-12-03 17:23:21,383] WARN The configuration 'status.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:384)[2022-12-03 17:23:21,383] WARN The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:384)[2022-12-03 17:23:21,383] WARN The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:384)[2022-12-03 17:23:21,383] WARN The configuration 'offset.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:384)[2022-12-03 17:23:21,383] WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:384)[2022-12-03 17:23:21,383] WARN The configuration 'key.converter' was supplied but isn't a known config. 
[2022-12-03 17:23:21,383] INFO Kafka version: 7.2.2-ccs (org.apache.kafka.common.utils.AppInfoParser:119)
[2022-12-03 17:23:21,383] INFO Kafka commitId: b1f098d4b86fb0e9bc6cc86da16ad16e8cd26ebd (org.apache.kafka.common.utils.AppInfoParser:120)
[2022-12-03 17:23:21,399] INFO [Consumer clientId=consumer-connect-cluster-2, groupId=connect-cluster] Cluster ID: PMGnJFVHRbqJ0BHd__SNpg (org.apache.kafka.clients.Metadata:287)
[2022-12-03 17:23:21,402] INFO [Consumer clientId=consumer-connect-cluster-2, groupId=connect-cluster] Subscribed to partition(s): connect-status-0, connect-status-4, connect-status-1, connect-status-2, connect-status-3 (org.apache.kafka.clients.consumer.KafkaConsumer:1123)
[2022-12-03 17:23:21,402] INFO [Consumer clientId=consumer-connect-cluster-2, groupId=connect-cluster] Seeking to EARLIEST offset of partition connect-status-0 (org.apache.kafka.clients.consumer.internals.SubscriptionState:642)
    ... (same message repeated for connect-status-1 through connect-status-4)
[2022-12-03 17:23:21,410] INFO [Consumer clientId=consumer-connect-cluster-2, groupId=connect-cluster] Resetting the last seen epoch of partition connect-status-0 to 0 since the associated topicId changed from null to ym5u2s8ySty4G9yhuA_gcQ (org.apache.kafka.clients.Metadata:402)
    ... (same message repeated for connect-status-1 through connect-status-4)
[2022-12-03 17:23:21,410] INFO [Consumer clientId=consumer-connect-cluster-2, groupId=connect-cluster] Resetting offset for partition connect-status-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:399)
    ... (same message repeated for connect-status-1 through connect-status-4)
[2022-12-03 17:23:21,410] INFO Finished reading KafkaBasedLog for topic connect-status (org.apache.kafka.connect.util.KafkaBasedLog:206)
[2022-12-03 17:23:21,410] INFO Started KafkaBasedLog for topic connect-status (org.apache.kafka.connect.util.KafkaBasedLog:208)
[2022-12-03 17:23:21,410] INFO Starting KafkaConfigBackingStore (org.apache.kafka.connect.storage.KafkaConfigBackingStore:291)
[2022-12-03 17:23:21,410] INFO Starting KafkaBasedLog with topic connect-configs (org.apache.kafka.connect.util.KafkaBasedLog:166)
[2022-12-03 17:23:21,447] INFO Created topic (name=connect-configs, numPartitions=1, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=compact}) on brokers at localhost:9092 (org.apache.kafka.connect.util.TopicAdmin:389)
[2022-12-03 17:23:21,447] INFO ProducerConfig values: acks = -1, bootstrap.servers = [localhost:9092], client.id = producer-3, enable.idempotence = false, key.serializer = org.apache.kafka.common.serialization.StringSerializer, value.serializer = org.apache.kafka.common.serialization.ByteArraySerializer, ... (rest of the default client-config dump omitted) (org.apache.kafka.clients.producer.ProducerConfig:376)
[2022-12-03 17:23:21,447] WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:384)
    ... (same WARN repeated for 'metrics.context.connect.group.id', 'status.storage.topic', 'group.id', 'plugin.path', 'config.storage.replication.factor', 'offset.flush.interval.ms', 'key.converter.schemas.enable', 'metrics.context.connect.kafka.cluster.id', 'status.storage.replication.factor', 'value.converter.schemas.enable', 'offset.storage.replication.factor', 'offset.storage.topic', 'value.converter', 'key.converter')
[2022-12-03 17:23:21,447] INFO [Producer clientId=producer-3] Cluster ID: PMGnJFVHRbqJ0BHd__SNpg (org.apache.kafka.clients.Metadata:287)
[2022-12-03 17:23:21,459] INFO Kafka version: 7.2.2-ccs (org.apache.kafka.common.utils.AppInfoParser:119)
[2022-12-03 17:23:21,459] INFO ConsumerConfig values: bootstrap.servers = [localhost:9092], client.id = consumer-connect-cluster-3, group.id = connect-cluster, auto.offset.reset = earliest, enable.auto.commit = false, key.deserializer = org.apache.kafka.common.serialization.StringDeserializer, value.deserializer = org.apache.kafka.common.serialization.ByteArrayDeserializer, ... (rest of the default client-config dump omitted) (org.apache.kafka.clients.consumer.ConsumerConfig:376)
[2022-12-03 17:23:21,459] WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:384)
    ... (same set of "isn't a known config" WARNs repeated for the consumer client)
[2022-12-03 17:23:21,459] INFO [Consumer clientId=consumer-connect-cluster-3, groupId=connect-cluster] Cluster ID: PMGnJFVHRbqJ0BHd__SNpg (org.apache.kafka.clients.Metadata:287)
[2022-12-03 17:23:21,469] INFO [Consumer clientId=consumer-connect-cluster-3, groupId=connect-cluster] Subscribed to partition(s): connect-configs-0 (org.apache.kafka.clients.consumer.KafkaConsumer:1123)
[2022-12-03 17:23:21,470] INFO [Consumer clientId=consumer-connect-cluster-3, groupId=connect-cluster] Seeking to EARLIEST offset of partition connect-configs-0 (org.apache.kafka.clients.consumer.internals.SubscriptionState:642)
[2022-12-03 17:23:21,474] INFO [Consumer clientId=consumer-connect-cluster-3, groupId=connect-cluster] Resetting the last seen epoch of partition connect-configs-0 to 0 since the associated topicId changed from null to wzucMnPJSWa92v9021fiWg (org.apache.kafka.clients.Metadata:402)
[2022-12-03 17:23:21,474] INFO [Consumer clientId=consumer-connect-cluster-3, groupId=connect-cluster] Resetting offset for partition connect-configs-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[dongjun:9092 (id: 0 rack: null)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:399)
[2022-12-03 17:23:21,474] INFO Finished reading KafkaBasedLog for topic connect-configs (org.apache.kafka.connect.util.KafkaBasedLog:206)
[2022-12-03 17:23:21,474] INFO Started KafkaBasedLog for topic connect-configs (org.apache.kafka.connect.util.KafkaBasedLog:208)
[2022-12-03 17:23:21,474] INFO Started KafkaConfigBackingStore (org.apache.kafka.connect.storage.KafkaConfigBackingStore:306)
[2022-12-03 17:23:21,474] INFO [Worker clientId=connect-1, groupId=connect-cluster] Herder started (org.apache.kafka.connect.runtime.distributed.DistributedHerder:322)
[2022-12-03 17:23:21,474] INFO [Worker clientId=connect-1, groupId=connect-cluster] Resetting the last seen epoch of partition connect-offsets-0 to 0 since the associated topicId changed from null to x-Icc_YwRL6lK6kjGAUbdg (org.apache.kafka.clients.Metadata:402)
    ... (same message repeated for connect-offsets-1 through connect-offsets-24, for connect-status-0 through connect-status-4, and for connect-configs-0)
[2022-12-03 17:23:21,499] INFO [Worker clientId=connect-1, groupId=connect-cluster] Cluster ID: PMGnJFVHRbqJ0BHd__SNpg (org.apache.kafka.clients.Metadata:287)
[2022-12-03 17:23:22,120] INFO [Worker clientId=connect-1, groupId=connect-cluster] Resetting the last seen epoch of partition __consumer_offsets-0 to 0 since the associated topicId changed from null to rqQbN08wSU-9QDn4uqdIdQ (org.apache.kafka.clients.Metadata:402)
    ... (same message repeated for the remaining __consumer_offsets partitions; log truncated here)
last seen epoch of partition __consumer_offsets-42 to 0 since the associated topicId changed from null to rqQbN08wSU-9QDn4uqdIdQ (org.apache.kafka.clients.Metadata:402)[2022-12-03 17:23:22,136] INFO [Worker clientId=connect-1, groupId=connect-cluster] Resetting the last seen epoch of partition __consumer_offsets-7 to 0 since the associated topicId changed from null to rqQbN08wSU-9QDn4uqdIdQ (org.apache.kafka.clients.Metadata:402)[2022-12-03 17:23:22,136] INFO [Worker clientId=connect-1, groupId=connect-cluster] Resetting the last seen epoch of partition __consumer_offsets-21 to 0 since the associated topicId changed from null to rqQbN08wSU-9QDn4uqdIdQ (org.apache.kafka.clients.Metadata:402)[2022-12-03 17:23:22,136] INFO [Worker clientId=connect-1, groupId=connect-cluster] Discovered group coordinator dongjun:9092 (id: 2147483647 rack: null) (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:897)[2022-12-03 17:23:22,136] INFO [Worker clientId=connect-1, groupId=connect-cluster] Rebalance started (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:228)[2022-12-03 17:23:22,136] INFO [Worker clientId=connect-1, groupId=connect-cluster] (Re-)joining group (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:566)[2022-12-03 17:23:22,152] INFO [Worker clientId=connect-1, groupId=connect-cluster] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:1063)[2022-12-03 17:23:22,152] INFO [Worker clientId=connect-1, groupId=connect-cluster] (Re-)joining group (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:566)[2022-12-03 17:23:22,168] INFO [Worker clientId=connect-1, groupId=connect-cluster] Successfully joined group with generation Generation{generationId=1, memberId='connect-1-a29b010d-d427-4726-a713-0f887e1e81c1', protocol='sessioned'} (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:627)[2022-12-03 17:23:22,202] INFO [Worker clientId=connect-1, groupId=connect-cluster] Successfully synced group in generation Generation{generationId=1, memberId='connect-1-a29b010d-d427-4726-a713-0f887e1e81c1', protocol='sessioned'} (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:802)[2022-12-03 17:23:22,202] INFO [Worker clientId=connect-1, groupId=connect-cluster] Joined group at generation 1 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-a29b010d-d427-4726-a713-0f887e1e81c1', leaderUrl='http://192.168.43.212:8083/', offset=-1, connectorIds=[], taskIds=[], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1853)[2022-12-03 17:23:22,202] INFO [Worker clientId=connect-1, groupId=connect-cluster] Starting connectors and tasks using config offset -1 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1378)[2022-12-03 17:23:22,202] INFO [Worker clientId=connect-1, groupId=connect-cluster] Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1406)[2022-12-03 17:23:22,227] INFO [Producer clientId=producer-3] Resetting the last seen epoch of partition connect-configs-0 to 0 since the associated topicId changed from null to wzucMnPJSWa92v9021fiWg (org.apache.kafka.clients.Metadata:402)[2022-12-03 17:23:22,245] INFO 
[Worker clientId=connect-1, groupId=connect-cluster] Session key updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1721)[2022-12-03 17:23:45,624] INFO JdbcSourceConnectorConfig values: batch.max.rows = 100 catalog.pattern = null connection.attempts = 3 connection.backoff.ms = 10000 connection.password = [hidden] connection.url = jdbc:mysql://mysql-sink:3306/mydb connection.user = root db.timezone = UTC dialect.name = incrementing.column.name = id mode = incrementing numeric.mapping = null numeric.precision.mapping = false poll.interval.ms = 5000 query = query.retry.attempts = -1 query.suffix = quote.sql.identifiers = ALWAYS schema.pattern = null table.blacklist = [] table.monitoring.startup.polling.limit.ms = 10000 table.poll.interval.ms = 60000 table.types = [TABLE] table.whitelist = [user_mydb] timestamp.column.name = [] timestamp.delay.interval.ms = 0 timestamp.granularity = connect_logical timestamp.initial = null topic.prefix = my_topic_ transaction.isolation.mode = DEFAULT validate.non.null = true (io.confluent.connect.jdbc.source.JdbcSourceConnectorConfig:376)[2022-12-03 17:23:45,626] INFO AbstractConfig values: (org.apache.kafka.common.config.AbstractConfig:376)[2022-12-03 17:23:45,636] INFO [Worker clientId=connect-1, groupId=connect-cluster] Connector my-source-connect config updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1677)[2022-12-03 17:23:45,637] INFO [Worker clientId=connect-1, groupId=connect-cluster] Rebalance started (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:228)[2022-12-03 17:23:45,637] INFO [Worker clientId=connect-1, groupId=connect-cluster] (Re-)joining group (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:566)[2022-12-03 17:23:45,640] INFO [Worker clientId=connect-1, groupId=connect-cluster] Successfully joined group with generation Generation{generationId=2, memberId='connect-1-a29b010d-d427-4726-a713-0f887e1e81c1', protocol='sessioned'} 
(org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:627)[2022-12-03 17:23:45,645] INFO [Worker clientId=connect-1, groupId=connect-cluster] Successfully synced group in generation Generation{generationId=2, memberId='connect-1-a29b010d-d427-4726-a713-0f887e1e81c1', protocol='sessioned'} (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:802)[2022-12-03 17:23:45,646] INFO [Worker clientId=connect-1, groupId=connect-cluster] Joined group at generation 2 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-a29b010d-d427-4726-a713-0f887e1e81c1', leaderUrl='http://192.168.43.212:8083/', offset=2, connectorIds=[my-source-connect], taskIds=[], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1853)[2022-12-03 17:23:45,647] INFO [Worker clientId=connect-1, groupId=connect-cluster] Starting connectors and tasks using config offset 2 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1378)[2022-12-03 17:23:45,649] INFO [Worker clientId=connect-1, groupId=connect-cluster] Starting connector my-source-connect (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1455)[2022-12-03 17:23:45,651] INFO [my-source-connect|worker] Creating connector my-source-connect of type io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:264)[2022-12-03 17:23:45,652] INFO [my-source-connect|worker] SourceConnectorConfig values: config.action.reload = restart connector.class = io.confluent.connect.jdbc.JdbcSourceConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = my-source-connect predicates = [] tasks.max = 1 topic.creation.groups = [] transforms = [] value.converter = null (org.apache.kafka.connect.runtime.SourceConnectorConfig:376)[2022-12-03 
17:23:45,652] INFO [my-source-connect|worker] EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.confluent.connect.jdbc.JdbcSourceConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = my-source-connect predicates = [] tasks.max = 1 topic.creation.groups = [] transforms = [] value.converter = null (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:376)[2022-12-03 17:23:45,655] INFO [my-source-connect|worker] Instantiated connector my-source-connect with version 10.6.0 of type class io.confluent.connect.jdbc.JdbcSourceConnector (org.apache.kafka.connect.runtime.Worker:274)[2022-12-03 17:23:45,655] INFO [my-source-connect|worker] Finished creating connector my-source-connect (org.apache.kafka.connect.runtime.Worker:299)[2022-12-03 17:23:45,655] INFO [Worker clientId=connect-1, groupId=connect-cluster] Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1406)[2022-12-03 17:23:45,656] INFO [my-source-connect|worker] Starting JDBC Source Connector (io.confluent.connect.jdbc.JdbcSourceConnector:71)[2022-12-03 17:23:45,656] INFO [my-source-connect|worker] JdbcSourceConnectorConfig values: batch.max.rows = 100 catalog.pattern = null connection.attempts = 3 connection.backoff.ms = 10000 connection.password = [hidden] connection.url = jdbc:mysql://mysql-sink:3306/mydb connection.user = root db.timezone = UTC dialect.name = incrementing.column.name = id mode = incrementing numeric.mapping = null numeric.precision.mapping = false poll.interval.ms = 5000 query = query.retry.attempts = -1 query.suffix = quote.sql.identifiers = ALWAYS schema.pattern = null table.blacklist = [] table.monitoring.startup.polling.limit.ms = 10000 table.poll.interval.ms = 60000 table.types = [TABLE] table.whitelist = [user_mydb] 
timestamp.column.name = [] timestamp.delay.interval.ms = 0 timestamp.granularity = connect_logical timestamp.initial = null topic.prefix = my_topic_ transaction.isolation.mode = DEFAULT validate.non.null = true (io.confluent.connect.jdbc.source.JdbcSourceConnectorConfig:376)[2022-12-03 17:23:45,659] INFO [my-source-connect|worker] Attempting to open connection #1 to MySql (io.confluent.connect.jdbc.util.CachedConnectionProvider:79)[2022-12-03 17:23:48,015] INFO [my-source-connect|worker] Unable to connect to database on attempt 1/3. Will retry in 10000 ms. (io.confluent.connect.jdbc.util.CachedConnectionProvider:86)com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failureThe last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174) at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64) at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:829) at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:449) at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:242) at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:198) at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:683) at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:191) at io.confluent.connect.jdbc.dialect.GenericDatabaseDialect.getConnection(GenericDatabaseDialect.java:250) at io.confluent.connect.jdbc.util.CachedConnectionProvider.newConnection(CachedConnectionProvider.java:80) at io.confluent.connect.jdbc.util.CachedConnectionProvider.getConnection(CachedConnectionProvider.java:52) at io.confluent.connect.jdbc.JdbcSourceConnector.start(JdbcSourceConnector.java:94) at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:184) at 
org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:209) at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:348) at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:331) at org.apache.kafka.connect.runtime.WorkerConnector.doRun(WorkerConnector.java:140) at org.apache.kafka.connect.runtime.WorkerConnector.run(WorkerConnector.java:117) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base/java.lang.Thread.run(Thread.java:833)Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failureThe last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. at java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:67) at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499) at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:483) at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:61) at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:105) at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:151) at com.mysql.cj.exceptions.ExceptionFactory.createCommunicationsException(ExceptionFactory.java:167) at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:89) at com.mysql.cj.NativeSession.connect(NativeSession.java:120) at com.mysql.cj.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:949) at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:819) ... 
20 moreCaused by: java.net.UnknownHostException: 알려진 호스트가 없습니다 (mysql-sink) at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Inet6AddressImpl.java:52) at java.base/java.net.InetAddress$PlatformResolver.lookupByName(InetAddress.java:1048) at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1638) at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:997) at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1628) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1494) at com.mysql.cj.protocol.StandardSocketFactory.connect(StandardSocketFactory.java:133) at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:63) ... 23 more[2022-12-03 17:23:58,026] INFO [my-source-connect|worker] Attempting to open connection #2 to MySql (io.confluent.connect.jdbc.util.CachedConnectionProvider:79)[2022-12-03 17:24:00,300] INFO [my-source-connect|worker] Unable to connect to database on attempt 2/3. Will retry in 10000 ms. (io.confluent.connect.jdbc.util.CachedConnectionProvider:86)com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failureThe last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. 
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174) at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64) at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:829) at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:449) at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:242) at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:198) at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:683) at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:191) at io.confluent.connect.jdbc.dialect.GenericDatabaseDialect.getConnection(GenericDatabaseDialect.java:250) at io.confluent.connect.jdbc.util.CachedConnectionProvider.newConnection(CachedConnectionProvider.java:80) at io.confluent.connect.jdbc.util.CachedConnectionProvider.getConnection(CachedConnectionProvider.java:52) at io.confluent.connect.jdbc.JdbcSourceConnector.start(JdbcSourceConnector.java:94) at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:184) at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:209) at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:348) at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:331) at org.apache.kafka.connect.runtime.WorkerConnector.doRun(WorkerConnector.java:140) at org.apache.kafka.connect.runtime.WorkerConnector.run(WorkerConnector.java:117) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base/java.lang.Thread.run(Thread.java:833)Caused by: 
com.mysql.cj.exceptions.CJCommunicationsException: Communications link failureThe last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. at java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:67) at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499) at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:483) at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:61) at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:105) at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:151) at com.mysql.cj.exceptions.ExceptionFactory.createCommunicationsException(ExceptionFactory.java:167) at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:89) at com.mysql.cj.NativeSession.connect(NativeSession.java:120) at com.mysql.cj.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:949) at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:819) ... 20 moreCaused by: java.net.UnknownHostException: 알려진 호스트가 없습니다 (mysql-sink) at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Inet6AddressImpl.java:52) at java.base/java.net.InetAddress$PlatformResolver.lookupByName(InetAddress.java:1048) at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1638) at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:997) at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1628) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1494) at com.mysql.cj.protocol.StandardSocketFactory.connect(StandardSocketFactory.java:133) at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:63) ... 
	... 23 more

connect-distributed.properties:

##
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

# This file contains some of the configurations for the Kafka Connect distributed worker. This file is intended
# to be used with the examples, and some settings may differ from those used in a production system, especially
# the `bootstrap.servers` and those specifying replication factors.

# A list of host/port pairs to use for establishing the initial connection to the Kafka cluster.
bootstrap.servers=localhost:9092

# unique name for the cluster, used in forming the Connect cluster group. Note that this must not conflict with consumer group IDs
group.id=connect-cluster

# The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will
# need to configure these based on the format they want their data in when loaded from or stored into Kafka
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Converter-specific settings can be passed in by prefixing the Converter's setting with the converter we want to apply
# it to
key.converter.schemas.enable=true
value.converter.schemas.enable=true

# Topic to use for storing offsets. This topic should have many partitions and be replicated and compacted.
# Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create
# the topic before starting Kafka Connect if a specific topic configuration is needed.
# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value.
# Since this means there must be at least as many brokers as the maximum replication factor used, we'd like to be able
# to run this example on a single-broker cluster and so here we instead set the replication factor to 1.
offset.storage.topic=connect-offsets
offset.storage.replication.factor=1
#offset.storage.partitions=25

# Topic to use for storing connector and task configurations; note that this should be a single partition, highly replicated,
# and compacted topic. Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create
# the topic before starting Kafka Connect if a specific topic configuration is needed.
# (replication-factor notes identical to the offsets topic above)
config.storage.topic=connect-configs
config.storage.replication.factor=1

# Topic to use for storing statuses. This topic can have multiple partitions and should be replicated and compacted.
# Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create
# the topic before starting Kafka Connect if a specific topic configuration is needed.
# (replication-factor notes identical to the offsets topic above)
status.storage.topic=connect-status
status.storage.replication.factor=1
#status.storage.partitions=5

# Flush much faster than normal, which is useful for testing/debugging
offset.flush.interval.ms=10000

# List of comma-separated URIs the REST API will listen on. The supported protocols are HTTP and HTTPS.
# Specify hostname as 0.0.0.0 to bind to all interfaces.
# Leave hostname empty to bind to default interface.
# Examples of legal listener lists: HTTP://myhost:8083,HTTPS://myhost:8084
#listeners=HTTP://:8083

# The Hostname & Port that will be given out to other workers to connect to i.e. URLs that are routable from other servers.
# If not set, it uses the value for "listeners" if configured.
#rest.advertised.host.name=
#rest.advertised.port=
#rest.advertised.listener=

# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins
# (connectors, converters, transformations). The list should consist of top level directories that include
# any combination of:
# a) directories immediately containing jars with plugins and their dependencies
# b) uber-jars with plugins and their dependencies
# c) directories immediately containing the package directory structure of classes of plugins and their dependencies
# Examples:
# plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors,
# plugin.path=/usr/share/java
plugin.path=\C:\\confluentinc-kafka-connect-jdbc-10.6.0\\lib

I put the mysql-connector-java-8.0.29 file under confluentinc-kafka-connect-jdbc-10.6.0\lib.
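The root cause in the trace above is `java.net.UnknownHostException: 알려진 호스트가 없습니다 (mysql-sink)` ("no such host is known"): the JVM throws this when name resolution itself fails, before any TCP connection is attempted. A Docker container name such as `mysql-sink` is only resolvable from other containers attached to the same user-defined Docker network; since `bootstrap.servers=localhost:9092` suggests this Connect worker runs directly on the host, the name is invisible to it. A minimal sketch of the same resolution check (the host names are taken from the question; `.invalid` is an RFC 2606 reserved TLD used here as a guaranteed-unresolvable stand-in):

```python
import socket

def can_resolve(host: str) -> bool:
    """Return True if `host` resolves to at least one address -
    the same lookup the JVM performs before UnknownHostException."""
    try:
        socket.getaddrinfo(host, 3306)
        return True
    except socket.gaierror:
        return False

# "localhost" resolves everywhere; a Docker container name like "mysql-sink"
# resolves only from peers on the same user-defined Docker network.
print(can_resolve("localhost"))           # True
print(can_resolve("mysql-sink.invalid"))  # False: reserved TLD, never resolves
```

If the worker stays on the host, one likely fix is to publish MySQL's port (`docker run -p 3306:3306 ...`) and change `connection.url` to `jdbc:mysql://localhost:3306/mydb`; if the worker itself runs as a container, attaching it to the same network as the MySQL container (e.g. `--network msa-network`, the network name used earlier in the question) should make `mysql-sink` resolvable.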
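For reference, the settings in the `JdbcSourceConnectorConfig values:` dump correspond to a connector registration payload like the one below. This is a sketch reconstructed from the log, not the exact JSON that was submitted; the REST endpoint `http://localhost:8083/connectors` (port 8083 appears as the worker's `leaderUrl` in the log), the `localhost` host in `connection.url`, and the `<hidden>` password placeholder are assumptions:

```python
import json

# Reconstructed from the JdbcSourceConnectorConfig dump in the log above.
# connection.url is switched from "mysql-sink" to "localhost" on the
# assumption that the worker runs on the host with MySQL's port published.
connector = {
    "name": "my-source-connect",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:mysql://localhost:3306/mydb",
        "connection.user": "root",
        "connection.password": "<hidden>",  # placeholder, as in the log
        "mode": "incrementing",
        "incrementing.column.name": "id",
        "table.whitelist": "user_mydb",
        "topic.prefix": "my_topic_",
        "tasks.max": "1",
    },
}

# Re-register after fixing the URL, e.g.:
#   curl -X POST -H "Content-Type: application/json" \
#        -d @connector.json http://localhost:8083/connectors
print(json.dumps(connector, indent=2))
```

Deleting and re-creating the connector with the corrected `connection.url` (rather than editing worker properties) is enough here, because the JDBC URL lives in the connector config stored in `connect-configs`, not in `connect-distributed.properties`.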