Resolved question
Hello.
I'm enjoying the lectures.
I'm following the course at work, so my setup differs somewhat from the lecture environment, and I'm stuck at the connection step.
From my personal PC, I remote into a server machine at 192.168.100.170, and on that server I created an Ubuntu VM in VirtualBox.
I gave the VM a static IP of 192.168.88.111.
For convenience I then connected to it over SSH with a program like PuTTY.
I set up the VM's port forwarding as follows, and both rules work:
ssh: 192.168.100.170:27722 -> 192.168.88.111:22
kafka: 192.168.100.170:29092 -> 192.168.88.111:9092
Then, running the SimpleProducer exercise from IntelliJ on my personal PC, I set
props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.100.170:29092");
which seemed like the sensible choice.
I also configured server.properties in the VM to allow external connections, though I'm not certain I did it correctly.
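For reference, "allowing external connections" on the broker side usually comes down to binding the listener to all interfaces in server.properties. A minimal sketch, assuming a single PLAINTEXT listener on the default port (whether this matches your actual file is an assumption):

```properties
# Bind the broker socket to every interface inside the VM,
# so connections arriving via the forwarded host port are accepted.
listeners=PLAINTEXT://0.0.0.0:9092
```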
When I ran the code, I got a socket timeout error and nothing arrived at the Kafka consumer.
Looking closely at the log, the client clearly recognizes the topicId, so a connection seems to be established at some level, but I can't tell what's wrong.
Starting Gradle Daemon...
Gradle Daemon started in 1 s 324 ms
> Task :producers:compileJava UP-TO-DATE
> Task :producers:processResources NO-SOURCE
> Task :producers:classes UP-TO-DATE
> Task :producers:SimpleProducer.main()
[main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
acks = -1
batch.size = 16384
bootstrap.servers = [192.168.100.170:39092]
buffer.memory = 33554432
client.dns.lookup = use_all_dns_ips
client.id = producer-1
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = true
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metadata.max.idle.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.connect.timeout.ms = null
sasl.login.read.timeout.ms = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.login.retry.backoff.max.ms = 10000
sasl.login.retry.backoff.ms = 100
sasl.mechanism = GSSAPI
sasl.oauthbearer.clock.skew.seconds = 30
sasl.oauthbearer.expected.audience = null
sasl.oauthbearer.expected.issuer = null
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
sasl.oauthbearer.jwks.endpoint.url = null
sasl.oauthbearer.scope.claim.name = scope
sasl.oauthbearer.sub.claim.name = sub
sasl.oauthbearer.token.endpoint.url = null
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.1.0
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 37edeed0777bacb3
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1706742127571
[kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.Metadata - [Producer clientId=producer-1] Resetting the last seen epoch of partition test-topic-0 to 0 since the associated topicId changed from null to jRkpHnfwT8mfWJ3PB9HHmg
[kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.Metadata - [Producer clientId=producer-1] Cluster ID: ysNHdh2DQTKvR3X0yruxdg
[kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Disconnecting from node 0 due to socket connection setup timeout. The timeout value is 9728 ms.
[kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Disconnecting from node 0 due to socket connection setup timeout. The timeout value is 18153 ms.
[kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Node 0 disconnected.
[kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Connection to node 0 (/192.168.88.111:9092) could not be established. Broker may not be available.
[kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Node 0 disconnected.
[kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Connection to node 0 (/192.168.88.111:9092) could not be established. Broker may not be available.
[kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Node 0 disconnected.
[kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Connection to node 0 (/192.168.88.111:9092) could not be established. Broker may not be available.
[kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Node 0 disconnected.
[kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Connection to node 0 (/192.168.88.111:9092) could not be established. Broker may not be available.
[main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
[main] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed
[main] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter
[main] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed
[main] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.producer for producer-1 unregistered
Deprecated Gradle features were used in this build, making it incompatible with Gradle 9.0.
You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.
For more on this, please refer to https://docs.gradle.org/8.4/userguide/command_line_interface.html#sec:command_line_warnings in the Gradle documentation.
BUILD SUCCESSFUL in 2m 5s
2 actionable tasks: 1 executed, 1 up-to-date
오전 8:04:07: Execution finished ':producers:SimpleProducer.main()'.
[Producer clientId=producer-1] Resetting the last seen epoch of partition test-topic-0 to 0 since the associated topicId changed from null to jRkpHnfwT8mfWJ3PB9HHmg
Judging from this line, the producer does recognize the topic...
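The warnings above show the symptom precisely: the bootstrap connection through the forwarded port succeeds (metadata and topicId come back), but the broker's metadata then tells the client to talk to 192.168.88.111:9092, which is unreachable from outside the server. A quick way to confirm this split is a plain TCP reachability check. A sketch in Java (the addresses are the ones from this thread, used purely as an example):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    /** Returns true if a TCP connection to host:port succeeds within timeoutMs. */
    static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Forwarded port on the host machine: expected reachable from the PC.
        System.out.println("host:29092      -> " + isReachable("192.168.100.170", 29092, 2000));
        // The VM-internal address the broker advertises: expected unreachable.
        System.out.println("vm-internal:9092 -> " + isReachable("192.168.88.111", 9092, 2000));
    }
}
```

If the first check passes and the second fails, the port forwarding is fine and the problem is purely in what address the broker advertises.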
Thank you.
2 Answers
Hello,
Oh, I see you've already solved the problem.
It looks like you set up the practice environment differently because you have a background in server configuration (or perhaps you wanted to try Kafka on a large server).
The course exercises assume that the client and the Kafka broker are on the same network.
If the server machine is on a different network from your personal PC, you need to configure advertised.listeners exactly as you wrote below.
Thank you.
advertised.listeners=PLAINTEXT://192.168.100.170:39092
I realized that this setting has to point at the IP address of the machine running VirtualBox.
That way the connection can be routed personal PC -> server PC -> VM.
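Putting the two settings together: listeners controls where the broker binds its socket, while advertised.listeners is the address the broker hands back to clients in metadata responses, which is why the log above showed the producer retrying 192.168.88.111:9092. A sketch of the resulting server.properties fragment, using the port from the setting above (the exact forward rule for 39092 is assumed to mirror the one described for 29092):

```properties
# Where the broker actually listens inside the VM
listeners=PLAINTEXT://0.0.0.0:9092
# What clients are told to connect to in metadata responses:
# the host machine's IP and the forwarded port
advertised.listeners=PLAINTEXT://192.168.100.170:39092
```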
Hello. Thanks to the excellent lectures, I'm getting a detailed understanding of Kafka. 🙂
I'll post again if further questions come up.
Thank you ^^