>> # The maximum size of a request that the socket server will accept
>> # (protection against OOM)
>> socket.request.max.bytes=104857600
>>
>> # Hostname the broker will bind to. If not set, the server will bind to
>> # all interfaces
>> #host.name=kafka-1
>>
>> # Hostname the broker will advertise to producers and consumers. If not
>> # set, it uses the
>> # value for "host.name" if configured. Otherwise, it will use the value
>> # returned from
>> # java.net.InetAddress.getCanonicalHostName().
>> advertised.host.name=kafka-1.qa.ciq-internal.net
>>
>> # The port to publish to ZooKeeper for clients to use. If this is not set,
>> # it will publish the same port that the broker binds to.
>> advertised.port=9092
Thank you again for your help.
CJ
_____________
Hello CJ,
You have to set the fetch size to be >= the maximum possible message size;
otherwise, consumption will block upon encountering these large messages.
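As a minimal sketch, the advice above maps to a single consumer property. This fragment assumes the 0.8-era high-level consumer (in the 0.7 consumer the equivalent property is fetch.size, as in CJ's message); the value is illustrative:

```properties
# consumer.properties (illustrative)
# Must be at least the largest message the brokers may return;
# otherwise the consumer blocks when it hits an oversized message.
fetch.message.max.bytes=20971520
```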
When you say "poor performance", what do you mean exactly? Are you seeing
low throughput, and can you share your consumer c
_____________
Hi CJ,
I recently ran into a Kafka message-size-related issue and did some digging
around to understand the system. I will describe those details briefly and
hope they help you.
Each consumer connector has fetcher threads and fetcher manager threads
associated with it. The Fetcher thread talks t
_____________
Hi Kafka team,
We have a use case where we need to consume from ~20 topics (each with 24
partitions). We have a potential max message size of 20MB, so we've set our
consumer fetch.size to 20MB, but that's causing very poor performance on our
consumer (most of our messages are in the 10-100k range).
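The numbers in the question hint at why a large fetch.size can hurt: assuming each partition's fetch buffer can grow to roughly fetch.size (an assumption about the old high-level consumer's internals; buffered chunk settings can multiply this further), a rough worst-case memory bound works out as:

```python
# Back-of-envelope worst-case consumer buffer memory, assuming each
# partition's fetch buffer can grow to fetch.size. Figures are taken
# from the question above.
topics = 20
partitions_per_topic = 24
fetch_size = 20 * 1024 * 1024  # 20 MB

worst_case = topics * partitions_per_topic * fetch_size
print(worst_case / 2**30)  # ~9.4 GiB of buffer space
```

With typical messages in the 10-100k range, a 20MB fetch.size reserves far more buffer space per partition than the common case needs, which is consistent with the poor performance described.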