Hi,

I'm trying to understand how Kafka Broker memory is impacted and leads to
more JVM GC when Large messages are sent to Kafka.


*Large messages can cause longer garbage collection (GC) pauses as brokers
allocate large chunks.*
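
To make sure I understand "allocate large chunks" correctly, here is a minimal stand-alone Java sketch (my own illustration, not broker code) of message-sized heap allocations. With G1, any object larger than half a region is a humongous allocation, which is how I imagine large messages could translate into more GC work:

import java.util.ArrayList;
import java.util.List;

// Illustrative only: repeatedly allocate message-sized byte[] buffers on the heap,
// keep a small window of them "in flight", and observe GC behaviour, e.g. with
//   java -Xmx1g -Xlog:gc LargeAllocationDemo
public class LargeAllocationDemo {
    public static void main(String[] args) {
        int messageSize = 8 * 1024 * 1024;       // pretend each message is ~8 MB (assumption)
        List<byte[]> inFlight = new ArrayList<>();
        for (int i = 0; i < 2_000; i++) {
            inFlight.add(new byte[messageSize]); // one large heap allocation per "message"
            if (inFlight.size() > 16) {
                inFlight.remove(0);              // bounded window; older buffers become garbage
            }
        }
        System.out.println("done; compare GC log output for different messageSize values");
    }
}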

Kafka is zero-copy, so messages do not *pass through* the JVM heap, which
implies no use of HeapByteBuffer.
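
For reference, this is the zero-copy mechanism I have in mind (a minimal sketch using FileChannel.transferTo, which I believe is the sendfile-style transfer the broker relies on; the file path and address below are placeholders):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Illustrative only: transfer bytes from a file to a socket without copying
// them into a HeapByteBuffer first.
public class ZeroCopySketch {
    public static void main(String[] args) throws IOException {
        try (FileChannel log = FileChannel.open(Path.of("/tmp/segment.log"),
                                                StandardOpenOption.READ);
             SocketChannel socket = SocketChannel.open(new InetSocketAddress("localhost", 9999))) {
            long position = 0;
            long remaining = log.size();
            while (remaining > 0) {
                long sent = log.transferTo(position, remaining, socket); // no heap copy
                position += sent;
                remaining -= sent;
            }
        }
    }
}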

My reasoning is: if there is not enough virtual memory available to allocate a
buffer, this will trigger JVM GC (even when sufficient heap space is
available). So JVM GC behavior is a function of the amount of memory available
to the Kafka broker (as well as the maximum message size and the number of
partitions).
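
To spell out the worst case I am imagining, here is a rough back-of-the-envelope sketch. The config names are the broker settings I believe are relevant (socket.request.max.bytes, queued.max.requests, replica.fetch.max.bytes), and the values and per-partition interpretation are my assumptions, not measured behavior:

// Back-of-the-envelope sketch of my reasoning; the values below are
// assumptions (defaults as I understand them), not measured broker behaviour.
public class BrokerMemoryEstimate {
    public static void main(String[] args) {
        long socketRequestMaxBytes = 100L * 1024 * 1024; // socket.request.max.bytes
        int  queuedMaxRequests     = 500;                // queued.max.requests
        long replicaFetchMaxBytes  = 1L * 1024 * 1024;   // replica.fetch.max.bytes (per partition, as I understand it)
        int  partitions            = 2000;

        long queuedRequestsWorstCase = socketRequestMaxBytes * queuedMaxRequests;
        long replicaFetchWorstCase   = replicaFetchMaxBytes * partitions;

        System.out.printf("worst-case queued request memory: ~%d GB%n",
                queuedRequestsWorstCase / (1024L * 1024 * 1024));
        System.out.printf("worst-case replica fetch buffers: ~%d MB%n",
                replicaFetchWorstCase / (1024L * 1024));
    }
}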

Is the above reasoning correct, or am I missing something?
Is there any documentation (apart from the code) explaining how buffer
allocation is done in the Kafka broker?

Regards,
Aparna
