[ https://issues.apache.org/jira/browse/KAFKA-7149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948413#comment-16948413 ]
Ashish Surana commented on KAFKA-7149:
--------------------------------------

I think that, along with these encoding changes, we are also compressing the data before sending out the assignments. If the producer is already sending compressed data, then compressing it again while writing to the __offsets topic won't help; it could instead decrease performance if the compression algorithm used at the producer differs from the one used for the __offsets topic. I am not sure it is a good idea to add compression for the __offsets topic here.

> Reduce assignment data size to improve kafka streams scalability
> ----------------------------------------------------------------
>
>                 Key: KAFKA-7149
>                 URL: https://issues.apache.org/jira/browse/KAFKA-7149
>             Project: Kafka
>          Issue Type: Improvement
>          Components: streams
>    Affects Versions: 2.0.0
>            Reporter: Ashish Surana
>            Assignee: Vinoth Chandar
>            Priority: Major
>             Fix For: 2.4.0
>
>
> We observed that with a high number of partitions, instances, or stream-threads, the assignment-data size grows very quickly and we start getting the RecordTooLargeException below at the kafka-broker.
> A workaround for this issue is described in the comments at:
> https://issues.apache.org/jira/browse/KAFKA-6976
> Even so, it limits the scalability of kafka streams, as moving around 100MBs of assignment data on each rebalance also hurts performance & reliability (timeout exceptions start appearing). This also caps kafka streams scale even with a high max.message.bytes setting, because the data size grows quickly with the number of partitions, instances, or stream-threads.
>
> Solution:
> To address this issue in our cluster, we are sending compressed assignment-data. We saw the assignment-data size reduced by 8X-10X. This improved kafka streams scalability drastically for us, and we can now run with more than 8,000 partitions. (A minimal sketch of this approach appears after this message.)

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
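The Solution paragraph above describes compressing the serialized assignment before sending it out. Below is a minimal sketch of that idea, assuming plain GZIP over the serialized assignment bytes; the class and method names are hypothetical illustrations and are not Kafka's actual implementation of this ticket.

{code:java}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Hypothetical helper (not part of Kafka): GZIP-compresses serialized
// assignment bytes before they are sent, and inflates them on receipt.
public class AssignmentCompression {

    // Compress the serialized assignment with GZIP.
    public static byte[] compress(byte[] serializedAssignment) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(buffer)) {
            gzip.write(serializedAssignment);
        }
        return buffer.toByteArray();
    }

    // Inflate a previously compressed assignment back to its original bytes.
    public static byte[] decompress(byte[] compressedAssignment) throws IOException {
        try (GZIPInputStream gzip =
                     new GZIPInputStream(new ByteArrayInputStream(compressedAssignment))) {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            byte[] chunk = new byte[4096];
            int read;
            while ((read = gzip.read(chunk)) != -1) {
                buffer.write(chunk, 0, read);
            }
            return buffer.toByteArray();
        }
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for real assignment data: repetitive topic-partition strings,
        // which is why large assignments tend to compress so well.
        StringBuilder fake = new StringBuilder();
        for (int p = 0; p < 8000; p++) {
            fake.append("topic-").append(p % 50).append('-').append(p).append(';');
        }
        byte[] original = fake.toString().getBytes(StandardCharsets.UTF_8);
        byte[] compressed = compress(original);
        byte[] restored = decompress(compressed);
        System.out.printf("original=%d bytes, compressed=%d bytes, roundtrip ok=%b%n",
                original.length, compressed.length,
                java.util.Arrays.equals(original, restored));
    }
}
{code}

Highly repetitive topic-partition metadata compresses well, which is consistent with the 8X-10X reduction reported in the description; whatever codec is chosen, the receiving side must decompress with the same one, which is the compatibility concern raised in the comment above.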