We have many production clusters with three topics in the 1-3 MB range and the
rest in the multi-KB to sub-KB range. We do use gzip compression, implemented
at the broker rather than the producer level. The clusters don’t usually break
a sweat. We use MirrorMaker to aggregate these topics to a la
We have a use case where we occasionally want to produce data to Kafka with a
max message size of 2 MB (that is, message size will vary based on user
operations).
Will producing a 2 MB message have any impact, or do we need to split each
message into smaller chunks, such as 100 KB, and produce those?
If we produce into small c
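[The thread doesn't include the poster's code, but a minimal sketch of what a rare 2 MB record requires on the producer side; the bootstrap address, topic name, and 3 MB ceiling are assumptions. The key constraint: max.request.size (producer), message.max.bytes (broker), and max.message.bytes (topic) all default to roughly 1 MB, and any one of them will reject a 2 MB record.]

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class LargeMessageProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: adjust to your cluster
        props.put("key.serializer", ByteArraySerializer.class.getName());
        props.put("value.serializer", ByteArraySerializer.class.getName());
        // max.request.size defaults to 1048576 bytes, so a 2 MB record is
        // rejected client-side unless this is raised; 3 MB leaves headroom.
        props.put("max.request.size", Integer.toString(3 * 1024 * 1024));
        // Broker and topic must agree: message.max.bytes (broker) and
        // max.message.bytes (topic) also default to about 1 MB.

        byte[] payload = new byte[2 * 1024 * 1024]; // stand-in for a rare 2 MB message
        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("example-topic", payload)); // hypothetical topic
            producer.flush();
        }
    }
}

[Whether 100 KB chunking is worth it depends on whether consumers can reassemble the chunks; a single 2 MB record avoids that complexity at the cost of larger fetch and replication buffers.]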
Hi,
The above KIP broke our graphs when we moved from 1.1 to 2.1. I can see
that this has been mentioned in the Release Notes. We were using the Java
client to aggregate metrics via MBeans, but the same code no longer works
even after we provide the version string as mentioned here:
https://cwiki.apa
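[The broken aggregation code isn't shown in the thread, but when a KIP moves metric attributes into MBean tags, the usual fix is to query by wildcard pattern instead of a hard-coded name. A hedged sketch using the standard javax.management API; the object name and attribute below are assumptions modeled on the consumer fetch-manager metrics, so adapt them to whatever names your 2.1 deployment actually registers.]

import java.lang.management.ManagementFactory;
import java.util.Set;

import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ConsumerLagScraper {
    public static void main(String[] args) throws Exception {
        // In-process MBean server; for a remote JVM you would attach a
        // JMXConnector to its JMX port instead.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // Wildcard over the tagged MBean names rather than one fixed name,
        // so renamed client-id (or topic/partition) tags still match.
        ObjectName pattern = new ObjectName(
            "kafka.consumer:type=consumer-fetch-manager-metrics,client-id=*");

        Set<ObjectName> names = server.queryNames(pattern, null);
        for (ObjectName name : names) {
            Object lag = server.getAttribute(name, "records-lag-max");
            System.out.println(name + " records-lag-max=" + lag);
        }
    }
}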
Hi, Franz!
I guess one of the reasons could be additional safety in case of a network
split. There is also some probability of bugs, even in good software. So, if
we place MM on the source cluster and the network splits, consumers could
(theoretically) continue to read messages from the source cluster and
Hi Ankur
On 3/13/19 3:34 AM, Ankur Rana wrote:
> Hey,
> I think this is a known issue in Kafka 2.1.0. Check this out
> https://issues.apache.org/jira/browse/KAFKA-7697
> It has been fixed in 2.1.1.
This surely does look like our issue! I should have found that myself.
Thanks, we'll roll out 2.1.1.