Bhavesh,
I'd rephrase that a little bit. The new producer absolutely does allow the
user to use any partitioning strategy. However, the mirror maker currently
does not expose that functionality and uses only hash-based partitioning.
It would be helpful to understand the specific use case for allowing
custom partitioning in the mirror maker.
With the new producer it will still do hash-based partitioning on the
keys if the messages have keys. However, it is a bit harder to customize
the partitioning logic, as the new producer does not expose the
partitioner any more.
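A minimal sketch of the workaround (a toy stub, NOT the real Kafka client API): even without a pluggable partitioner, an application can implement custom partitioning by choosing the partition itself and passing it explicitly on each send. The `send` and `pick_partition` names here are illustrative only.

```python
# Toy stub, NOT the real Kafka client API: custom partitioning via an
# explicitly chosen partition, since the new producer does not expose
# a pluggable partitioner.

def pick_partition(key: str, num_partitions: int) -> int:
    # application-controlled logic; a toy byte-sum hash stands in here
    return sum(key.encode("utf-8")) % num_partitions

def send(topic, key, value, partition=None, num_partitions=4):
    # stand-in for a producer send call: when no partition is given,
    # fall back to key-hash partitioning (the default behaviour)
    if partition is None:
        partition = pick_partition(key, num_partitions)
    return (topic, partition, key, value)

# an explicit partition overrides the hash-based default
print(send("events", "a", b"payload", partition=3))
```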
Guozhang
On Mon, Aug 11, 2014 at 11:12 PM, Bhavesh Mistry wrote:
Hi Neha and Guozhang,
As long as stickiness to a particular partition in the target DC is
maintained consistently, that is great, since we can then do per-DC and
cross-DC aggregation.
How about non-hash-based, e.g. range-based, partitioning? E.g., keys
starting with "a" are sent to partitions 1 to 10, …
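A quick sketch of that range-based idea (a hypothetical helper, not an existing Kafka or MM feature): route by the key's first letter into a fixed block of partitions, with a toy hash only to spread load inside each block.

```python
# Hypothetical range-based partitioner (NOT an existing Kafka/MM
# feature): keys starting with "a" map to partitions 1..10, "b" to
# 11..20, and so on; a toy byte-sum hash spreads keys within a block.

def range_partition(key: str, partitions_per_range: int = 10) -> int:
    first = key[0].lower()
    if not "a" <= first <= "z":
        raise ValueError("expected a key starting with a letter")
    base = (ord(first) - ord("a")) * partitions_per_range + 1
    offset = sum(key.encode("utf-8")) % partitions_per_range
    return base + offset

print(range_partition("apple"))   # somewhere in 1..10
print(range_partition("banana"))  # somewhere in 11..20
```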
Bhavesh,
As Neha said, with more partitions on the destination brokers, events that
belong to the same partition in the source cluster may be distributed
to different partitions in the destination cluster.
Guozhang
On Mon, Aug 11, 2014 at 9:35 PM, Neha Narkhede
wrote:
Bhavesh,
For keyed data, the mirror maker will just distribute data based on
hash(key)%num_partitions. If num_partitions is different in the target DC
(which it is), a message that lived in partition 0 in the source cluster
might end up in partition 10 in the target cluster.
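A quick sketch of that effect, using a toy byte-sum hash as a stand-in (Kafka's real hash function differs, but the modulo effect is identical):

```python
# Toy illustration of the effect above: with hash(key) % num_partitions
# and different partition counts per cluster, the same key lands in
# different partitions. Byte-sum hash is a stand-in for Kafka's real one.

def partition(key: str, num_partitions: int) -> int:
    return sum(key.encode("utf-8")) % num_partitions

key = "user-42"
print(partition(key, 32))   # source cluster with 32 partitions -> 18
print(partition(key, 100))  # target cluster with 100 partitions -> 94
```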
Thanks,
Neha
Hi Guozhang,
We are using Kafka 0.8.1 for all producers, consumers, and MM.
We have 32 partitions in the source (local) cluster per DC and 100 in the
target (central) DC.
Is there any configuration on MM for this?
Thanks,
Bhavesh
On Mon, Aug 11, 2014 at 4:33 PM, Guozhang Wang wrote:
Hi Bhavesh,
What is the number of partitions on the source and target clusters, and
what version of Kafka MM are you using?
Guozhang
On Mon, Aug 11, 2014 at 1:21 PM, Bhavesh Mistry
wrote:
Hi Kafka Dev Team,
We have to aggregate events (count) per DC and across DCs for one of our
topics.
We have the standard LinkedIn data pipeline: producers --> Local Brokers -->
MM --> Central Brokers.
So I would like to know how MM handles messages when custom partitioning
logic is used as below, and …