Hello,
I am trying to distribute work across several nodes using Kafka. I have 3
brokers, each with 16 partitions. I have 8 worker servers, each listening
with a single message stream on the same topic. I expect each server to own
about 1/8 of the partitions, yet I am not seeing this. It seems initially,
the [...]

> What's the ZK version? You should use 3.3.4 or above, which
> fixed some bugs that could cause rebalance to fail.
>
> Thanks,
>
> Jun
>
> On Wed, Dec 12, 2012 at 11:17 PM, David Ross wrote:
>
> > Hello,
> >
> > I am trying to distribute work across several
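
For background on what "about 1/8" should work out to: during a rebalance,
the high-level consumer splits the topic's partitions across the streams in
a group range-style. Below is a simplified sketch of that arithmetic (not
the actual Kafka rebalance code, which lives in ZookeeperConsumerConnector),
assuming 3 brokers x 16 partitions and 8 single-stream consumers:

    public class AssignmentSketch {
        public static void main(String[] args) {
            // Simplified range-style split as performed during a consumer
            // rebalance; treat this as a sketch only.
            int totalPartitions = 3 * 16; // 3 brokers x 16 partitions = 48
            int totalStreams = 8 * 1;     // 8 servers, 1 stream each

            int perStream = totalPartitions / totalStreams; // 6
            int extra = totalPartitions % totalStreams;     // 0 => even

            // The first `extra` streams (in sorted consumer-id order) take
            // one extra partition each; since 48 % 8 == 0, every server
            // should own exactly 6 partitions after a successful rebalance.
            System.out.printf("%d per stream, first %d streams get +1%n",
                    perStream, extra);
        }
    }

If ownership looks skewed instead, a failed or partial rebalance (for
example, due to the ZK bugs mentioned above) is the usual suspect.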

Hello,
We have found that, for our application, having the total number of
partitions be a multiple of the number of consumer hosts is beneficial.
Because of this, whenever we add or remove consumer hosts, we have to
change the number of partitions in the server config.
What are best practices for [...]

> In 0.7, the # of partitions grows with brokers. This is going to change
> in 0.8, in which # of partitions is specified at topic creation time and
> won't change as brokers change. One needs to use an admin DDL to change #
> of partitions.
>
> Thanks,
>
> Jun
>
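
For reference: in the released 0.8.x line, that admin DDL surfaced as a
command-line tool. A sketch, assuming the kafka-topics.sh script from later
0.8 releases, a local ZooKeeper, and a made-up topic name "events":

    # Specify the partition count at topic creation time:
    bin/kafka-topics.sh --create --zookeeper localhost:2181 \
        --topic events --partitions 16 --replication-factor 2

    # Change the partition count later with the admin tool:
    bin/kafka-topics.sh --alter --zookeeper localhost:2181 \
        --topic events --partitions 32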

[...]
Sent from my iPhone
>
> On Jan 8, 2013, at 2:24 PM, David Ross wrote:
>
> > Yeah that makes sense, but what if we do need to change the number of
> > partitions? What if we need to reduce it?
> >
> > On Tue, Jan 8, 2013 at 12:42 PM, Jun Rao wrote:
> >
> >> [...]
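
For context: after creation, a topic's partition count can only be
increased, never decreased, so "reducing" it amounts to creating a new
topic with fewer partitions and re-publishing into it. A sketch, again
assuming the later kafka-topics.sh tool and hypothetical topic names:

    # --alter with a smaller partition count is rejected; to shrink,
    # create a new topic and re-publish or copy the data into it:
    bin/kafka-topics.sh --create --zookeeper localhost:2181 \
        --topic events-v2 --partitions 8 --replication-factor 2
    # ...then point producers at events-v2 and drain the old topic
    # into it before retiring "events".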

Hello,
We use Kafka to distribute batches of work across several boxes. These
batches may take anywhere from 1 hour to 24 hours to complete. Currently,
we have N partitions, each allocated to one of N consumer worker boxes.
We find that as the batch nears completion, with only M < N partitions
still holding work, the remaining N - M boxes sit idle. [...]
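
To make the tail effect concrete, a toy sketch with made-up numbers (N = 8
boxes, M = 3 straggling partitions), following the one-partition-per-box
setup described above:

    public class TailSketch {
        public static void main(String[] args) {
            int n = 8; // N worker boxes, one partition each
            int m = 3; // M partitions that still hold work near the end

            // With exactly one partition per box, a partition can only be
            // drained by its owner, so every box whose partition is done
            // goes idle while the M stragglers run (1 to 24 hours each).
            int idle = n - m;
            System.out.printf("%d of %d boxes idle at the tail%n", idle, n);
        }
    }

This is one reason the earlier advice of keeping the partition count a
multiple of the host count, with several partitions per box rather than
exactly one, can help: leftover work near the end of a batch stays spread
across more of the boxes.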