Any ideas?
On Mon, Jul 20, 2015 at 2:34 PM, Pranay Agarwal wrote:
Hi all,
Is there any way I can force Zookeeper/Kafka to rebalance new consumers for
only a subset of the total partitions? I have a situation where, out of 120
partitions, 60 have already been consumed, but Zookeeper also assigns these
empty/inactive partitions during the re-balancing,
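The scenario above (60 of 120 partitions drained, but the group rebalance still spreads all 120 across consumers) is usually worked around by skipping the group rebalance and assigning partitions to consumers by hand. A minimal sketch of that selection logic, assuming partitions 0-59 are the drained ones — the numbers and the round-robin policy here are illustrative, not anything Kafka does for you:

```python
def active_partitions(total, drained):
    """Partition ids that should still be handed to consumers."""
    return [p for p in range(total) if p not in drained]

def assign_round_robin(partitions, n_consumers):
    """Spread the remaining partitions over consumers, round-robin."""
    plan = {c: [] for c in range(n_consumers)}
    for i, p in enumerate(partitions):
        plan[i % n_consumers].append(p)
    return plan

drained = set(range(60))                 # assumed: partitions 0-59 already consumed
live = active_partitions(120, drained)   # 60 partitions left to read
plan = assign_round_robin(live, 4)
print(len(live), plan[0][:3])            # 60 [60, 64, 68]
```

With a client that supports manual assignment (an assign()-style API rather than only group subscription), each consumer would then fetch only from its own slice instead of joining the rebalancing group.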
s, you are limited
> to 8 per server (probably fewer, because there is other stuff on the
> server).
>
> Gwen
>
> On Mon, Jan 19, 2015 at 3:06 PM, Pranay Agarwal wrote:
> > Thanks a lot Natty.
> >
> > I am using this Ruby gem on the client side with all the
> Jonathan "Natty" Natkins
> StreamSets | Customer Engagement Engineer
> mobile: 609.577.1600 | linkedin <http://www.linkedin.com/in/nattyice>
>
>
> On Mon, Jan 19, 2015 at 2:34 PM, Pranay Agarwal wrote:
>
> > Thanks Natty.
> >
> > Is there
on your brokers or to decrease your max fetch size.
>
> Thanks,
> Natty
>
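The advice quoted above — raise the limit on the brokers or decrease the max fetch size — maps onto a pair of settings in the 0.8.x-era Kafka configs. The values below are purely illustrative, not recommendations for this cluster:

```
# Broker (server.properties): largest message the broker will accept
message.max.bytes=2097152

# 0.8.x high-level consumer: per-partition fetch buffer; it must be at
# least as large as the biggest message on the broker, or oversized
# messages can never be fetched and consumption stalls
fetch.message.max.bytes=2097152
```

The constraint to keep in mind is that the consumer-side fetch size must be at least the broker-side maximum message size.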
>
> On Mon, Jan 19, 2015, Pranay Agarwal wrote:
Hi All,
I have a Kafka cluster with 2 topics:
topic1 with 10 partitions
topic2 with 1000 partitions.
While I am able to consume messages from topic1 just fine, I get the following
error from topic2. There is a resolved issue on the same thing here:
https://issues.apache.org/jira/browse
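A common reason fetches fail on a 1000-partition topic but not a 10-partition one is that the old high-level consumer buffers roughly one fetch-sized chunk per owned partition, so memory scales with partition count. A back-of-the-envelope sketch — the 1 MiB figure is an assumed per-partition fetch size, not something read from this cluster:

```python
MIB = 1024 * 1024

def fetch_buffer_bytes(partitions, fetch_bytes=1 * MIB):
    """Rough lower bound on consumer fetch-buffer memory:
    one fetch-sized buffer per partition the consumer owns."""
    return partitions * fetch_bytes

print(fetch_buffer_bytes(10) // MIB)     # topic1: ~10 MiB of buffers
print(fetch_buffer_bytes(1000) // MIB)   # topic2: ~1000 MiB of buffers
```

At 1000 partitions the buffers alone approach a gigabyte, which is why the usual fixes are to shrink the fetch size, spread the partitions over more consumers, or use fewer partitions.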