> If partKey is not specified, then key will be used as partKey by default.
>
> So for your case, if you do
>
> new KeyedMessage[String, String](topic, null /*key*/, partkey, value)
>
> then the partkey will be used to determine the partition but not written to
> broker.
>
> Guozhang
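For anyone following along, here is a minimal sketch of what Guozhang
describes, using the 0.8 Scala producer api. The broker address, topic name
and values are assumptions for illustration, not from the thread:

    import java.util.Properties
    import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

    val props = new Properties()
    props.put("metadata.broker.list", "localhost:9092") // assumed broker address
    props.put("serializer.class", "kafka.serializer.StringEncoder")

    val producer = new Producer[String, String](new ProducerConfig(props))

    // key is null, so no key bytes are written to the broker; partkey is
    // still handed to the partitioner to choose a partition.
    producer.send(new KeyedMessage[String, String]("mytopic", null, "user-42", "hello"))

    producer.close()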
>
> On Wed
> new KeyedMessage[String, String](topic, null /*key*/, partkey, value);
>
> Then the key in the KeyedMessage will be null.
>
> Hope this helps!
>
> Thanks,
> Liquan
>
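For completeness, the three KeyedMessage constructor shapes in the 0.8 Scala
producer api, as I understand them (worth double-checking against your
version):

    new KeyedMessage[String, String](topic, value)                // key = null, partKey = null
    new KeyedMessage[String, String](topic, key, value)           // partKey defaults to key
    new KeyedMessage[String, String](topic, key, partkey, value)  // key and partKey independent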
> On Tue, Jun 23, 2015 at 8:18 AM, Mohit Kathuria
> wrote:
>
> > Hi,
> >
> > We are using kafka 0.8.1.1 in our production cluster. I recently started
> > sending keyed messages. Just out of anxiety, I want to know whether we can
> > turn off writing the key to the broker. Is there any configuration I can
> > change to achieve this?
> >
> > -Thanks,
> > Mohit Kathuria
> > > Has anyone else noticed this? If so, are there any patches people have
> > > written? Once we have a clearer picture of solutions, we'll send a few
> > > patches in JIRAs.
> > >
> > > Best,
> > > Mike
> > >
> > > 1:
Any suggestions on what might be going on here? We are very much blind here
and our application is getting affected due to this.
-Mohit
On Tue, Dec 9, 2014 at 8:41 PM, Mohit Kathuria
wrote:
>
> Neha,
>
> The same issue recurred with just 2 consumer processes. The exception was
> the same.
> If this is the case, you can
> run the wchp zookeeper command on the zk leader and check if each consumer
> has a watch registered.
>
> Do you have a way to try this on zk 3.3.4? I would recommend you try the
> wchp suggestion as well.
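For example, assuming the ZooKeeper four-letter-word interface is reachable
on the default client port, something like

    echo wchp | nc <zk-leader-host> 2181

should list each watched znode path along with the sessions watching it.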
>
> On Fri, Nov 7, 2014 at 6:07 AM, Mohit Kathuria wrote:
Hi all,
Can someone help here? We are getting constant rebalance failures each time
a consumer is added beyond a certain number. We did quite a lot of debugging
on this and are still not able to figure out the pattern.
-Thanks,
Mohit
On Mon, Nov 3, 2014 at 10:53 PM, Mohit Kathuria
wrote:
>
> Thanks,
> Neha
>
> On Mon, Oct 20, 2014 at 6:15 AM, Mohit Kathuria
> wrote:
>
> > Dear Experts,
> >
> > We recently updated to kafka v0.8.1.1 with zookeeper v3.4.5. I have one
> > topic with 30 partitions and 2 replicas. We are using High level consumer
> > api.
Neha,
Looks like an issue with the consumer rebalance not being able to complete
successfully. We were able to reproduce the issue on a topic with 30
partitions, 3 consumer processes (p1, p2 and p3), and the properties
rebalance.max.retries=40 and rebalance.backoff.ms=10000 (10s).
Before the process p3 was started...
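A minimal sketch of the high-level consumer configuration being described
(the ZooKeeper address and group id are assumptions, not from the thread):

    import java.util.Properties
    import kafka.consumer.{Consumer, ConsumerConfig}

    val props = new Properties()
    props.put("zookeeper.connect", "zkhost:2181") // assumed ZK address
    props.put("group.id", "storm-consumers")      // assumed group id
    props.put("rebalance.max.retries", "40")
    props.put("rebalance.backoff.ms", "10000")    // 10s

    val connector = Consumer.create(new ConsumerConfig(props))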
Dear Experts,
We recently updated to kafka v0.8.1.1 with zookeeper v3.4.5. I have one
topic with 30 partitions and 2 replicas. We are using High level consumer
api.
Each consumer process, which is a storm topology, has 5 streams which
connect to 1 or more partitions. We are not using storm's inbuilt kafka
spout.
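Roughly how the 5 streams per process would be created with the high level
consumer api (topic name assumed; connector as in the sketch above):

    // 5 streams for one topic; each stream can end up owning 1 or more partitions.
    val streams = connector.createMessageStreams(Map("mytopic" -> 5))("mytopic")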