I ran into this problem as well, Prashant. The default partitioning
behavior was recently changed:
https://github.com/apache/kafka/commit/b71e6dc352770f22daec0c9a3682138666f032be
It no longer assigns a random partition to each message with a null
partition key; messages stick to one partition between metadata refreshes.
I had to change my code to generate random partition keys.
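For anyone hitting the same thing, the workaround looked roughly like this
(a minimal sketch against the 0.8 producer API; the broker address, topic,
and payload are placeholders, not from the original thread):

import java.util.Properties;
import java.util.Random;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class RandomKeyProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092"); // placeholder broker
        props.put("serializer.class", "kafka.serializer.StringEncoder");

        Producer<String, String> producer =
            new Producer<String, String>(new ProducerConfig(props));
        Random rand = new Random();

        // With the new default, a null key sticks to one partition between
        // metadata refreshes. A random key makes the default partitioner
        // hash each message to an effectively random partition again.
        String key = Integer.toString(rand.nextInt());
        producer.send(new KeyedMessage<String, String>("mytopic", key, "some payload"));
        producer.close();
    }
}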
So I've run into a problem where occasionally, some partitions within a
topic end up in a "none" owner state for a long time.
I'm using the high-level consumer on several machines; each consumer has 4
threads.
Normally when I run the ConsumerOffsetChecker, all partitions have owners
and similar lag.
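For context, the consumer setup is basically the standard high-level
pattern, something like this (a sketch; the ZK address, group, and topic
are placeholders, and our actual processing is omitted):

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class FourThreadConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181"); // placeholder ZK
        props.put("group.id", "mygroup");           // placeholder group

        ConsumerConnector connector =
            kafka.consumer.Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // Ask for 4 streams and hand each one to its own thread.
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put("mytopic", 4);
        List<KafkaStream<byte[], byte[]>> streams =
            connector.createMessageStreams(topicCountMap).get("mytopic");

        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (final KafkaStream<byte[], byte[]> stream : streams) {
            pool.submit(new Runnable() {
                public void run() {
                    for (MessageAndMetadata<byte[], byte[]> msg : stream) {
                        // process msg.message() here
                    }
                }
            });
        }
    }
}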
On Mon, Nov 18, 2013 at 2:37 PM, Guozhang Wang wrote:
> Hello Drew,
>
> Do you see any rebalance failure exceptions in the consumer log?
>
> Guozhang
>
>
> On Mon, Nov 18, 2013 at 2:14 PM, Drew Goya wrote:
>
> > So I've run into a problem where occasionally, some partitions within a
> > topic end up in a "none" owner state for a long time.
>
Also of note, this is all running inside a Storm topology; when I kill
a JVM, another is started very quickly.
Could this be a problem with a consumer leaving and rejoining within a
small window?
On Mon, Nov 18, 2013 at 2:52 PM, Drew Goya wrote:
> Hey Guozhang, I just forced the error
I'll follow up here if I can replicate the error with
clean ZK data.
On Mon, Nov 18, 2013 at 3:10 PM, Guozhang Wang wrote:
> Could you find any entries in the log with the keyword "conflict"? If so,
> could you paste them here?
>
> Guozhang
>
>
> On Mon, Nov 18, 2013 at 2:52 PM, Drew Goya wrote:
I will have to give that a try as well. I have been having a really tough
time with the tool in 0.8.0. It fails frequently, and I have to roll
restarts on my brokers to get partial changes to stick.
It should come with a warning! =)
On Fri, Dec 13, 2013 at 2:03 PM, Neha Narkhede wrote:
> It is H
I've been running into an issue with the 0.8.2.1 new producer for a few
weeks now and I haven't been able to figure it out. Hopefully someone on
the list can help!
First off my producer config looks like this:
props.put(ProducerConfig.ACKS_CONFIG, "1")
props.put(ProducerConfig.RETRIES_CONFIG, ...
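For anyone who wants to reproduce the setup, a minimal self-contained
version looks roughly like this (a sketch of an 0.8.2 new-producer setup;
the retries value, serializers, broker address, and topic are my stand-ins,
not necessarily the original config):

import java.util.Properties;

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class NewProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
        props.put(ProducerConfig.ACKS_CONFIG, "1");
        props.put(ProducerConfig.RETRIES_CONFIG, "3"); // stand-in value
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
        producer.send(new ProducerRecord<String, String>("mytopic", "key", "value"),
            new Callback() {
                public void onCompletion(RecordMetadata metadata, Exception e) {
                    // NETWORK_EXCEPTION and friends surface here once retries run out
                    if (e != null) e.printStackTrace();
                }
            });
        producer.close();
    }
}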
NETWORK_EXCEPTION
> >
> > What did you see after? Especially once the network issue was resolved?
> > more retries? was there any successful sends?
> > Producers blocking for a while is expected, but once the issue is
> resolved
> > we expect the retries to succeed.
So I'm going to be going through the process of upgrading a cluster from
0.8.0 to the trunk (0.8.1).
I'm going to be expanding this cluster several times, and the problems with
reassigning partitions in 0.8.0 mean I have to move to trunk (0.8.1) asap.
Will it be safe to roll upgrades through the cluster one node at a time?
Hey all,
I've recently been having problems with consumer groups rebalancing. I'm
using several high level consumers which all belong to the same group.
Occasionally one or two of them will get stuck in a rebalance loop: they
attempt to rebalance, but the partitions they try to claim are already owned.
> https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whyaretheremanyrebalancesinmyconsumerlog
> ?
>
> Thanks,
>
> Jun
>
>
> On Tue, Dec 17, 2013 at 9:24 AM, Drew Goya wrote:
>
> > Hey all,
> >
> > I've recently been having problems with consumer groups rebalancing.
rm -r
On Tue, Dec 17, 2013 at 10:42 AM, Neha Narkhede wrote:
> There are no compatibility issues. You can roll upgrades through the
> cluster one node at a time.
>
> Thanks
> Neha
>
>
> On Tue, Dec 17, 2013 at 9:15 AM, Drew Goya wrote:
>
> > So I'm going to be going through the process of upgrading a cluster from
> > 0.8.0 to the trunk (0.8.1).
rod-storm-sup-trk007 doing at the same time?
> It's the one that caused the conflict in ZK.
>
> Thanks,
>
> Jun
>
>
> On Tue, Dec 17, 2013 at 9:19 PM, Drew Goya wrote:
>
> > I explored that possibility but I'm not seeing any ZK session expirations
> >
We migrated from 0.8.0 to 0.8.1 last week. We have a 15 broker cluster so
it took a while to roll through them one by one. Once I finished I was
finally able to complete a partition reassignment. I also had to do some
manual cleanup, but Neha says it will be fixed soon:
https://issues.apache.org/jira/browse/KAFKA-1074
> conflict with consumer 006. Consumer 007 should
> have another ZK watcher fired to trigger another rebalance when it
> sees consumer 006. Which version of ZK are you using?
>
> Thanks,
>
> Jun
>
>
> On Wed, Dec 18, 2013 at 9:38 AM, Drew Goya wrote:
>
>
> Jun
>
>
> On Thu, Dec 19, 2013 at 9:36 PM, Drew Goya wrote:
>
> > We migrated from 0.8.0 to 0.8.1 last week. We have a 15 broker cluster
> so
> > it took a while to roll through them one by one. Once I finished I was
> > finally able to complete a partition reassignment.
Neha Narkhede wrote:
> Hi Drew,
>
> That problem will be fixed by
> https://issues.apache.org/jira/browse/KAFKA-1074. I think we are close to
> checking that in to trunk.
>
> Thanks,
> Neha
>
>
> On Wed, Dec 18, 2013 at 9:02 AM, Drew Goya wrote:
>
> > Thanks Neha, I rolled
This is the commit where it changed:
https://github.com/apache/kafka/commit/51de7c55d2b3107b79953f401fc8c9530bd0eea0
On Mon, Dec 23, 2013 at 10:09 AM, Neha Narkhede wrote:
> Are you hard killing the brokers? And is this issue reproducible?
>
>
> On Sat, Dec 21, 2013 at 11:39 AM, Drew Goya wrote:
Guozhang Wang;
reviewed by Joel Koshy and Jun Rao
On Mon, Dec 23, 2013 at 9:59 AM, Jun Rao wrote:
> Did you hard kill the broker? If so, do you have the fix for KAFKA-1112?
>
> Thanks,
>
> Jun
>
>
> On Fri, Dec 20, 2013 at 4:05 PM, Drew Goya wrote:
>
> > This is the commit where it changed:
ions.
>
> Thanks,
>
> Jun
>
>
> On Thu, Dec 19, 2013 at 9:41 PM, Drew Goya wrote:
>
> > Our cluster is currently running 3.4.4.
> >
> > I see Kafka is currently using the 3.3.4 client, is there a potential
> > conflict there?
> >
> >
>
Guozhang Wang wrote:
> Hi Drew,
>
> I tried the kafka-server-stop script and it worked for me. Wondering which
> OS are you using?
>
> Guozhang
>
>
> On Mon, Dec 23, 2013 at 10:57 AM, Drew Goya wrote:
>
> > Occasionally I do have to hard kill brokers; the kafka-server-stop script
> > doesn't always work for me.
On Mon, Dec 23, 2013 at 2:50 PM, Drew Goya wrote:
> We are running on an Amazon Linux AMI, this is our specific version:
>
> Linux version 2.6.32-220.23.1.el6.centos.plus.x86_64 (
> mockbu...@c6b5.bsys.dev.centos.org) (gcc version 4.4.6 20110731 (Red Hat
> 4.4.6-3) (GCC) ) #1 SMP Tue Jun 19
the latest
> trunk?
>
> Thanks,
>
> Jun
>
>
> On Mon, Dec 23, 2013 at 3:21 PM, Drew Goya wrote:
>
> > Hey All, another thing to report for my 0.8.1 migration. I am seeing these
> > errors occasionally right after I run a leader election. This looks
incident. But I'd be
> interested to know whether we can confirm that there are known problems with
> this!
>
> Jason
>
>
> On Mon, Dec 23, 2013 at 2:04 PM, Drew Goya wrote:
>
> > Thanks, I migrated our ZK cluster over to 3.3 this weekend. Hopefully
> that
> Does host062 see the same set of partitions and consumers as host061
> does?
>
> Thanks,
>
> Jun
>
>
> On Sat, Feb 1, 2014 at 5:53 PM, Drew Goya wrote:
>
> > Hey all, this issue has recently popped up again. I've got a member of a
> > consumer group stuck in a rebalance loop. It attempts to rebalance, but
> > the partitions it tries to claim are already owned.
view of the topology though:
Rebalance output for host 61: http://pastebin.com/MP2nfExR
Rebalance output for host 62: http://pastebin.com/0jhmM4L2
Could there be a stale Zookeeper connection holding on to an ephemeral node?
On Mon, Feb 3, 2014 at 10:49 AM, Drew Goya wrote:
> That is anot
Hey all, do you guys have any plans to enhance the topic reassignment tool?
I've had to grow my cluster a couple of times, and getting an existing topic's
partition replicas balanced out to the new brokers really sucks. I have to
describe the topic, awk the output to get it into the JSON format, then
manually edit it.
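For what it's worth, the JSON the reassignment tool expects is simple
enough to generate programmatically instead of awk-ing describe output.
A sketch (the topic name and broker ids are invented for illustration):

import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ReassignmentJson {
    public static void main(String[] args) {
        String topic = "mytopic"; // placeholder topic
        // partition -> desired replica list (broker ids are examples)
        Map<Integer, List<Integer>> assignment = new LinkedHashMap<Integer, List<Integer>>();
        assignment.put(0, Arrays.asList(1, 2, 3));
        assignment.put(1, Arrays.asList(2, 3, 4));

        StringBuilder json = new StringBuilder("{\"version\":1,\"partitions\":[");
        boolean first = true;
        for (Map.Entry<Integer, List<Integer>> e : assignment.entrySet()) {
            if (!first) json.append(",");
            first = false;
            json.append("{\"topic\":\"").append(topic)
                .append("\",\"partition\":").append(e.getKey())
                .append(",\"replicas\":").append(e.getValue())
                .append("}");
        }
        json.append("]}");
        // Feed this to kafka-reassign-partitions.sh via --reassignment-json-file
        System.out.println(json);
    }
}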
Just tried my first topic delete today and it looks like something went
wrong on the controller. I issued the command on a test topic and shortly
after that a describe looked like:
Topic:TimeoutQueueTest PartitionCount:256 ReplicationFactor:3 Configs:
Topic: TimeoutQueueTest Partition: 0 Leader:
This just hit me this morning as well. Any news on 0.8.1.1? My ops guy is
going to kill me; we just rolled off my older build of 0.8.1 onto the
official release.
On Thu, Apr 3, 2014 at 11:55 PM, Krzysztof Ociepa <
ociepa.krzysz...@gmail.com> wrote:
> Hi Guozhang,
> Hi Neha,
>
> Thanks a lot for y
I ran into this problem while restarting brokers on 0.8.1. It
was usually a sign that you had run into KAFKA-1311.
I've since rolled upgrades to the latest on the 0.8.1 branch (0.8.1.1), and
my problems have gone away.
On Thu, Apr 24, 2014 at 6:07 PM, Guozhang Wang wrote:
> Hi Sadhan,
A few things I've learned:
1) Don't break things up into separate topics unless the data in them is
truly independent. Consumer behavior can be extremely variable; don't
assume you will always be consuming as fast as you are producing.
2) Keep time-related messages in the same partition (see the sketch after
this list). Again
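To illustrate point 2: keying each message by the entity it belongs to
makes all of that entity's events hash to the same partition, so they stay
in time order relative to each other. A sketch against the new producer API
(the topic and parameter names are placeholders):

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EntityKeyedSend {
    // Reuses an already-configured producer (see the config sketch earlier).
    static void sendEvent(KafkaProducer<String, String> producer,
                          String deviceId, String payload) {
        // Same key -> same partition, so one device's events stay in
        // time order relative to each other.
        producer.send(new ProducerRecord<String, String>("events", deviceId, payload));
    }
}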