Several features in ZooKeeper depend on server time. I would highly recommend that you properly set up ntpd (or whatever time-sync daemon you prefer) on all the machines, then try to reproduce.
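Something along these lines is usually enough to confirm whether the clocks really agree. This is just a rough sketch: the host names are placeholders, and whether ntpq or timedatectl is available depends on your distro.

    # Compare UTC wall-clock time across the brokers and the ZooKeeper host
    # (broker1..broker3 and zk1 are placeholder host names)
    for h in broker1 broker2 broker3 zk1; do
        echo -n "$h: "; ssh "$h" date -u
    done

    # On a host running ntpd, peer offsets should be milliseconds, not 30+ seconds
    ntpq -p

    # On systemd-based systems, shows whether any time-sync service is active at all
    timedatectl status

If the offsets are anywhere near the 30+ seconds you mentioned, fix that first and then re-run your test.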
-Jon

On Jan 2, 2015, at 2:35 PM, Birla, Lokesh <lokesh.bi...@verizon.com> wrote:

> We don't see ZooKeeper session expiration. However, I noticed that our servers'
> system time is NOT synced; server1 and server2 were off by 30+ seconds. Do
> you think that could cause the leadership changes or any other issue?
>
> On 12/31/14, 4:03 PM, "Jun Rao" <j...@confluent.io> wrote:
>
>> A typical cause of frequent leadership changes is a GC-induced soft failure.
>> Do you see ZK session expirations on the broker? If so, you may want to
>> enable the GC log to see the GC times.
>>
>> Thanks,
>>
>> Jun
>>
>> On Tue, Dec 23, 2014 at 2:06 PM, Birla, Lokesh <lokesh.bi...@verizon.com>
>> wrote:
>>
>>> I was already using a 4 GB heap. I even increased it to an 8 GB heap
>>> and could still see the leadership changing very often. In my 5-minute run,
>>> the leaders changed from 1,2,3 to 3,3,3 to 1,1,1.
>>> Also, my message rate is just 7K and the total message count is only
>>> 2,169,001.
>>>
>>> Does anyone have a clue about the leadership changes?
>>>
>>> -Lokesh
>>>
>>>
>>> From: Thunder Stumpges <tstump...@ntent.com>
>>> Date: Monday, December 22, 2014 at 6:31 PM
>>> To: "users@kafka.apache.org" <users@kafka.apache.org>
>>> Cc: "Birla, Lokesh" <lokesh.bi...@one.verizon.com>
>>> Subject: RE: Kafka 0.8.1.1 leadership changes are happening very often
>>>
>>> Did you check the GC logs on the server? We ran into this, and the default
>>> setting of a 1 GB max heap on the broker process was nowhere near enough. We
>>> currently have it set to 4 GB.
>>> -T
>>>
>>> -----Original Message-----
>>> From: Birla, Lokesh [lokesh.bi...@verizon.com]
>>> Received: Monday, 22 Dec 2014, 5:27 PM
>>> To: users@kafka.apache.org
>>> CC: Birla, Lokesh [lokesh.bi...@verizon.com]
>>> Subject: Kafka 0.8.1.1 leadership changes are happening very often
>>>
>>> Hello,
>>>
>>> I am running 3 brokers, one ZooKeeper node, and a producer, all on separate
>>> machines. I am also sending a very low load, around 6K msg/sec. Each message
>>> is only around 150 bytes.
>>> I ran the load for only 5 minutes, and during this time I saw the leadership
>>> change very often.
>>>
>>> I created 3 partitions.
>>>
>>> Here the leadership for each partition changed. All 3 brokers are running
>>> perfectly fine; no broker is down. Could someone let me know why the Kafka
>>> leadership changed so often?
>>>
>>> Initially:
>>>
>>> Topic: mmetopic1  PartitionCount: 3  ReplicationFactor: 3  Configs:
>>>   Topic: mmetopic1  Partition: 0  Leader: 2  Replicas: 2,3,1  Isr: 2,3,1
>>>   Topic: mmetopic1  Partition: 1  Leader: 3  Replicas: 3,1,2  Isr: 3,1,2
>>>   Topic: mmetopic1  Partition: 2  Leader: 1  Replicas: 1,2,3  Isr: 1,2,3
>>>
>>> Changed to:
>>>
>>> Topic: mmetopic1  PartitionCount: 3  ReplicationFactor: 3  Configs:
>>>   Topic: mmetopic1  Partition: 0  Leader: 3  Replicas: 2,3,1  Isr: 3,1,2
>>>   Topic: mmetopic1  Partition: 1  Leader: 3  Replicas: 3,1,2  Isr: 3,1,2
>>>   Topic: mmetopic1  Partition: 2  Leader: 1  Replicas: 1,2,3  Isr: 1,3,2
>>>
>>> Changed to:
>>>
>>> Topic: mmetopic1  PartitionCount: 3  ReplicationFactor: 3  Configs:
>>>   Topic: mmetopic1  Partition: 0  Leader: 1  Replicas: 2,3,1  Isr: 1,2,3
>>>   Topic: mmetopic1  Partition: 1  Leader: 1  Replicas: 3,1,2  Isr: 1,2,3
>>>   Topic: mmetopic1  Partition: 2  Leader: 2  Replicas: 1,2,3  Isr: 2,1,3
>>>
>>> Changed to:
>>>
>>> Topic: mmetopic1  PartitionCount: 3  ReplicationFactor: 3  Configs:
>>>   Topic: mmetopic1  Partition: 0  Leader: 3  Replicas: 2,3,1  Isr: 3,1,2
>>>   Topic: mmetopic1  Partition: 1  Leader: 3  Replicas: 3,1,2  Isr: 3,1,2
>>>   Topic: mmetopic1  Partition: 2  Leader: 1  Replicas: 1,2,3  Isr: 1,3,2
>>>
>>> Thanks,
>>> Lokesh
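Regarding Jun's point earlier in the thread about GC-induced soft failures: below is a rough sketch of how one might bump the broker heap and turn on GC logging before starting the broker. KAFKA_HEAP_OPTS and KAFKA_OPTS are the environment variables the stock Kafka start scripts read, but check your version's kafka-run-class.sh; the log path is a placeholder, and the flags are the JDK 7/8-style HotSpot ones.

    # Give the broker JVM more heap (kafka-server-start.sh passes this through);
    # 4g matches what Thunder mentioned above
    export KAFKA_HEAP_OPTS="-Xms4g -Xmx4g"

    # JDK 7/8-style HotSpot GC logging; long pauses here tend to line up with
    # ZooKeeper session expirations and hence leader elections
    # (/var/log/kafka is a placeholder path)
    export KAFKA_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/var/log/kafka/kafka-gc.log"

    bin/kafka-server-start.sh config/server.properties

If the pauses turn out to be only slightly longer than the ZooKeeper session timeout, raising zookeeper.session.timeout.ms in server.properties is another lever, though shortening the pauses is usually the better fix.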