Re: Help Needed - Message Loss for Consumer groups in 2.3.0 (client 2.2.0)

2019-10-21 Thread M. Manna
… at 01:13, M. Manna wrote: Hello, I have recently had some message loss for a consumer group under Kafka 2.3.0. The client I am using is still on 2.2.0. Here is how the problem can be reproduced: 1) The messages were sent to 4 consumer groups, 3 of them were …

Help Needed - Message Loss for Consumer groups in 2.3.0 (client 2.2.0)

2019-10-20 Thread M. Manna
Hello, I have recently had some message loss for a consumer group under Kafka 2.3.0. The client I am using is still on 2.2.0. Here is how the problem can be reproduced: 1) The messages were sent to 4 consumer groups; 3 of them were live and 1 was down. 2) When the consumer group came back online …
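A common first check for this symptom: if the down group's committed offsets expired while it was offline, a consumer using the default auto.offset.reset=latest will silently skip the backlog when it rejoins. A minimal sketch of the guard, assuming the standard Java client (the group id below is hypothetical):

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ResilientGroupConfig {
    public static KafkaConsumer<String, String> build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "reporting-group"); // hypothetical group id
        // If committed offsets are missing (e.g. they expired while the group
        // was down), restart from the earliest retained record instead of the
        // log end, so the backlog is not silently skipped.
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        return new KafkaConsumer<>(props);
    }
}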

Re: Kafka Streaming message loss

2016-11-21 Thread Michael Noll
… Thanks, Eno. On 18 Nov 2016, at 13:49, Ryan Slade wrote: Hi, I'm trialling Kafka Streaming for a large stream processing job, however I'm seeing message loss even in the simplest scenarios. …

Re: Kafka Streaming message loss

2016-11-21 Thread Michael Noll
… Eno. On 18 Nov 2016, at 13:49, Ryan Slade wrote: Hi, I'm trialling Kafka Streaming for a large stream processing job, however I'm seeing message loss even in the simplest scenarios. I've tried to boil …

Re: Kafka Streaming message loss

2016-11-18 Thread Eno Thereska
… On 18 Nov 2016, at 13:49, Ryan Slade wrote: Hi, I'm trialling Kafka Streaming for a large stream processing job, however I'm seeing message loss even in the simplest scenarios. I've tried to boil it down to the simplest scenario where I see loss, which …

Kafka Streaming message loss

2016-11-18 Thread Ryan Slade
Hi, I'm trialling Kafka Streaming for a large stream processing job, however I'm seeing message loss even in the simplest scenarios. I've tried to boil it down to the simplest scenario where I see loss, which is the following: 1. Ingest messages from an input stream (String, St…
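For orientation, a pass-through topology of the kind described, sketched against the current Streams API rather than the 2016-era KStreamBuilder this thread would have used; the topic names and application id are assumptions:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class PassThroughApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "pass-through"); // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Ingest (String, String) records and forward them unchanged.
        builder.stream("input-topic").to("output-topic"); // hypothetical topics
        new KafkaStreams(builder.build(), props).start();
    }
}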

Re: Message loss with kafka 0.8.2.2

2016-06-17 Thread Tom Crayford
… one, you won't see any error in the producer, because it succeeded in sending it to the broker. You will most likely see some error on the broker, because it is not the leader. On Fri, Jun 17, 2016 at 5:19 AM, Gulia, Vikram wrote: …

Re: Message loss with kafka 0.8.2.2

2016-06-17 Thread Gulia, Vikram
… the leader. On Fri, Jun 17, 2016 at 5:19 AM, Gulia, Vikram wrote: Hi Users, I am facing message loss while using Kafka v0.8.2.2. Please see details below and help me if you can. Issue: 2 messages produced to the same partition one by one – the Kafka …

Re: Message loss with kafka 0.8.2.2

2016-06-16 Thread Gerard Klijs
… because it is not the leader. On Fri, Jun 17, 2016 at 5:19 AM, Gulia, Vikram wrote: Hi Users, I am facing message loss while using Kafka v0.8.2.2. Please see details below and help me if you can. Issue: 2 messages produced to the same partition one by one – the Kafka producer returns …

Message loss with kafka 0.8.2.2

2016-06-16 Thread Gulia, Vikram
Hi Users, I am facing message loss while using Kafka v0.8.2.2. Please see details below and help me if you can. Issue: 2 messages produced to the same partition one by one – the Kafka producer returns the same offset back, which means the message produced earlier is lost. <http://stackoverflow.com/questi…>
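To make the reported symptom directly observable, here is a sketch using the modern Java producer rather than the 0.8.2.2 client from the thread; with acks=all and a blocking get() per send, two sends to the same partition should report strictly increasing offsets (the topic and partition are assumptions):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class OffsetCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "all"); // wait for the full in-sync replica set
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            RecordMetadata m1 = producer
                .send(new ProducerRecord<>("test-topic", 0, "k", "first")).get();
            RecordMetadata m2 = producer
                .send(new ProducerRecord<>("test-topic", 0, "k", "second")).get();
            // If both sends report the same offset, the first record was lost.
            System.out.printf("offsets: %d then %d%n", m1.offset(), m2.offset());
        }
    }
}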

Re: Kafka High Level Consumer Message Loss?

2015-07-12 Thread Mayuresh Gharat
… applications that consume all messages from one Kafka cluster. We found that the MessagesPerSec metric started to diverge after some time. One of them matches the MessagesInPerSec metric from the Kafka broker, while the other is lower than the broker metric and appears to have some …

Kafka High Level Consumer Message Loss?

2015-07-10 Thread Allen Wang
message loss. Both of them have the same OwnedPartitionsCount. Both of them have 0 MaxLag. How is that possible? Anything we should look at? Is the MaxLag metric not telling the truth? Thanks, Allen

Re: Message loss due to zookeeper ensemble doesn't work

2015-06-26 Thread noah
… and replay them when the brokers recover. On Fri, Jun 26, 2015 at 6:11 AM, bit1...@163.com wrote: Can someone explain this? Thanks! bit1...@163.com. From: bit1...@163.com, Date: 2015-06-25 11:57, To: users, Subject: Message loss due to zookeeper ense…

Re: Message loss due to zookeeper ensemble doesn't work

2015-06-26 Thread bit1...@163.com
Can someone explain this? Thanks! bit1...@163.com. From: bit1...@163.com, Date: 2015-06-25 11:57, To: users, Subject: Message loss due to zookeeper ensemble doesn't work. Hi, I have the offset saved in ZooKeeper. Because the ZooKeeper quorum doesn't work for a short time (leader is do…

Message loss due to zookeeper ensemble doesn't work

2015-06-24 Thread bit1...@163.com
Hi, I have the offset saved in ZooKeeper. Because the ZooKeeper quorum doesn't work for a short time (the leader is down and a new leader election is in progress), there is a chance that the offset isn't written to ZooKeeper, which will lose data. I would ask whether Kafka provides some mechanism for this kind of …
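The question predates Kafka-hosted offsets; for reference, a sketch of the later approach where the consumer commits offsets to Kafka itself (the __consumer_offsets topic) rather than ZooKeeper, so a brief ZooKeeper outage cannot drop a commit. Topic and group names are hypothetical:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class KafkaOffsetConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "example-group");   // hypothetical group id
        props.put("enable.auto.commit", "false"); // commit only after processing
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("example-topic")); // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    process(record);
                }
                consumer.commitSync(); // offsets go to Kafka, not ZooKeeper
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.printf("%d: %s%n", record.offset(), record.value());
    }
}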

RE: Kafka - preventing message loss

2015-04-28 Thread Aditya Auradkar
...@gmail.com] Sent: Tuesday, April 28, 2015 5:07 AM, To: users@kafka.apache.org, Subject: Kafka - preventing message loss. I am trying to set up a cluster where messages should never be lost once they are published. Say if I have 3 brokers, and if I configure the replicas to be 3 also, and if I consider max…

Kafka - preventing message loss

2015-04-28 Thread Gomathivinayagam Muthuvinayagam
I am trying to set up a cluster where messages should never be lost once they are published. Say I have 3 brokers, I configure the replication factor to be 3, and I consider the maximum number of failures to be 1; then I can achieve the above requirement. But when I post a message, how do I prevent Kafka from accept…
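A sketch of the settings usually combined for this requirement on later broker versions (min.insync.replicas arrived after this thread); the broker address, topic name, and counts are assumptions:

import java.util.Map;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;

public class DurableTopicSetup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical cluster
        try (Admin admin = Admin.create(props)) {
            NewTopic topic = new NewTopic("durable-topic", 3, (short) 3) // hypothetical name
                .configs(Map.of(
                    "min.insync.replicas", "2",               // tolerate 1 failure without loss
                    "unclean.leader.election.enable", "false" // never elect an out-of-sync leader
                ));
            admin.createTopics(Set.of(topic)).all().get();
        }
        // Producers should also set acks=all so a write is only
        // acknowledged once min.insync.replicas have it.
    }
}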

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-21 Thread Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -)
… seem to solve the problem. Regards, Jiang. From: users@kafka.apache.org, At: Jul 19 2014 00:06:52, To: JIANG WU (PRICEHISTORY) (BLOOMBERG/ 731 LEX -), users@kafka.apache.org, Subject: Re: message loss for sync producer, acks=2, topic replicas=3. Hi Jiang, …

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-21 Thread Guozhang Wang
From: users@kafka.apache.org, At: Jul 19 2014 00:06:52, To: JIANG WU (PRICEHISTORY) (BLOOMBERG/ 731 LEX -), users@kafka.apache.org, Subject: Re: message loss for sync producer, acks=2, topic replicas=3. Hi Jiang, One thing you can try is to set acks=-1, and set the replica.lag.max.messages properly …

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-20 Thread Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -)
… At: Jul 19 2014 00:06:52, To: JIANG WU (PRICEHISTORY) (BLOOMBERG/ 731 LEX -), users@kafka.apache.org, Subject: Re: message loss for sync producer, acks=2, topic replicas=3. Hi Jiang, One thing you can try is to set acks=-1, and set replica.lag.max.messages properly such that it will not kick all follower rep…

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-18 Thread Guozhang Wang
… "infinite" together with acks=-1. In this setting, if all brokers are in sync initially and only one broker is down afterwards, then there is no message loss, and producers and consumers will not be blocked. The above is the basic requirement for a fault-tolerant system. In more …

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-18 Thread Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -)
… re is used as "infinite". We use replica.lag.max.messages="infinite" together with acks=-1. In this setting, if all brokers are in sync initially and only one broker is down afterwards, then there is no message loss, and producers and consumers will not be blocked. The abov…

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-18 Thread Jun Rao
… acks=-1 with replica.lag.max.messages=1. In this config no message loss was found. This is the only config we found to satisfy 1. no message loss and 2. the service stays available when a single broker is down. Are there other configs that can achieve the same, or stronger con…

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-18 Thread Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -)
We tested acks=-1 with replica.lag.max.messages=1. In this config no message loss was found. This is the only config we found that satisfies 1. no message loss and 2. the service stays available when a single broker is down. Are there other configs that can achieve the same, or stronger …

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-16 Thread Guozhang Wang
… closely will be dropped out of the ISR more quickly. Guozhang. On Wed, Jul 16, 2014 at 5:44 AM, Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -) wrote: Guozhang, So this is the cause of message loss in my test where acks=2 and replicas=3: at one moment all 3 replicas, leader L, f…

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-16 Thread Jun Rao
… only after the message is committed. Thanks, Jun. On Wed, Jul 16, 2014 at 5:44 AM, Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -) wrote: Guozhang, So this is the cause of message loss in my test where acks=2 and replicas=3: at one moment all 3 replicas, leader L, fol…

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-16 Thread Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -)
Guozhang, So this is the cause of message loss in my test where acks=2 and replicas=3: at one moment all 3 replicas (leader L, followers F1 and F2) are in the ISR. A publisher sends a message m to L. F1 fetches m. Both L and F1 acknowledge m, so the send() is successful. Before F2 fetches m, L is …

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-15 Thread Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -)
When acks=-1 and the number of publisher threads is high, it always happens that only the leader remains in the ISR, and shutting down the leader will cause message loss. The leader election code shows that the new leader will be the first alive broker in the ISR list. So it's possible the new l…

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-15 Thread Guozhang Wang
… root cause? Thanks, Jiang. From: users@kafka.apache.org, At: Jul 15 2014 15:05:25, To: users@kafka.apache.org, Subject: Re: message loss for sync producer, acks=2, topic replicas=3. Guozhang, Please find the config below: Producer: …

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-15 Thread Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -)
… the leader, and loses m1 somehow. Could that be the root cause? Thanks, Jiang. From: users@kafka.apache.org, At: Jul 15 2014 15:05:25, To: users@kafka.apache.org, Subject: Re: message loss for sync producer, acks=2, topic replicas=3. Guozhang, Please find the config below: Producer: props.put…
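The snippet cuts off at the producer config; as a reference point, a minimal 0.8-era sync producer setup looks roughly like this (the broker list and topic are assumptions, and request.required.acks=2 mirrors the acks=2 described in the thread):

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class SyncAcks2Producer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092,broker2:9092,broker3:9092"); // assumed hosts
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("producer.type", "sync");       // synchronous send
        props.put("request.required.acks", "2");  // wait for leader plus one follower

        Producer<String, String> producer = new Producer<>(new ProducerConfig(props));
        producer.send(new KeyedMessage<>("test-topic", "message-1")); // hypothetical topic
        producer.close();
    }
}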

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-15 Thread Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -)
… PartitionCount: 1, ReplicationFactor: 3, Configs: retention.bytes=1000000. Thanks, Jiang. From: users@kafka.apache.org, At: Jul 15 2014 13:59:03, To: JIANG WU (PRICEHISTORY) (BLOOMBERG/ 731 LEX -), users@kafka.apache.org, Subject: Re: message loss for sync producer, acks=2, topic replicas=3. …

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-15 Thread Guozhang Wang
… users@kafka.apache.org, At: Jul 15 2014 13:27:50, To: JIANG WU (PRICEHISTORY) (BLOOMBERG/ 731 LEX -), users@kafka.apache.org, Subject: Re: message loss for sync producer, acks=2, topic replicas=3. Hello Jiang, Which version of Kafka are you using, and did you kill the broker with -9? …

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-15 Thread Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -)
Guozhang, I'm testing on 0.8.1.1; just kill pid, no -9. Regards, Jiang. From: users@kafka.apache.org, At: Jul 15 2014 13:27:50, To: JIANG WU (PRICEHISTORY) (BLOOMBERG/ 731 LEX -), users@kafka.apache.org, Subject: Re: message loss for sync producer, acks=2, topic replicas=3. Hello Jiang, …

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-15 Thread Guozhang Wang
Hello Jiang, Which version of Kafka are you using, and did you kill the broker with -9? Guozhang. On Tue, Jul 15, 2014 at 9:23 AM, Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -) wrote: Hi, I observed some unexpected message loss in a Kafka fault-tolerance test. In the test, a top…

message loss for sync producer, acks=2, topic replicas=3

2014-07-15 Thread Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -)
Hi, I observed some unexpected message loss in a Kafka fault-tolerance test. In the test, a topic with 3 replicas is created. A sync producer with acks=2 publishes to the topic. A consumer consumes from the topic and tracks message IDs. During the test, the leader is killed. Both producer and …
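The tracking consumer described in this test might look like the sketch below (assuming each message's value is a monotonically increasing integer ID; written against the modern Java client, not the 0.8.1.1 consumer used in the thread, and with hypothetical names):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GapDetector {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "gap-detector"); // hypothetical group id
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        long expected = 0; // next message ID we expect to see
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("loss-test")); // hypothetical topic
            while (true) {
                for (ConsumerRecord<String, String> r : consumer.poll(Duration.ofSeconds(1))) {
                    long id = Long.parseLong(r.value());
                    if (id != expected) {
                        System.out.printf("gap: expected %d, got %d%n", expected, id);
                    }
                    expected = id + 1;
                }
            }
        }
    }
}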

Re: will this cause message loss?

2013-11-14 Thread hsy...@gmail.com
Also, if you use HEAD, you can create more partitions at runtime; you just need a dynamic partitioner class, I think. On Thu, Nov 14, 2013 at 7:23 AM, Neha Narkhede wrote: There is no way to delete topics in Kafka yet. You can add partitions to existing topics, but you may have to use 0.8 HEAD si…

Re: will this cause message loss?

2013-11-14 Thread Neha Narkhede
There is no way to delete topics in Kafka yet. You can add partitions to existing topics, but you may have to use 0.8 HEAD since we have fixed a few bugs in the consumer. You can read about adding partitions here: https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-5.A…

will this cause message loss?

2013-11-14 Thread Yu, Libo
Hi team, we are using beta1. I am going to delete all topics and create them with more partitions, but I don't want to lose any messages. Assume the consumers are online all the time for the following steps, and the consumer's auto.offset.reset is set to largest. 1. Stop publishing to the brokers. …

Re: message loss

2013-08-22 Thread Neha Narkhede
I agree with you. If we include that knob, applications can choose their consistency vs. availability trade-off according to their respective requirements. I will file a JIRA for this. Thanks, Neha. On Thu, Aug 22, 2013 at 2:10 PM, Scott Clasen wrote: +1 for that knob on a per-topic basis, choosi…

Re: message loss

2013-08-22 Thread Scott Clasen
+1 for that knob on a per-topic basis; choosing consistency over availability would open Kafka to more use cases, no? Sent from my iPhone. On Aug 22, 2013, at 1:59 PM, Neha Narkhede wrote: Scott, Kafka replication aims to guarantee that committed writes are not lost. In other words, as…

Re: message loss

2013-08-22 Thread Neha Narkhede
Scott, Kafka replication aims to guarantee that committed writes are not lost. In other words, as long as the leader can be transitioned to a broker that was in the ISR, no data will be lost. For increased availability, if there are no other brokers in the ISR, we fall back to electing a broker that i…

message loss

2013-08-22 Thread Scott Clasen
So it looks like there is a Jepsen post coming on Kafka 0.8 replication, based on this that's circulating on Twitter: https://www.refheap.com/17932/raw. Understanding that Kafka isn't designed particularly to be partition tolerant, the result is not completely surprising. But my question is, is there s…

Re: Message loss in kafka when using java API.

2012-11-29 Thread Neha Narkhede
Hi Boris, In Kafka 0.7, the producer does not get any ACK from the server, so the protocol is fire-and-forget. What could happen is that if a broker is shutting down, the messages might still live in the producer's socket buffer, and the server can shut down before getting a chance to read the producer's …

Message loss in kafka when using java API.

2012-11-29 Thread boris kogan
Hi all, we have encountered a message-loss issue with Kafka: We have Kafka 0.7.0 installed in cluster mode on three nodes (one partition per node configured). We are using the Kafka Java API to send batches of messages (1,000 messages in each) in sync mode. After the last batch is sent, we close the producer. …
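The 0.7 API offered no acknowledgements; for contrast, a sketch of the same batch send with the modern Java producer, where flush()/close() block until buffered records are acknowledged (the broker address and topic are assumptions):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ConfirmedBatchSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "all"); // broker confirms each write
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 1000; i++) {
                producer.send(new ProducerRecord<>("bulk-topic", Integer.toString(i))); // hypothetical topic
            }
            producer.flush(); // block until every buffered record is acknowledged
        } // close() also flushes before shutting down
    }
}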