Re: Network partition leaves topic-partition leader as sole ISR despite min.isr=2 and producer acks=all settings

2024-08-14 Thread Sabit Nepal
unavailable until the replacement instance came back up and resumed acting as the broker. However, reviewing our broker and producer settings, I'm not sure why it's possible for the leader to have accepted some writes that were not able…

Re: Network partition leaves topic-partition leader as sole ISR despite min.isr=2 and producer acks=all settings

2024-08-12 Thread Kamal Chandraprakash
…instance came back up and resumed acting as the broker. However, reviewing our broker and producer settings, I'm not sure why it's possible for the leader to have accepted some writes that were not able to be replicated to the followers. Our topics use min.insync.replicas…

Network partition leaves topic-partition leader as sole ISR despite min.isr=2 and producer acks=all settings

2024-08-11 Thread Sabit Nepal
leader to have accepted some writes that were not able to be replicated to the followers. Our topics use min.insync.replicas=2 and our producers use acks=all configuration. In this scenario, with the changes not being replicated to other followers, I'd expect the records to have failed to be
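A minimal sketch of the producer side of the setup this thread describes — acks=all paired with a topic-level min.insync.replicas=2 — assuming a plain Java client; the topic name "orders" and the broker address are placeholders, not taken from the thread:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class AcksAllProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // acks=all: the leader responds only after the current in-sync replica set has the record.
            props.put(ProducerConfig.ACKS_CONFIG, "all");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringSerializer");
            // min.insync.replicas=2 is a *topic/broker* setting, not a producer one; with it,
            // an acks=all produce request is rejected while the ISR has shrunk below 2.
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("orders", "key", "value"));
            }
        }
    }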

Re: Sudden performance dip with acks=1

2023-08-23 Thread Prateek Kohli
Update: We can see the same behavior with acks=all as well. After running for some time, throughput drops a lot. What can I monitor to debug this issue? From: Prateek Kohli Sent: Monday, August 21, 2023 8:05:00 pm To: users@kafka.apache.org Subject: RE: Sudden

RE: Sudden performance dip with acks=1

2023-08-21 Thread Prateek Kohli
Attaching Grafana graphs for reference. Network and I/O threads are more than 60% idle. From: Prateek Kohli Sent: 21 August 2023 19:56 To: users@kafka.apache.org Subject: Sudden performance dip with acks=1 Hi, I am trying to test Kafka performance in my setup using kafka-perf scripts

Sudden performance dip with acks=1

2023-08-21 Thread Prateek Kohli
Hi, I am trying to test Kafka performance in my setup using the kafka-perf scripts provided by Kafka. I see a behavior in my Kafka cluster in the case of acks=1 which I am unable to understand. My run works as expected for some time, but after that suddenly "fetcher lag" starts…

Re: Semantics of acks=all

2020-12-11 Thread Stig Rohde Døssing
Thanks. On Fri, 11 Dec 2020 at 13:52, Fabio Pardi wrote: On 11/12/2020 13:20, Stig Rohde Døssing wrote: Hi, We have a topic with min.insync.replicas = 2 where each partition is replicated to 3 nodes. When we send…

Re: Semantics of acks=all

2020-12-11 Thread Fabio Pardi
On 11/12/2020 13:20, Stig Rohde Døssing wrote: Hi, We have a topic with min.insync.replicas = 2 where each partition is replicated to 3 nodes. When we send a produce request with acks=all, the request should fail if the records don't make it to at least 2…

Semantics of acks=all

2020-12-11 Thread Stig Rohde Døssing
Hi, We have a topic with min.insync.replicas = 2 where each partition is replicated to 3 nodes. When we send a produce request with acks=all, the request should fail if the records don't make it to at least 2 nodes. If the produce request fails, what does the partition leader do with…
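A sketch of how an acks=all failure surfaces in the Java client, relevant to the question above. The broker raises NotEnoughReplicasException when the ISR is too small before the append, and NotEnoughReplicasAfterAppendException when that is discovered only after the leader has already appended locally — in the latter case the leader keeps the records, so a producer retry can duplicate them. The helper name below is hypothetical:

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.errors.NotEnoughReplicasAfterAppendException;
    import org.apache.kafka.common.errors.NotEnoughReplicasException;

    final class SendAndCheck {
        // Hypothetical helper: send one record and log how the acks=all outcome surfaced.
        static void sendChecked(KafkaProducer<String, String> producer,
                                ProducerRecord<String, String> record) {
            producer.send(record, (metadata, exception) -> {
                if (exception instanceof NotEnoughReplicasAfterAppendException) {
                    // Leader appended locally, then noticed the ISR was too small;
                    // a retry of this record may produce a duplicate.
                    System.err.println("appended on leader, but under-replicated");
                } else if (exception instanceof NotEnoughReplicasException) {
                    // ISR below min.insync.replicas; the record was not appended.
                    System.err.println("rejected before append: " + exception.getMessage());
                } else if (exception != null) {
                    System.err.println("send failed: " + exception);
                } else {
                    System.out.println("acked at offset " + metadata.offset());
                }
            });
        }
    }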

How do I use the Java kafka-client to test the difference between acks=0, 1, and all?

2020-04-04 Thread 一直以来
How do I use the Java kafka-client to test the difference between acks=0, 1, and all?

Re: In which *.java file are properties like acks defined?

2020-04-02 Thread James Olsen
org.apache.kafka.clients.producer.ProducerConfig and org.apache.kafka.clients.consumer.ConsumerConfig. On 3/04/2020, at 04:30, 一直以来 <279377...@qq.com> wrote: Properties props = new Properties(); props.put("bootstrap.servers", "localhost:9092");
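A short sketch of what James is pointing at — the config keys are defined as String constants in those classes, so they can be used instead of bare strings:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    class ConfigKeys {
        static Properties producerProps() {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // "bootstrap.servers"
            props.put(ProducerConfig.ACKS_CONFIG, "all");                          // "acks"
            return props;
        }
    }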

In which *.java file are properties like acks defined?

2020-04-02 Thread 一直以来
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("acks", "all");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

Re: min.insync.replicas and producer acks

2020-01-25 Thread Pushkar Deole
I mean, the producer acks to be 'none'. On Sat, Jan 25, 2020 at 4:49 PM Pushkar Deole wrote: Thank you for a quick response. What would happen if I set the producer acks to be 'one' and min.insync.replicas to 2. In this case the producer will return…

Re: min.insync.replicas and producer acks

2020-01-25 Thread M. Manna
Pushkar, On Sat, 25 Jan 2020 at 11:19, Pushkar Deole wrote: Thank you for a quick response. What would happen if I set the producer acks to be 'one' and min.insync.replicas to 2. In this case the producer will return when only the leader received the message but…

Re: min.insync.replicas and producer acks

2020-01-25 Thread Pushkar Deole
Thank you for a quick response. What would happen if I set the producer acks to be 'one' and min.insync.replicas to 2. In this case the producer will return when only the leader received the message but will not wait for other replicas to receive the message. In this case, how min.insync.replicas…

Re: min.insync.replicas and producer acks

2020-01-24 Thread Boyang Chen
Hey Pushkar, producer acks only has 3 options: none, one, or all. You cannot nominate an arbitrary number. On Fri, Jan 24, 2020 at 7:53 PM Pushkar Deole wrote: Hi All, I am a bit confused about min.insync.replicas and producer acks. Do these two configurations achieve the…

min.insync.replicas and producer acks

2020-01-24 Thread Pushkar Deole
Hi All, I am a bit confused about min.insync.replicas and producer acks. Do these two configurations achieve the same thing? E.g., if I set min.insync.replicas to 2, can I also achieve that by setting producer acks to 2 so the producer won't get an ack until 2 replicas have received the message?
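The two settings are complementary rather than interchangeable: acks is a producer setting that decides whether to wait for the ISR at all, while min.insync.replicas is a topic/broker setting that puts a floor on how small the ISR may be for an acks=all write to succeed. A sketch, assuming the Java AdminClient and a placeholder topic name:

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateDurableTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            try (Admin admin = Admin.create(props)) {
                // Topic-side floor: an acks=all write fails unless at least 2 replicas are in sync.
                NewTopic topic = new NewTopic("demo", 3, (short) 3)
                        .configs(Map.of("min.insync.replicas", "2"));
                admin.createTopics(List.of(topic)).all().get();
            }
            // The floor is only enforced for producers sending with acks=all;
            // with acks=1 the leader alone acknowledges, regardless of min.insync.replicas.
        }
    }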

Re: Producer throughput with varying acks=0,1,-1

2018-11-19 Thread Matthias J. Sax
Nokia - IN/Bangalore) wrote: But if sends are not done in a blocking way (with .get()), how does acks matter? -Original Message- From: Matthias J. Sax Sent: Saturday, November 17, 2018 12:15 AM To: users@kafka.apache.org Subject: Re: Producer throughput…

RE: Producer throughput with varying acks=0,1,-1

2018-11-18 Thread Srinivas, Kaushik (Nokia - IN/Bangalore)
But if sends are not done in a blocking way (with .get()), how does acks matter? -Original Message- From: Matthias J. Sax Sent: Saturday, November 17, 2018 12:15 AM To: users@kafka.apache.org Subject: Re: Producer throughput with varying acks=0,1,-1 If you enable acks, it's not fire…

Re: Producer throughput with varying acks=0,1,-1

2018-11-16 Thread Matthias J. Sax
If you enable acks, it's not fire and forget any longer. -Matthias On 11/16/18 1:00 AM, Abhishek Choudhary wrote: Hi, I have been doing some performance tests with a kafka cluster for my project. I have a question regarding the send call and the 'acks' property…

Producer throughput with varying acks=0,1,-1

2018-11-16 Thread Abhishek Choudhary
Hi, I have been doing some performance tests with a kafka cluster for my project. I have a question regarding the send call and the 'acks' property of the producer. I observed the numbers below with the following invocation of the send call. This is a simple fire and forget call. producer.send(record); The…
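A sketch of the three send() styles this thread contrasts. Note that acks controls when the broker answers, not whether the client blocks — which is why it still matters for non-blocking sends (Matthias's point above): slower acks delay the callbacks and back-pressure the producer's in-flight batches.

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    class SendStyles {
        static void demo(KafkaProducer<String, String> producer,
                         ProducerRecord<String, String> record) throws Exception {
            // 1. Fire and forget: the future is ignored; errors surface only via retries/logs.
            producer.send(record);

            // 2. Blocking: wait for the broker's ack (shaped by acks=0/1/all) or an error.
            RecordMetadata meta = producer.send(record).get();
            System.out.println("acked at offset " + meta.offset());

            // 3. Non-blocking with callback: the callback fires once the ack (or error) arrives.
            producer.send(record, (m, e) -> {
                if (e != null) e.printStackTrace();
            });
        }
    }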

producing with acks=all (2 replicas) is 100x slower and fails on timeouts with Kafka 1.0.1

2018-05-29 Thread Ofir Manor
Hi all, I'm running into a weird slowness when using acks=all on Kafka 1.0.1. I reproduced it on a 3-node cluster (each 4 cores/14GB RAM), using a topic with replication factor 2. I used the built-in kafka-producer-perf-test.sh tool with 1KB messages. With all defaults, it can send 100K

Producer performance is awful when acks=all

2017-10-27 Thread Vijay Prakash
cer thread, 3x asynchronous replication", I get about 550k records/sec which seems acceptable for the perf loss due to running on Windows. However, when I set acks=all to try synchronous replication, I drop to about 120k records/sec, which is a LOT worse than the numbers in the blog post.

Re: Does kafka send the acks response to the producer after flush the messages to the disk or just keep them in the memory

2017-02-28 Thread Guozhang Wang
(likely in memory) on N partition replicas. Guozhang On Sun, Feb 26, 2017 at 1:39 AM, Jiecxy <253441...@qq.com> wrote: Hi guys, Does kafka send the acks response to the producer…

Re: Does kafka send the acks response to the producer after flush the messages to the disk or just keep them in the memory

2017-02-28 Thread Jiecxy
…nse of the produce request to producer after it has been replicated (likely in memory) on N partition replicas. Guozhang On Sun, Feb 26, 2017 at 1:39 AM, Jiecxy <253441...@qq.com> wrote: Hi guys, Does kafka send the acks response…

Re: Does kafka send the acks response to the producer after flush the messages to the disk or just keep them in the memory

2017-02-26 Thread Guozhang Wang
com> wrote: Hi guys, Does kafka send the acks response to the producer after flushing the messages to the disk or just keep them in memory? How does Kafka flush the messages? By calling a system call like fsync()? Thanks Chen -- Guozhang

Does kafka send the acks response to the producer after flush the messages to the disk or just keep them in the memory

2017-02-26 Thread Jiecxy
Hi guys, Does kafka send the acks response to the producer after flushing the messages to the disk, or does it just keep them in memory? How does Kafka flush the messages? By calling a system call like fsync()? Thanks Chen

Producer acks=1, clean broker shutdown and data loss

2017-02-18 Thread Nick Travers
Hi - I'm trying to understand the expected behavior of the scenario in which I have a producer with `acks=1` (i.e. partition leader acks only) and I cleanly shut down a broker (via `KafkaServer#shutdown`). I am running my test scenario with three brokers (0.10.1.1), with a default replication…
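A hedged sketch of the producer settings usually combined to ride out a leader move like the controlled shutdown described above. Note that enable.idempotence only exists in clients newer (0.11+) than the 0.10.1.1 cluster in this thread, so this is not a drop-in fix for that setup:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    class DurableProducerConfig {
        static Properties props() {
            Properties props = new Properties();
            props.put(ProducerConfig.ACKS_CONFIG, "all");                // wait for the ISR, not just the old leader
            props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE); // resend when leadership moves
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);   // de-duplicate those retries (0.11+)
            return props;
        }
    }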

Re: KAFKA-3703: Graceful close for consumers and producer with acks=0

2017-02-02 Thread Manikumar
It is fixed on trunk and will be part of the upcoming 0.10.2.0 release. On Fri, Feb 3, 2017 at 10:58 AM, Pascu, Ciprian (Nokia - FI/Espoo) <ciprian.pa...@nokia.com> wrote: Hi, Can anyone tell me in which release this fix will be present? https://github.com/apache/kafka/pull/1836…

KAFKA-3703: Graceful close for consumers and producer with acks=0

2017-02-02 Thread Pascu, Ciprian (Nokia - FI/Espoo)
Hi, Can anyone tell me in which release this fix will be present? https://github.com/apache/kafka/pull/1836 It is not present in the current release (0.10.1.1), which I don't quite understand, because it has been committed in November last year to the trunk. To which branch the 0.10.1.1 tag

Re: __consumer_offsets topic acks

2016-12-17 Thread Ewen Cheslack-Postava
(either on a per-topic basis or globally if that's an acceptable availability tradeoff for you). -Ewen On Fri, Dec 16, 2016 at 6:15 PM, Fang Wong wrote: Hi, What is the value of acks set for the kafka internal topic __consumer_offsets? I know the default replication…

__consumer_offsets topic acks

2016-12-16 Thread Fang Wong
Hi, What is the value of acks set for the kafka internal topic __consumer_offsets? I know the default replication factor for __consumer_offsets is 3, and we are using version 0.9.0.1, and set min.insync.replicas = 2 in our server.properties. We noticed some partitions of __consumer_offsets have ISR with

Re: Producer Acks All vs -1

2016-07-18 Thread Dustin Cote
…2016 at 3:38 PM, Malcolm, Brian (Centers of Excellence - Integration) wrote: I am using version 0.10.0 of Kafka and the documentation says the Producer acks value can be [all, -1, 0, 1]. What is the difference between the all and -1 settings? -- Dustin Cote confluent.io

Producer Acks All vs -1

2016-07-18 Thread Malcolm, Brian (Centers of Excellence - Integration)
I am using version 0.10.0 of Kafka and the documentation says the Producer acks value can be [all, -1, 0, 1]. What is the difference between the all and -1 settings?

Re: acks

2016-01-20 Thread Dana Powers
Hi Fang, take a look at the docs on KIP-1 for some background info on acks policy: https://cwiki.apache.org/confluence/display/KAFKA/KIP-1+-+Remove+support+of+request.required.acks -Dana On Wed, Jan 20, 2016 at 3:50 PM, Fang Wong wrote: > We are using kafka 0.8.2.1 and set acks to 2, see

acks

2016-01-20 Thread Fang Wong
We are using kafka 0.8.2.1 and set acks to 2, and see the following warning: sent a produce request with request.required.acks of 2, which is now deprecated and will be removed in next release. Valid values are -1, 0 or 1. Please consult Kafka documentation for supported and recommended configuration

Re: What is the benefit of using acks=all and min.insync.replicas over e.g. acks=3

2015-12-01 Thread Andreas Flinck
"num.replica.fetchers": (1) "replica.fetch.wait.max.ms<http://replica.fetch.wait.max.ms/>": (500), "num.recovery.threads.per.data.dir": (1) The producer properties we explicitly set are the following; block.on.buffer.full=false client.id<http://client.id/>=MZ max.request

Re: What is the benefit of using acks=all and min.insync.replicas over e.g. acks=3

2015-11-28 Thread Prabhjot Bharaj
…(within parenthesis): "num.replica.fetchers": (1) "replica.fetch.wait.max.ms": (500), "num.recovery.threads.per.data.dir": (1). The producer properties we explicitly set are the following: block.on.buffer.full=false client.id=MZ max.request.size=104857…

Re: What is the benefit of using acks=all and min.insync.replicas over e.g. acks=3

2015-11-28 Thread Andreas Flinck
…configuration (within parenthesis): "num.replica.fetchers": (1) "replica.fetch.wait.max.ms": (500), "num.recovery.threads.per.data.dir": (1). The producer properties we explicitly set are the following: block.on.buffer.full=false client.id=MZ max.request.size=1048576 acks=all retries…

Re: What is the benefit of using acks=all and min.insync.replicas over e.g. acks=3

2015-11-28 Thread Prabhjot Bharaj
your cluster. Thanks, Prabhjot On Sat, Nov 28, 2015 at 3:54 PM, Andreas Flinck <andreas.fli...@digitalroute.com> wrote: Great, thanks for the information! So it is definitely acks=all we want to go for. Unfortunately we run into a blocking issue in our production-like…

Re: What is the benefit of using acks=all and min.insync.replicas over e.g. acks=3

2015-11-28 Thread Andreas Flinck
Great, thanks for the information! So it is definitely acks=all we want to go for. Unfortunately we run into a blocking issue in our production-like test environment which we have not been able to find a solution for. So here it is; ANY idea on how we could possibly find a solution is very

Re: What is the benefit of using acks=all and min.insync.replicas over e.g. acks=3

2015-11-28 Thread Prabhjot Bharaj
Hi Gwen, How about the min.isr.replicas property? Is it still valid in the new version 0.9? We could get 3 out of 4 replicas in sync if we set its value to 3. Correct? Thanks, Prabhjot On…

Re: What is the benefit of using acks=all and min.insync.replicas over e.g. acks=3

2015-11-27 Thread Gwen Shapira
Thanks, Prabhjot On Nov 28, 2015 10:20 AM, "Gwen Shapira" wrote: In your scenario, you are receiving acks from 3 replicas while it is possible to have 4 in the ISR. This means that one replica can be up to 4000 messages (by default) behind…

Re: What is the benefit of using acks=all and min.insync.replicas over e.g. acks=3

2015-11-27 Thread Prabhjot Bharaj
Hi Gwen, How about the min.isr.replicas property? Is it still valid in the new version 0.9? We could get 3 out of 4 replicas in sync if we set its value to 3. Correct? Thanks, Prabhjot On Nov 28, 2015 10:20 AM, "Gwen Shapira" wrote: In your scenario, you are receiving acks…

Re: What is the benefit of using acks=all and min.insync.replicas over e.g. acks=3

2015-11-27 Thread Gwen Shapira
In your scenario, you are receiving acks from 3 replicas while it is possible to have 4 in the ISR. This means that one replica can be up to 4000 messages (by default) behind others. If a leader crashes, there is a 33% chance this replica will become the new leader, thereby losing up to 4000

What is the benefit of using acks=all and min.insync.replicas over e.g. acks=3

2015-11-27 Thread Andreas Flinck
Hi all The reason why I need to know is that we have seen an issue when using acks=all, forcing us to quickly find an alternative. I leave the issue out of this post, but will probably come back to that! My question is about acks=all and min.insync.replicas property. Since we have found a

Re: High delay during controlled shutdown and acks=-1

2015-11-02 Thread Becket Qin
Hi Federico, What is your replica.lag.time.max.ms? When acks=-1, the ProducerResponse won't return until all the brokers in the ISR get the message. During controlled shutdown, the shutting-down broker is doing a lot of leader migration and could slow down on fetching data. The broker won't

High delay during controlled shutdown and acks=-1

2015-11-02 Thread Federico Giraud
Hi, I have a few java async producers sending data to a 4-node Kafka cluster version 0.8.2, containing a few thousand topics, all with replication factor 2. When I use acks=1 and trigger a controlled shutdown + restart on one broker, the producers will send data to the new leader, reporting a very

Re: kafka-producer-perf-test.sh - No visible difference between request-num-acks 1 and -1

2015-08-21 Thread Tao Feng
…2 Leader: 1 Replicas: 1,3,4 Isr: 4,1,3 Topic: tops1 Partition: 3 Leader: 2 Replicas: 2,4,5 Isr: 4,2,5 This is the output of the kafka-producer-perf-test.sh for request-num-acks 1 and request-num-acks -1: root@x.x.x.x:~# date;time kafka-producer-perf-test.sh…

kafka-producer-perf-test.sh - No visible difference between request-num-acks 1 and -1

2015-08-21 Thread Prabhjot Bharaj
…5,2,3 Isr: 5,3,2 Topic: tops1 Partition: 2 Leader: 1 Replicas: 1,3,4 Isr: 4,1,3 Topic: tops1 Partition: 3 Leader: 2 Replicas: 2,4,5 Isr: 4,2,5 This is the output of the kafka-producer-perf-test.sh for request-num-acks 1 and request-num-acks -1: root@x.x.x.x:~# date;time kafka-producer-perf-test.sh…

Re: Kafka New Producer setting acks=2 in 0.8.2.1

2015-05-14 Thread pushkar priyadarshi
…nsult Kafka documentation for supported and recommended configuration. I have a particular use case where I want replication to be acknowledged by exactly (replicationFactor - 1) brokers, or message publish should fail if that many acks are not possible. regards -- Guozhang

Re: Kafka New Producer setting acks=2 in 0.8.2.1

2015-05-14 Thread Guozhang Wang
…l be removed in next release. Valid values are -1, 0 or 1. Please consult Kafka documentation for supported and recommended configuration. I have a particular use case where I want replication to be acknowledged by exactly (replicationFactor - 1) brokers, or message publish should…

Kafka New Producer setting acks=2 in 0.8.2.1

2015-05-14 Thread pushkar priyadarshi
many Acks are not possible. regards

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-21 Thread Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -)
…seem to solve the problem. Regards, Jiang From: users@kafka.apache.org At: Jul 19 2014 00:06:52 To: JIANG WU (PRICEHISTORY) (BLOOMBERG/ 731 LEX -), users@kafka.apache.org Subject: Re: message loss for sync producer, acks=2, topic replicas=3 Hi Jiang,…

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-21 Thread Guozhang Wang
From: users@kafka.apache.org At: Jul 19 2014 00:06:52 To: JIANG WU (PRICEHISTORY) (BLOOMBERG/ 731 LEX -), users@kafka.apache.org Subject: Re: message loss for sync producer, acks=2, topic replicas=3 Hi Jiang, One thing you can try is to set acks=-1, and set the replica.lag.max.messages…

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-20 Thread Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -)
…14 00:06:52 To: JIANG WU (PRICEHISTORY) (BLOOMBERG/ 731 LEX -), users@kafka.apache.org Subject: Re: message loss for sync producer, acks=2, topic replicas=3 Hi Jiang, One thing you can try is to set acks=-1, and set the replica.lag.max.messages properly such that it will not kick all follower replicas…

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-18 Thread Guozhang Wang
Hi Jiang, One thing you can try is to set acks=-1, and set the replica.lag.max.messages properly such that it will not kick all follower replicas out immediately under your produce load. Then if one of the follower replicas is lagging and the other is not, this one will be dropped out of the ISR, and when

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-18 Thread Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -)
…re is used as "infinite". We use replica.lag.max.messages="infinite" together with acks=-1. In this setting, if all brokers are in sync initially, and only one broker is down afterwards, then there is no message loss, and producers and consumers will not be blocked. The above…

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-18 Thread Jun Rao
ISR more quickly. Guozhang On Wed, Jul 16, 2014 at 5:44 AM, Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -) wrote: Guozhong, So this is the cause of message loss in my test where acks=2 and replicas=3: At one moment…

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-18 Thread Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -)
…AM, Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -) wrote: Guozhong, So this is the cause of message loss in my test where acks=2 and replicas=3: At one moment all 3 replicas, leader L, followers F1 and F2 are in ISR. A publisher sends a message m to L. F1 fetches…

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-16 Thread Guozhang Wang
…closely will be dropped out of ISR more quickly. Guozhang On Wed, Jul 16, 2014 at 5:44 AM, Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -) wrote: Guozhong, So this is the cause of message loss in my test where acks=2 and replicas=3: At one moment all 3 replicas, leader L, followers…

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-16 Thread Jun Rao
only after the message is committed. Thanks, Jun On Wed, Jul 16, 2014 at 5:44 AM, Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -) wrote: Guozhong, So this is the cause of message loss in my test where acks=2 and replicas=3: At one moment all 3 replicas, leader L, followers…

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-16 Thread Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -)
Guozhong, So this is the cause of message loss in my test where acks=2 and replicas=3: At one moment all 3 replicas, leader L, followers F1 and F2 are in ISR. A publisher sends a message m to L. F1 fetches m. Both L and F1 acknowledge m so the send() is successful. Before F2 fetches m, L is

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-15 Thread Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -)
…-), users@kafka.apache.org At: Jul 15 2014 16:11:17 That could be the cause, and it can be verified by changing the acks to -1 and checking the data loss ratio then. Guozhang On Tue, Jul 15, 2014 at 12:49 PM, Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -) wrote: Guozhang, My coworker came…

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-15 Thread Guozhang Wang
That could be the cause, and it can be verified by changing the acks to -1 and checking the data loss ratio then. Guozhang On Tue, Jul 15, 2014 at 12:49 PM, Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -) wrote: Guozhang, My coworker came up with an explanation: at one moment the leader…

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-15 Thread Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -)
Guozhang, My coworker came up with an explanation: at one moment the leader L and two followers F1, F2 are all in ISR. The producer sends a message m1 and receives acks from L and F1. Before the message is replicated to F2, L is down. In the following leader election, F2, instead of F1, becomes

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-15 Thread Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -)
PartitionCount:1 ReplicationFactor:3 Configs:retention.bytes=100 Thanks, Jiang From: users@kafka.apache.org At: Jul 15 2014 13:59:03 To: JIANG WU (PRICEHISTORY) (BLOOMBERG/ 731 LEX -), users@kafka.apache.org Subject: Re: message loss for sync producer, acks=2, topic replicas=3

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-15 Thread Guozhang Wang
…che.org At: Jul 15 2014 13:27:50 To: JIANG WU (PRICEHISTORY) (BLOOMBERG/ 731 LEX -), users@kafka.apache.org Subject: Re: message loss for sync producer, acks=2, topic replicas=3 Hello Jiang, Which version of Kafka are you using, and did you kill the broker with -9?

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-15 Thread Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -)
Guozhang, I'm testing on 0.8.1.1; just kill pid, no -9. Regards, Jiang From: users@kafka.apache.org At: Jul 15 2014 13:27:50 To: JIANG WU (PRICEHISTORY) (BLOOMBERG/ 731 LEX -), users@kafka.apache.org Subject: Re: message loss for sync producer, acks=2, topic replicas=3 Hello Jiang,

Re: message loss for sync producer, acks=2, topic replicas=3

2014-07-15 Thread Guozhang Wang
…ic with 3 replicas is created. A sync producer with acks=2 publishes to the topic. A consumer consumes from the topic and tracks message ids. During the test, the leader is killed. Both producer and consumer continue to run for a while. After the producer stops, the consumer re…

message loss for sync producer, acks=2, topic replicas=3

2014-07-15 Thread Jiang Wu (Pricehistory) (BLOOMBERG/ 731 LEX -)
Hi, I observed some unexpected message loss in kafka fault tolerant test. In the test, a topic with 3 replicas is created. A sync producer with acks=2 publishes to the topic. A consumer consumes from the topic and tracks message ids. During the test, the leader is killed. Both producer and
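A sketch of the shape of the test harness described above, rewritten for the modern Java producer — which no longer accepts acks=2, only 0, 1, and all/-1 — so acks=all stands in; the topic name and id scheme are placeholders:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class AckTrackingProducer {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("acks", "all");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                for (int i = 0; i < 100_000; i++) {
                    String id = Integer.toString(i);
                    // Blocking send: only ids whose get() returned count as acknowledged.
                    producer.send(new ProducerRecord<>("test-topic", id, id)).get();
                    System.out.println("acked " + id); // the consumer side diffs against this log
                }
            }
        }
    }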

Re: Sync producers stuck waiting for 2 acks

2014-02-12 Thread Joel Koshy
…ltiple partitions and replication factor 2 - run producer performance script on 4 VMs in a sync mode with 2 acks to send 1M messages -Original Message- From: Joel Koshy [mailto:jjkosh...@gmail.com] Sent: Wednesday, February 12, 2014 3:40 PM To: users@kafka.apache.org…

RE: Sync producers stuck waiting for 2 acks

2014-02-12 Thread Michael Popov
Thanks Joel! I found this configuration setting in "Producer Configs". I guess it means each producer sets this parameter as part of connection settings, like a number of acks. I checked the information in Zookeeper and found out that 2 of the brokers are missing. The VMs with the

Re: Sync producers stuck waiting for 2 acks

2014-02-12 Thread Joel Koshy
…0.8. When I configure sync producers to expect 2 acks for each "write" request, some of the producers get stuck. It looks like the broker's response is not delivered back. This happened with the original Kafka performance tools and with a test tool built using a custom…

Sync producers stuck waiting for 2 acks

2014-02-12 Thread Michael Popov
I am running a test deployment of Kafka 0.8. When I configure sync producers to expect 2 acks for each "write" request, some of the producers get stuck. It looks like the broker's response is not delivered back. This happened with the original Kafka performance tools and with a test tool

Re: Kafka performance test: "--request-num-acks -1" kills throughput

2014-02-04 Thread Jun Rao
From: Jun Rao [mailto:jun...@gmail.com] Sent: Monday, February 3, 2014 9:11 PM To: users@kafka.apache.org Subject: Re: Kafka performance test: "--request-num-acks -1" kills throughput Michael, Your understanding is mostly correct. For 2, the follower…

RE: Kafka performance test: "--request-num-acks -1" kills throughput

2014-02-04 Thread Michael Popov
users@kafka.apache.org Subject: Re: Kafka performance test: "--request-num-acks -1" kills throughput Michael, Your understanding is mostly correct. For 2, the follower will issue another fetch request as soon as it finishes processing the response of the previous fetch (by adding data, if any,

Re: Kafka performance test: "--request-num-acks -1" kills throughput

2014-02-03 Thread Jun Rao
replica.fetch.wait.max.ms | throughput for acks=-1 | throughput for acks=1
50 | 1311 | …

RE: Kafka performance test: "--request-num-acks -1" kills throughput

2014-02-03 Thread Michael Popov
replica.fetch.wait.max.ms | throughput for acks=-1 | throughput for acks=1
50 | 1311 | …

Re: Kafka performance test: "--request-num-acks -1" kills throughput

2014-02-03 Thread Jun Rao
…replica.lag.time.max.ms=1 replica.lag.max.messages=4000 In most cases the tests were executed with "out-of-box" settings, which don't change "replica" configuration. We are running these tests on very weak machines. If absolute throughput…

RE: Kafka performance test: "--request-num-acks -1" kills throughput

2014-02-03 Thread Michael Popov
…We are running these tests on very weak machines. If absolute throughput numbers are not as high as in other people's tests, that's understandable. The main concern is why throughput drops 4-10 times when the number of expected acks is not 1. Should we wait for newer versions of Kafka…

Re: Kafka performance test: "--request-num-acks -1" kills throughput

2014-01-31 Thread Jun Rao
…--message-size 1024 --request-num-acks -1 --sync --messages 10 -threads 1 --show-detailed-stats --reporting-interval 1000 --topics d2111 | grep -v "at " I assume producer uses default timeout of 3000ms in my tests. I ran a few data processing ope…

RE: Kafka performance test: "--request-num-acks -1" kills throughput

2014-01-31 Thread Michael Popov
/kafka-producer-perf-test.sh --broker-list 10.0.0.8:9092,10.0.0.10:9092 --compression-codec 0 --message-size 1024 --request-num-acks -1 --sync --messages 10 -threads 1 --show-detailed-stats --reporting-interval 1000 --topics d2111 | grep -v "at " I assume producer uses default timeout

Re: Kafka performance test: "--request-num-acks -1" kills throughput

2014-01-31 Thread Jun Rao
bin/kafka-producer-perf-test.sh --broker-list 10.0.0.8:9092,10.0.0.10:9092 --compression-codec 0 --message-size 1024 --request-num-acks 1 --sync --messages 10 -threads 10 --show-detailed-stats --reporting-interval 1000 --topics d1 | grep -v "at " Results of 4…

RE: Kafka performance test: "--request-num-acks -1" kills throughput

2014-01-30 Thread Michael Popov
…ucers even with global locks would not interfere with each other. Changing a single configuration parameter, the number of required acks, consistently reduced system throughput in all tests. And this drop of system throughput is too big to ignore. Is there a global lock on the server side that contro…

RE: Kafka performance test: "--request-num-acks -1" kills throughput

2014-01-30 Thread Michael Popov
--topic d1 Commands to run a test producer looked like this: bin/kafka-producer-perf-test.sh --broker-list 10.0.0.8:9092,10.0.0.10:9092 --compression-codec 0 --message-size 1024 --request-num-acks 1 --sync --messages 10 -threads 10 --show-detailed-stats --reporting-interval 1000

Re: Kafka performance test: "--request-num-acks -1" kills throughput

2014-01-29 Thread Jun Rao
multiple platforms: Linux and Windows. For test purposes I create topics with 2 replicas and multiple partitions. In all deployments, running test producers that wait for both replicas' acks practically kills Kafka throughput. For example, on the following deployment on Linux machines…

Re: Kafka performance test: "--request-num-acks -1" kills throughput

2014-01-29 Thread Neha Narkhede
that can scale. Kafka looks like the right system for this role. I am running performance tests on multiple platforms: Linux and Windows. For test purposes I create topics with 2 replicas and multiple partitions. In all deployments running test producers that wait for both replicas'…

Kafka performance test: "--request-num-acks -1" kills throughput

2014-01-29 Thread Michael Popov
producers that wait for both replicas' acks practically kills Kafka throughput. For example, on the following deployment on Linux machines: 2 Kafka brokers, 1 Zookeeper node, 4 client hosts to create load, 4 topics with 10 partitions each and 2 replicas - running tests with…