unavailable until the replacement instance came back up and resumed acting as
the broker.
However, reviewing our broker and producer settings, I'm not sure why it's
possible for the leader to have accepted some writes that were not able to
be replicated to the followers. Our topics use min.insync.replicas=2 and
our producers use acks=all configuration. In this scenario, with the
changes not being replicated to other followers, I'd expect the records to
have failed to be
Update: We can see the same behavior with acks=all as well. After running for
some time, throughput drops a lot.
What can I monitor to debug this issue?
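Since the question above is about what to watch, here is a minimal sketch of pulling the relevant broker gauges over JMX. It assumes the broker was started with JMX enabled (e.g. JMX_PORT=9999); the host, port and class name are hypothetical, but the two MBean names are the broker's documented UnderReplicatedPartitions and replica-fetcher MaxLag metrics, which are the first things to check when "fetcher lag" grows and throughput drops.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BrokerLagCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical JMX endpoint of one broker.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Partitions whose followers have fallen out of sync with the leader.
            Object underReplicated = mbs.getAttribute(new ObjectName(
                    "kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions"),
                    "Value");
            // Largest lag, in messages, across this broker's replica fetcher threads.
            Object maxLag = mbs.getAttribute(new ObjectName(
                    "kafka.server:type=ReplicaFetcherManager,name=MaxLag,clientId=Replica"),
                    "Value");
            System.out.println("UnderReplicatedPartitions = " + underReplicated);
            System.out.println("ReplicaFetcher MaxLag     = " + maxLag);
        } finally {
            connector.close();
        }
    }
}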
From: Prateek Kohli
Sent: Monday, August 21, 2023 8:05:00 pm
To: users@kafka.apache.org
Subject: RE: Sudden performance dip with acks=1
Attaching Grafana graphs for reference.
Network and I/O threads are more than 60% idle.
From: Prateek Kohli
Sent: 21 August 2023 19:56
To: users@kafka.apache.org
Subject: Sudden performance dip with acks=1
Hi,
I am trying to test Kafka performance in my setup using kafka-perf scripts
provided by Kafka.
I see a behavior in my Kafka cluster in case of acks=1, which I am unable to
understand.
My run works as expected for some time, but after that suddenly "fetcher lag"
starts t
Thanks.
On Fri, 11 Dec 2020 at 13:52, Fabio Pardi wrote:
>
> On 11/12/2020 13:20, Stig Rohde Døssing wrote:
Hi,
We have a topic with min.insync.replicas = 2 where each partition is
replicated to 3 nodes.
When we send a produce request with acks=all, the request should fail if
the records don't make it to at least 2 nodes.
If the produce request fails, what does the partition leader do wit
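For reference, a minimal sketch of the producer side of the scenario being asked about (broker address and topic name are hypothetical; the topic is assumed to already have replication factor 3 and min.insync.replicas=2). With acks=all, the send future completes exceptionally, typically with a NotEnoughReplicas-type error, when fewer than two replicas are in sync:

import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.NotEnoughReplicasException;
import org.apache.kafka.common.serialization.StringSerializer;

public class AcksAllExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            try {
                // Blocks until the leader and the in-sync followers have the record,
                // or fails if the ISR has shrunk below min.insync.replicas.
                producer.send(new ProducerRecord<>("my-topic", "key", "value")).get();
            } catch (InterruptedException | ExecutionException e) {
                if (e.getCause() instanceof NotEnoughReplicasException) {
                    System.err.println("Write rejected: ISR below min.insync.replicas");
                } else {
                    System.err.println("Send failed: " + e);
                }
            }
        }
    }
}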
Hi, how do I use the Java kafka-client to test the difference between acks=0,
1, and all?
org.apache.kafka.clients.producer.ProducerConfig
org.apache.kafka.clients.consumer.ConsumerConfig
> On 3/04/2020, at 04:30, 一直以来 <279377...@qq.com> wrote:
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("acks", "all");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
I mean setting the producer acks to 'none'.
On Sat, Jan 25, 2020 at 4:49 PM Pushkar Deole wrote:
Pushkar,
On Sat, 25 Jan 2020 at 11:19, Pushkar Deole wrote:
Thank you for a quick response.
What would happen if I set the producer acks to be 'one' and
min.insync.replicas to 2? In this case the producer will return when only the
leader has received the message but will not wait for other replicas to receive
the message. In this case, how min.insync.r
Hey Pushkar,
The producer acks setting only has 3 options: none, one, or all. You cannot
specify an arbitrary number.
On Fri, Jan 24, 2020 at 7:53 PM Pushkar Deole wrote:
Hi All,
I am a bit confused about min.insync.replicas and producer acks. Do these
two configurations achieve the same thing? E.g. if I set
min.insync.replicas to 2, can I achieve the same by setting producer acks to
2, so the producer won't get an ack until 2 replicas have received the message?
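They are not the same thing; the two settings only interact. A rough sketch (broker address and topic name are hypothetical, using the AdminClient API from newer client versions): min.insync.replicas is set on the topic or broker and is only enforced when the producer asks for acks=all, while the producer's acks setting itself accepts only 0, 1, or all/-1, so "acks=2" cannot be requested directly.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class MinIsrVersusAcks {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical

        try (AdminClient admin = AdminClient.create(props)) {
            // Topic-side durability: an acks=all write is rejected unless at
            // least 2 replicas are in sync at the time of the write.
            NewTopic topic = new NewTopic("orders", 3, (short) 3)
                    .configs(Collections.singletonMap(TopicConfig.MIN_IN_SYNC_REPLICAS_CONFIG, "2"));
            admin.createTopics(Collections.singleton(topic)).all().get();
        }

        // Producer side: acks accepts only "0", "1", or "all"/"-1".
        // With acks=1 the leader alone acknowledges, so min.insync.replicas is
        // not consulted for that write; only acks=all enforces it.
    }
}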
But if sends are not done in a blocking way (with .get()), how does acks matter?
-Original Message-
From: Matthias J. Sax
Sent: Saturday, November 17, 2018 12:15 AM
To: users@kafka.apache.org
Subject: Re: Producer throughput with varying acks=0,1,-1
If you enable acks, it's not fire and forget any longer.
-Matthias
On 11/16/18 1:00 AM, Abhishek Choudhary wrote:
Hi,
I have been doing some performance tests with a Kafka cluster for my project.
I have a question regarding the send call and the 'acks' property of the
producer. I observed the numbers below with the following invocation of the
send call. This is a simple fire-and-forget call.
producer.send(record);
The
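To make the relationship between the send style and acks explicit, here is a small sketch (broker address and topic name are hypothetical). Whether or not the caller blocks with .get(), the acks setting still decides when the broker considers the write acknowledged, and therefore when the future completes and the callback fires; fire-and-forget only means the caller never looks at that result.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class SendPatterns {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical
        props.put("acks", "all");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record = new ProducerRecord<>("test", "value");

            // 1. Fire and forget: highest apparent throughput; errors are only
            //    visible if the future or a callback is inspected later.
            producer.send(record);

            // 2. Synchronous: blocks this thread until the broker has acknowledged
            //    according to acks, which is where acks shows up as per-record latency.
            producer.send(record).get();

            // 3. Asynchronous with a callback: batching like (1), error visibility
            //    like (2); the callback runs once the acks requirement is met (or fails).
            producer.send(record, (RecordMetadata metadata, Exception exception) -> {
                if (exception != null) {
                    System.err.println("send failed: " + exception);
                }
            });
        }
    }
}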
Hi all,
I'm running into a weird slowness when using acks=all on Kafka 1.0.1.
I reproduced it on a 3-node cluster (each 4 cores/14GB RAM), using a topic
with replication factor 2.
I used the built-in kafka-producer-perf-test.sh tool with 1KB messages.
With all defaults, it can send 100K
producer thread, 3x asynchronous replication", I get about
550k records/sec which seems acceptable for the perf loss due to running on
Windows. However, when I set acks=all to try synchronous replication, I drop to
about 120k records/sec, which is a LOT worse than the numbers in the blog post.
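A note that may help frame numbers like these: with acks=all the leader holds each produce request until every follower currently in the ISR has fetched it, so small or synchronous batches pay the full replication round trip on every request. Producer batching settings are what such comparisons usually vary; the values below are purely illustrative, not taken from the poster's setup:
acks=all
batch.size=131072
linger.ms=10
max.in.flight.requests.per.connection=5
Larger batches and a few in-flight requests amortize the wait for the followers, which typically narrows (but does not remove) the gap to acks=1.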
response of the produce request to the producer
after it has been replicated (likely in memory) on N partition replicas.
Guozhang
On Sun, Feb 26, 2017 at 1:39 AM, Jiecxy <253441...@qq.com> wrote:
Hi guys,
Does Kafka send the acks response to the producer after flushing the messages
to disk, or does it just keep them in memory?
How does Kafka flush the messages? By calling a system call like fsync()?
Thanks
Chen
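The answer quoted above matches the broker defaults: the acknowledgement is tied to replication, not to fsync. Forced flushing is controlled by the log flush settings, which out of the box are effectively disabled so Kafka relies on replication and the OS page cache for durability. The defaults below are quoted from memory of the broker documentation, so treat them as a hedge rather than gospel:
log.flush.interval.messages=9223372036854775807 (i.e. effectively never force a flush by message count)
log.flush.interval.ms (unset by default; background flushing is left to the OS)
With these defaults the broker does not fsync per message; the operating system flushes dirty pages in the background.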
Hi - I'm trying to understand the expected behavior of the scenario in
which I have a producer with `acks=1` (i.e. partition leader acks only) and
I cleanly shut down a broker (via `KafkaServer#shutdown`).
I am running my test scenario with three brokers (0.10.1.1), with a default
replic
It is fixed on trunk and will be part of the upcoming 0.10.2.0 release.
On Fri, Feb 3, 2017 at 10:58 AM, Pascu, Ciprian (Nokia - FI/Espoo) <
ciprian.pa...@nokia.com> wrote:
Hi,
Can anyone tell me in which release this fix will be present?
https://github.com/apache/kafka/pull/1836
It is not present in the current release (0.10.1.1), which I don't quite
understand, because it was committed to trunk in November last year.
To which branch does the 0.10.1.1 tag
(either on a per-topic basis or globally
if that's an acceptable possible availability tradeoff for you).
-Ewen
On Fri, Dec 16, 2016 at 6:15 PM, Fang Wong wrote:
Hi,
What is the value of acks set for the Kafka internal topic __consumer_offsets?
I know the default replication factor for __consumer_offsets is 3, and we
are using version 0.9.0.1, and set min.insync.replicas = 2 in our
server.properties.
We noticed some partitions of __consumer_offsets have an ISR with
, 2016 at 3:38 PM, Malcolm, Brian (Centers of Excellence -
Integration) wrote:
--
Dustin Cote
confluent.io
I am using version 0.10.0 of Kafka and the documentation says the Producer acks
value can be [all, -1, 0, 1].
What is the difference between the all and -1 setting?
Hi Fang, take a look at the docs on KIP-1 for some background info on acks
policy:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-1+-+Remove+support+of+request.required.acks
-Dana
On Wed, Jan 20, 2016 at 3:50 PM, Fang Wong wrote:
We are using Kafka 0.8.2.1 and set acks to 2, and see the following warning:
sent a produce request with request.required.acks of 2, which is now
deprecated and will be removed in next release. Valid values are -1, 0 or
1. Please consult Kafka documentation for supported and recommended
configuration
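For anyone hitting the same warning: per KIP-1 (linked above), the intermediate ack counts were removed, and the supported way to get "at least N replicas" semantics is to combine a producer setting with a topic/broker setting (illustrative values):
acks=all                (producer)
min.insync.replicas=2   (topic or broker)
With this pair the leader waits for every in-sync replica and rejects the write when fewer than 2 replicas are in sync, which is the closest supported replacement for the old request.required.acks=2.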
"num.replica.fetchers": (1)
"replica.fetch.wait.max.ms<http://replica.fetch.wait.max.ms/>": (500),
"num.recovery.threads.per.data.dir": (1)
The producer properties we explicitly set are the following;
block.on.buffer.full=false
client.id<http://client.id/>=MZ
max.request
hin parenthesis):
>
> "num.replica.fetchers": (1)
> "replica.fetch.wait.max.ms": (500),
> "num.recovery.threads.per.data.dir": (1)
>
> The producer properties we explicitly set are the following;
>
> block.on.buffer.full=false
> client.id=MZ
> max.request.size=104857
configuration (within parentheses):
"num.replica.fetchers": (1)
"replica.fetch.wait.max.ms": (500),
"num.recovery.threads.per.data.dir": (1)
The producer properties we explicitly set are the following:
block.on.buffer.full=false
client.id=MZ
max.request.size=1048576
acks=all
retri
your cluster.
Thanks,
Prabhjot
On Sat, Nov 28, 2015 at 3:54 PM, Andreas Flinck <
andreas.fli...@digitalroute.com> wrote:
Great, thanks for the information! So it is definitely acks=all we want to go
for. Unfortunately we ran into a blocking issue in our production-like test
environment which we have not been able to find a solution for. So here it is,
ANY idea on how we could possibly find a solution is very
Hi Gwen,
How about min.isr.replicas property?
Is it still valid in the new version 0.9 ?
We could get 3 out of 4 replicas in sync if we set its value to 3. Correct?
Thanks,
Prabhjot
On Nov 28, 2015 10:20 AM, "Gwen Shapira" wrote:
In your scenario, you are receiving acks from 3 replicas while it is
possible to have 4 in the ISR. This means that one replica can be up to
4000 messages (by default) behind others. If a leader crashes, there is a 33%
chance this replica will become the new leader, thereby losing up to 4000
Hi all
The reason why I need to know is that we have seen an issue when using
acks=all, forcing us to quickly find an alternative. I leave the issue out of
this post, but will probably come back to that!
My question is about acks=all and the min.insync.replicas property. Since we have
found a
Hi Federico,
What is your replica.lag.time.max.ms?
When acks=-1, the ProducerResponse won't return until all the brokers in the ISR
get the message. During controlled shutdown, the shutting down broker is
doing a lot of leader migration and could slow down on fetching data. The
broker won't
Hi,
I have a few Java async producers sending data to a 4-node Kafka cluster
(version 0.8.2) containing a few thousand topics, all with replication factor
2. When I use acks=1 and trigger a controlled shutdown + restart on one
broker, the producers will send data to the new leader, reporting a very
5,2,3 Isr: 5,3,2
Topic: tops1 Partition: 2 Leader: 1 Replicas: 1,3,4 Isr: 4,1,3
Topic: tops1 Partition: 3 Leader: 2 Replicas: 2,4,5 Isr: 4,2,5
This is the output of the kafka-producer-perf-test.sh for request-num-acks
1 and request-num-acks -1:-
root@x.x.x.x:~# date;time kafka-producer-pe
will be removed in next release. Valid values are -1, 0 or 1. Please consult
Kafka documentation for supported and recommended configuration
I have a particular use case where I want replication to be acknowledged by
exactly (replicationFactor - 1) brokers, or the message publish should fail if
that many acks are not possible.
regards
seem to solve the problem.
Regards,
Jiang
From: users@kafka.apache.org At: Jul 19 2014 00:06:52
To: JIANG WU (PRICEHISTORY) (BLOOMBERG/ 731 LEX -), users@kafka.apache.org
Subject: Re: message loss for sync producer, acks=2, topic replicas=3
Hi Jiang,
One thing you can try is to set acks=-1, and set
replica.lag.max.messages properly such that it will not kick all follower
replicas out immediately under your produce load. Then if one of the follower
replicas is lagging and the other is not, this one will be dropped out of
ISR and when
re is used as
"infinite".
we use replica.lag.max.messages="infinite" together with acks=-1. In this
setting, if all brokers are in sync initially, and only one broker is down
afterwards, then there is no message loss, and producers and consumers will not
be blocked.
The abov
closely will be dropped out of ISR more quickly.
Guozhang
On Wed, Jul 16, 2014 at 5:44 AM, Jiang Wu (Pricehistory) (BLOOMBERG/ 731
LEX -) wrote:
only after the message is committed.
Thanks,
Jun
On Wed, Jul 16, 2014 at 5:44 AM, Jiang Wu (Pricehistory) (BLOOMBERG/ 731
LEX -) wrote:
Guozhong,
So this is the cause of message loss in my test where acks=2 and replicas=3:
At one moment all 3 replicas, leader L, followers F1 and F2 are in ISR. A
publisher sends a message m to L. F1 fetches m. Both L and F1 acknowledge m so
the send() is successful. Before F2 fetches m, L is
That could be the cause, and it can be verified by changing the acks to -1
and checking the data loss ratio then.
Guozhang
On Tue, Jul 15, 2014 at 12:49 PM, Jiang Wu (Pricehistory) (BLOOMBERG/ 731
LEX -) wrote:
Guozhang, my coworker came up with an explanation: at one moment the leader L
and two followers F1, F2 are all in ISR. The producer sends a message m1 and
receives acks from L and F1. Before the message is replicated to F2, L is down.
In the following leader election, F2, instead of F1, becomes
PartitionCount:1 ReplicationFactor:3 Configs:retention.bytes=100
Thanks,
Jiang
From: users@kafka.apache.org At: Jul 15 2014 13:59:03
To: JIANG WU (PRICEHISTORY) (BLOOMBERG/ 731 LEX -), users@kafka.apache.org
Subject: Re: message loss for sync producer, acks=2, topic replicas=3
Guozhang,
I'm testing on 0.8.1.1; just kill pid, no -9.
Regards,
Jiang
From: users@kafka.apache.org At: Jul 15 2014 13:27:50
To: JIANG WU (PRICEHISTORY) (BLOOMBERG/ 731 LEX -), users@kafka.apache.org
Subject: Re: message loss for sync producer, acks=2, topic replicas=3
Hello Jiang,
Which version of Kafka are you using, and did you kill the broker with -9?
Hi,
I observed some unexpected message loss in a Kafka fault-tolerance test. In the
test, a topic with 3 replicas is created. A sync producer with acks=2 publishes
to the topic. A consumer consumes from the topic and tracks message ids. During
the test, the leader is killed. Both producer and
multiple partitions and replication factor 2
> - run producer performance script on 4 VMs in a sync mode with 2 acks to send
> 1M messages
>
>
> -Original Message-
> From: Joel Koshy [mailto:jjkosh...@gmail.com]
> Sent: Wednesday, February 12, 2014 3:40 PM
> To: users@kaf
Thanks Joel!
I found this configuration setting in "Producer Configs". I guess it means each
producer sets this parameter as part of connection settings, like a number of
acks.
I checked the information in Zookeeper and found out that 2 of the brokers are
missing. The VMs with the
I am running a test deployment of Kafka 0.8. When I configure sync producers to
expect 2 acks for each "write" request, some of the producers get stuck. It
looks like the broker's response is not delivered back.
This happened with the original Kafka performance tools and with a test tool
built using a custom
From: Jun Rao [mailto:jun...@gmail.com]
Sent: Monday, February 3, 2014 9:11 PM
To: users@kafka.apache.org
Subject: Re: Kafka performance test: "--request-num-acks -1" kills throughput
Michael,
Your understanding is mostly correct. For 2, the follower will issue another
fetch request as soon as it finishes processing the response of the previous
fetch (by adding data, if any,
replica.fetch.wait.max.ms | throughput for acks=-1 | throughput for acks=1
--------------------------+------------------------+----------------------
50                        | 1311                   |
replica.lag.time.max.ms=1
replica.lag.max.messages=4000
In most cases the tests were executed with "out-of-box" settings, which
don't change "replica" configuration.
We are running these tests on very weak machines. If absolute throughput
numbers are not as high as in other people's tests, that's understandable. The
main concern is why throughput drops 4-10 times when the number of expected acks
is not 1.
Should we wait for newer versions of Ka
bin/kafka-producer-perf-test.sh --broker-list 10.0.0.8:9092,10.0.0.10:9092
--compression-codec 0 --message-size 1024 --request-num-acks -1 --sync
--messages 10 -threads 1 --show-detailed-stats --reporting-interval 1000
--topics d2111 | grep -v "at "
I assume producer uses default timeout of 3000ms in my tests.
I ran a few data processing ope
bin/kafka-producer-perf-test.sh --broker-list 10.0.0.8:9092,10.0.0.10:9092
--compression-codec 0 --message-size 1024 --request-num-acks 1 --sync
--messages 10 -threads 10 --show-detailed-stats --reporting-interval 1000
--topics d1 | grep -v "at "
Results of 4
producers even with global locks would not
interfere with each other. Changing a single configuration parameter, the number
of required acks, consistently reduced system throughput in all tests. And this
drop in system throughput is too big to ignore.
Is there a global lock on the server side that contro
--topic d1
Commands to run a test producer looked like this:
bin/kafka-producer-perf-test.sh --broker-list 10.0.0.8:9092,10.0.0.10:9092
--compression-codec 0 --message-size 1024 --request-num-acks 1 --sync
--messages 10 -threads 10 --show-detailed-stats --reporting-interval 1000
that can scale. Kafka looks like the right system for this role.
I am running performance tests on multiple platforms: Linux and Windows.
For test purposes I create topics with 2 replicas and multiple partitions.
In all deployments, running test producers that wait for both replicas' acks
practically kills Kafka throughput. For example, on the following deployment on
Linux machines: 2 Kafka brokers, 1 Zookeeper node, 4 client hosts to create
load, 4 topics with 10 partitions each and 2 replicas
- running tests with "