01:13, M. Manna wrote:
> Hello,
>
> I have recently had some message loss for a consumer group under Kafka
> 2.3.0. The client I am using is still on 2.2.0. Here is how the problem
> can be reproduced:
>
> 1) The messages were sent to 4 consumer groups; 3 of them were live and 1
> was down.
> 2) When the consumer group came back online ...
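Not part of the original report, but when a group that was offline appears to
skip messages after rejoining, a quick first check is to compare its committed
offsets against the log end offsets. A minimal sketch with the Java
AdminClient; the bootstrap address and group id below are placeholders:

  import java.util.Map;
  import java.util.Properties;
  import org.apache.kafka.clients.admin.AdminClient;
  import org.apache.kafka.clients.admin.AdminClientConfig;
  import org.apache.kafka.clients.consumer.OffsetAndMetadata;
  import org.apache.kafka.common.TopicPartition;

  public class CheckGroupOffsets {
      public static void main(String[] args) throws Exception {
          Properties props = new Properties();
          props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
          try (AdminClient admin = AdminClient.create(props)) {
              Map<TopicPartition, OffsetAndMetadata> offsets =
                  admin.listConsumerGroupOffsets("my-consumer-group")              // placeholder group id
                       .partitionsToOffsetAndMetadata()
                       .get();
              offsets.forEach((tp, om) ->
                  System.out.println(tp + " committed offset = " + om.offset()));
          }
      }
  }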
Thanks,
Eno

On 18 Nov 2016, at 13:49, Ryan Slade wrote:
> Hi,
>
> I'm trialling Kafka Streams for a large stream processing job, however
> I'm seeing message loss even in the simplest scenarios.
>
> I've tried to boil it down to the simplest scenario where I see loss,
> which is the following:
>
> 1. Ingest messages from an input stream (String, St...
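For context (not from the original mail): the simplest scenario described,
consuming (String, String) records from one topic and forwarding them to
another, looks roughly like the sketch below with the current Kafka Streams
API. The application id, bootstrap address, and topic names are placeholders.

  import java.util.Properties;
  import org.apache.kafka.common.serialization.Serdes;
  import org.apache.kafka.streams.KafkaStreams;
  import org.apache.kafka.streams.StreamsBuilder;
  import org.apache.kafka.streams.StreamsConfig;
  import org.apache.kafka.streams.kstream.KStream;

  public class PassThrough {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put(StreamsConfig.APPLICATION_ID_CONFIG, "passthrough-test");   // placeholder
          props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
          props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
          props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

          StreamsBuilder builder = new StreamsBuilder();
          KStream<String, String> input = builder.stream("input-topic");        // placeholder topic
          input.to("output-topic");                                             // placeholder topic

          KafkaStreams streams = new KafkaStreams(builder.build(), props);
          streams.start();
          Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
      }
  }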
... one, you won't see any error in the producer, because it succeeded in
sending it to the broker. You will most likely see some error on the broker,
because it is not the leader.

On Fri, Jun 17, 2016 at 5:19 AM Gulia, Vikram wrote:
> Hi Users, I am facing message loss while using kafka v 0.8.2.2. Please see
> details below and help me if you can.
>
> Issue: 2 messages produced to the same partition one by one – the Kafka
> producer returns the same offset back, which means the message produced
> earlier is lost. <http://stackoverflow.com/questi...
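Not from the original thread, but a common way to see which offset each send
actually got is to log the RecordMetadata from the send callback. A minimal
sketch with the current Java producer; topic name and bootstrap address are
placeholders, and the original report used the 0.8.2.2 client, whose API
differs:

  import java.util.Properties;
  import org.apache.kafka.clients.producer.KafkaProducer;
  import org.apache.kafka.clients.producer.ProducerConfig;
  import org.apache.kafka.clients.producer.ProducerRecord;
  import org.apache.kafka.common.serialization.StringSerializer;

  public class OffsetLogger {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");      // placeholder
          props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
          props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
          props.put(ProducerConfig.ACKS_CONFIG, "all"); // wait for all in-sync replicas

          try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
              for (int i = 0; i < 2; i++) {
                  ProducerRecord<String, String> record =
                      new ProducerRecord<>("my-topic", "key", "message-" + i);       // placeholder topic
                  producer.send(record, (metadata, exception) -> {
                      if (exception != null) {
                          exception.printStackTrace();
                      } else {
                          // Two successful sends to the same partition should report distinct offsets.
                          System.out.println("partition=" + metadata.partition()
                                  + " offset=" + metadata.offset());
                      }
                  });
              }
              producer.flush();
          }
      }
  }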
... that consume all messages from one Kafka cluster.
We found that the MessagesPerSec metric started to diverge after some time.
One of them matches the MessagesInPerSec metric from the Kafka broker,
while the other is lower than the broker metric and appears to have some
message loss.
Both of them have the same OwnedPartitionsCount.
Both of them have 0 MaxLag.
How is that possible? Anything we should look at? Is the MaxLag metric not
telling the truth?
Thanks,
Allen
... and replay them when the brokers recover.

On Fri, Jun 26, 2015 at 6:11 AM bit1...@163.com wrote:
> Can someone explain this? Thanks!
>
> From: bit1...@163.com
> Date: 2015-06-25 11:57
> To: users
> Subject: Message loss due to zookeeper ensemble doesn't work
>
> Hi,
> I have the offsets saved in ZooKeeper. Because the ZooKeeper quorum didn't
> work for a short time (the leader was down and a new leader election was in
> progress), there is a chance that an offset doesn't get written to
> ZooKeeper, which will lose data.
> I would ask whether Kafka provides some mechanism for this kind of ...
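Not from the original thread: the usual mitigation is to commit offsets
synchronously and treat a failed commit as a signal that the records will be
reprocessed rather than lost. A minimal sketch with the current Java consumer
(which stores offsets in Kafka rather than ZooKeeper, unlike the setup
discussed above); topic, group id, and address are placeholders:

  import java.time.Duration;
  import java.util.Collections;
  import java.util.Properties;
  import org.apache.kafka.clients.consumer.ConsumerConfig;
  import org.apache.kafka.clients.consumer.ConsumerRecord;
  import org.apache.kafka.clients.consumer.ConsumerRecords;
  import org.apache.kafka.clients.consumer.KafkaConsumer;
  import org.apache.kafka.common.serialization.StringDeserializer;

  public class SyncCommitConsumer {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
          props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                  // placeholder
          props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");           // commit explicitly
          props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
          props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

          try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
              consumer.subscribe(Collections.singletonList("my-topic"));          // placeholder
              while (true) {
                  ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                  for (ConsumerRecord<String, String> record : records) {
                      process(record); // application logic
                  }
                  // Synchronous commit: if this throws, the offsets were not stored and the
                  // records will be re-delivered (at-least-once) instead of being skipped.
                  consumer.commitSync();
              }
          }
      }

      private static void process(ConsumerRecord<String, String> record) {
          System.out.println(record.offset() + ": " + record.value());
      }
  }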
...@gmail.com]
Sent: Tuesday, April 28, 2015 5:07 AM
To: users@kafka.apache.org
Subject: Kafka - preventing message loss

I am trying to set up a cluster where messages should never be lost once
they are published. Say I have 3 brokers, I configure the replicas to be 3
as well, and I consider the maximum number of failures to be 1; then I can
achieve the above requirement. But when I post a message, how do I prevent
Kafka from accept...
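A sketch of the combination usually used for this goal (not from the original
mail; the names are standard Kafka settings from 0.8.2-era and later releases,
and the values are illustrative):

  # topic: created with --replication-factor 3
  # broker/topic setting: a write needs at least 2 in-sync replicas
  min.insync.replicas=2
  # broker/topic setting: never elect an out-of-sync replica as leader
  unclean.leader.election.enable=false
  # producer setting: wait for all in-sync replicas to acknowledge
  acks=all

With acks=all and min.insync.replicas=2, an acknowledged message exists on at
least two brokers, so losing any single broker cannot lose acknowledged data.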
... seem to solve the problem.
Regards,
Jiang

From: users@kafka.apache.org At: Jul 19 2014 00:06:52
To: JIANG WU (PRICEHISTORY) (BLOOMBERG/ 731 LEX -), users@kafka.apache.org
Subject: Re: message loss for sync producer, acks=2, topic replicas=3

Hi Jiang,

One thing you can try is to set acks=-1, and set replica.lag.max.messages
properly such that it will not kick all follower rep...
" together with acks=-1. In this
> setting, if all brokers are in sync initially, and only one broker is down
> afterwards,then there is no message loss, and producers and consumers will
> not be blocked.
>
> The above is the basic requirment to a fault tolerant system. In more
&
re is used as
"infinite".
we use replica.lag.max.messages="infinite" together with acks=-1. In this
setting, if all brokers are in sync initially, and only one broker is down
afterwards,then there is no message loss, and producers and consumers will not
be blocked.
The abov
We tested acks=-1 with replica.lag.max.messages=1. In this config no
message loss was found.
This is the only config we found that satisfies 1. no message loss and 2.
the service stays available when a single broker is down. Are there other
configs that can achieve the same, or stronger, con...
... closely will be dropped out of ISR more quickly.

Guozhang

... only after the message is committed.

Thanks,
Jun

On Wed, Jul 16, 2014 at 5:44 AM, Jiang Wu (Pricehistory) (BLOOMBERG/ 731
LEX -) wrote:
> Guozhang,
>
> So this is the cause of message loss in my test where acks=2 and
> replicas=3:
> At one moment all 3 replicas, the leader L and followers F1 and F2, are in
> the ISR. A publisher sends a message m to L. F1 fetches m. Both L and F1
> acknowledge m, so the send() is successful. Before F2 fetches m, L is ...
When acks=-1 and the number of publisher threads is high, it always happens
that only the leader remains in the ISR, and shutting down the leader then
causes message loss.
The leader election code shows that the new leader will be the first alive
broker in the ISR list. So it's possible the new l...
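To illustrate the election rule being referred to, here is a paraphrase of
the described behaviour (illustration only, not the actual Kafka controller
code):

  import java.util.List;
  import java.util.Optional;
  import java.util.Set;

  public class LeaderChoice {
      // Pick the first ISR member that is still alive. In the acks=2 scenario
      // above, a replica that has not yet fetched the latest acknowledged
      // message can still be in the ISR; if it is elected, that message is lost.
      static Optional<Integer> chooseLeader(List<Integer> isr, Set<Integer> liveBrokers) {
          return isr.stream().filter(liveBrokers::contains).findFirst();
      }

      public static void main(String[] args) {
          System.out.println(chooseLeader(List.of(2, 1, 3), Set.of(1, 3))); // Optional[1]
      }
  }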
... the leader, and loses m1 somehow.
Could that be the root cause?
Thanks,
Jiang

From: users@kafka.apache.org At: Jul 15 2014 15:05:25
To: users@kafka.apache.org
Subject: Re: message loss for sync producer, acks=2, topic replicas=3

Guozhang,

Please find the config below:

Producer:
props.put...
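The actual settings are cut off in the archive. For readers following along,
a 0.8-era sync producer with acks=2 is typically configured along these lines
(illustration only, not the poster's configuration; broker addresses and the
topic name are placeholders):

  import java.util.Properties;
  import kafka.javaapi.producer.Producer;
  import kafka.producer.KeyedMessage;
  import kafka.producer.ProducerConfig;

  public class OldSyncProducerExample {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put("metadata.broker.list", "broker1:9092,broker2:9092,broker3:9092"); // placeholders
          props.put("serializer.class", "kafka.serializer.StringEncoder");
          props.put("producer.type", "sync");      // synchronous send, as in the thread
          props.put("request.required.acks", "2"); // the acks=2 setting under discussion

          Producer<String, String> producer = new Producer<>(new ProducerConfig(props));
          producer.send(new KeyedMessage<>("test-topic", "key", "hello"));             // placeholder topic
          producer.close();
      }
  }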
PartitionCount: 1  ReplicationFactor: 3  Configs: retention.bytes=1000000
Thanks,
Jiang

From: users@kafka.apache.org At: Jul 15 2014 13:59:03
To: JIANG WU (PRICEHISTORY) (BLOOMBERG/ 731 LEX -), users@kafka.apache.org
Subject: Re: message loss for sync producer, acks=2, topic replicas=3
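The line above is output from the topic describe tool; for reference, the
usual 0.8.x invocation, with placeholder ZooKeeper address and topic name:

  bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test-topic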
Guozhang,
I'm testing on 0.8.1.1; just kill <pid>, no -9.
Regards,
Jiang

From: users@kafka.apache.org At: Jul 15 2014 13:27:50
To: JIANG WU (PRICEHISTORY) (BLOOMBERG/ 731 LEX -), users@kafka.apache.org
Subject: Re: message loss for sync producer, acks=2, topic replicas=3

Hello Jiang,
Which version of Kafka are you using, and did you kill the broker with -9?

Guozhang

On Tue, Jul 15, 2014 at 9:23 AM, Jiang Wu (Pricehistory) (BLOOMBERG/ 731
LEX -) wrote:
> Hi,
> I observed some unexpected message loss in a Kafka fault-tolerance test.
> In the test, a topic with 3 replicas is created. A sync producer with
> acks=2 publishes to the topic. A consumer consumes from the topic and
> tracks message ids. During the test, the leader is killed. Both producer
> and ...
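The original test code is not shown; a gap check of the kind described, where
the consumer tracks message ids, can be sketched with the current Java
consumer as follows. It assumes each record value is a numeric id; the topic,
group id, bootstrap address, and idle-stop heuristic are all placeholders.

  import java.time.Duration;
  import java.util.Collections;
  import java.util.Properties;
  import java.util.TreeSet;
  import org.apache.kafka.clients.consumer.ConsumerConfig;
  import org.apache.kafka.clients.consumer.ConsumerRecord;
  import org.apache.kafka.clients.consumer.ConsumerRecords;
  import org.apache.kafka.clients.consumer.KafkaConsumer;
  import org.apache.kafka.common.serialization.StringDeserializer;

  public class GapCheck {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
          props.put(ConsumerConfig.GROUP_ID_CONFIG, "gap-check");               // placeholder
          props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
          props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
          props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

          TreeSet<Long> seen = new TreeSet<>();
          try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
              consumer.subscribe(Collections.singletonList("test-topic"));      // placeholder topic
              int emptyPolls = 0;
              while (emptyPolls < 5) {                                          // stop after ~5s with no new records
                  ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                  if (records.isEmpty()) { emptyPolls++; continue; }
                  emptyPolls = 0;
                  for (ConsumerRecord<String, String> r : records) {
                      seen.add(Long.parseLong(r.value()));                      // assumes the value is the id
                  }
              }
          }
          // If nothing was lost, the ids form a dense range from first to last.
          long expected = seen.isEmpty() ? 0 : seen.last() - seen.first() + 1;
          System.out.println("received " + seen.size() + " of " + expected + " ids");
      }
  }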
Also, if you use HEAD, you can create more partitions at runtime; you just
need a dynamic partitioner class, I think.

On Thu, Nov 14, 2013 at 7:23 AM, Neha Narkhede wrote:
> There is no way to delete topics in Kafka yet. You can add partitions to
> existing topics, but you may have to use 0.8 HEAD since we have fixed a few
> bugs on the consumer.
> You can read about adding partitions here:
> https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-5.A

Hi team,
We are using beta1. I am going to delete all topics and create them with
more partitions, but I don't want to lose any messages.
Assume the consumers are online all the time for the following steps. The
consumers' auto.offset.reset is set to largest.
1. Stop publishing to the brokers.
I agree with you. If we include that knob, applications can choose their
consistency vs. availability tradeoff according to their respective
requirements. I will file a JIRA for this.
Thanks,
Neha

On Thu, Aug 22, 2013 at 2:10 PM, Scott Clasen wrote:
> +1 for that knob on a per-topic basis; choosing consistency over
> availability would open Kafka to more use cases, no?
> Sent from my iPhone

On Aug 22, 2013, at 1:59 PM, Neha Narkhede wrote:

Scott,
Kafka replication aims to guarantee that committed writes are not lost. In
other words, as long as the leader can be transitioned to a broker that was
in the ISR, no data will be lost. For increased availability, if there are
no other brokers in the ISR, we fall back to electing a broker that i...
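For readers finding this thread later: the per-topic knob discussed here
eventually shipped as unclean.leader.election.enable. With a newer broker it
can be disabled for a single topic along these lines (tool syntax from later
releases; hostname and topic name are placeholders):

  bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
      --entity-type topics --entity-name my-topic \
      --add-config unclean.leader.election.enable=false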
So it looks like there is a Jepsen post coming on Kafka 0.8 replication,
based on this that's circulating on Twitter: https://www.refheap.com/17932/raw
Understanding that Kafka isn't designed particularly to be partition
tolerant, the result is not completely surprising.
But my question is, is there s...
Hi Boris,
In Kafka 0.7, the producer does not get any ACK from the server, so the
protocol is fire-and-forget. What could happen is that if a broker is
shutting down, the messages might still live in the producer's socket
buffer, and the server can shut down before getting a chance to read the
producer's ...
Hi All,
We encountered an issue with message loss in Kafka:
We have Kafka 0.7.0 installed in cluster mode on three nodes (one partition
per node configured).
We are using the Kafka Java API to send batches of messages (1,000 messages
in each) in sync mode.
After the last batch is sent, we close the producer.