Hi Jun / Kafka Team,
Do we have any solution for this issue?
"During zookeeper re-init, the kafka broker truncates messages and ends up
losing records"
I'm OK with duplicate messages being stored instead of being dropped.
Is there any configuration in kafka where the follower broker replicates these
Hi Jun,
Yes, the data is lost during leader broker failure.
But the leader broker failed due to zookeeper session expiry.
The GC logs don't show any errors/warnings during this period.
It's not easy to reproduce: during a long run (>12 hrs) with a 30k msg/sec load
balanced across 96 partitions, some time in between
Mazhar,
Let's first confirm if this is indeed a bug. As I mentioned earlier, it's
possible to have message loss with ack=1 when there are (leader) broker
failures. If this is not the case, please file a jira and describe how to
reproduce the problem. Also, it would be useful to know if the message
Hi Jun,
In my earlier runs, I had enabled the delivery report facility (with and
without offset report) provided by librdkafka.
The producer received successful delivery reports for all the messages sent,
yet the messages were still lost.
As you mentioned, the producer has nothing to do with this loss of
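For reference, a minimal sketch of the librdkafka delivery-report hookup being
described (assuming a librdkafka build that provides
rd_kafka_conf_set_dr_msg_cb; the callback body is illustrative). Note that with
ack=1 a "success" report only means the leader wrote the message; it can still
be lost if the leader dies before the followers fetch it:

    #include <stdio.h>
    #include <librdkafka/rdkafka.h>

    /* Invoked once per produced message, after the broker acks or fails it. */
    static void dr_msg_cb(rd_kafka_t *rk,
                          const rd_kafka_message_t *m, void *opaque) {
        if (m->err)
            fprintf(stderr, "delivery failed: %s\n",
                    rd_kafka_err2str(m->err));
        else
            fprintf(stderr, "delivered: partition %d offset %lld\n",
                    m->partition, (long long)m->offset);
    }

    /* during producer setup: */
    rd_kafka_conf_t *conf = rd_kafka_conf_new();
    rd_kafka_conf_set_dr_msg_cb(conf, dr_msg_cb);
    /* reports are served from the rd_kafka_poll() loop */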
Mazhar,
With ack=1, whether you lose messages or not is not deterministic. It
depends on the timing of when the broker receives/acks a message, when the
follower fetches the data, and when the broker fails. So, it's possible that
you got lucky in one version and unlucky in another.
Thanks,
Jun
On Thu, Aug 18,
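For librdkafka users following the thread, the producer-side setting Jun
refers to maps to the topic-level request.required.acks property; a minimal
sketch (the topic name is a placeholder and rk is an existing producer
handle):

    char errstr[512];
    rd_kafka_topic_conf_t *tconf = rd_kafka_topic_conf_new();
    /* -1 = wait for all in-sync replicas (the "all" of newer clients) */
    if (rd_kafka_topic_conf_set(tconf, "request.required.acks", "-1",
                                errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK)
        fprintf(stderr, "conf failed: %s\n", errstr);
    rd_kafka_topic_t *rkt = rd_kafka_topic_new(rk, "mytopic", tconf);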
Hi Jun,
Thanks for the clarification, I'll give it a try with ack=-1 (in the producer).
However, I fell back to an older version of kafka (*kafka_2.10-0.8.2.1*),
and I don't see this issue (loss of messages) there.
It looks like kafka_2.11-0.9.0.1 has a bug in replication.
Thanks,
Regards,
Mazhar Shaikh
Mazhar,
There is probably a misunderstanding. Ack=-1 (or all) doesn't mean waiting
for all replicas; it means waiting for all replicas that are in sync. So,
if a replica is down, it is removed from the in-sync replica set, which
allows the producer to continue with fewer replicas.
For the conn
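The broker-side companion to ack=-1 is min.insync.replicas, which bounds how
far the ISR may shrink before produce requests are rejected instead of being
acked on a lone leader. A sketch for server.properties, assuming a
replication factor of 3 (values illustrative):

    # server.properties
    # With acks=all, reject writes once fewer than 2 replicas are in sync:
    min.insync.replicas=2
    # Never elect an out-of-sync replica as leader; trades availability
    # for not losing acked messages:
    unclean.leader.election.enable=false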
I see a bug raised for the same issue, which is still open.
Do we have any solution for this?
https://issues.apache.org/jira/browse/KAFKA-3916
http://mail-archives.apache.org/mod_mbox/kafka-dev/201606.mbox/%3cjira.12984498.146714867.10722.1467148737...@atlassian.jira%3E
Regards,
Mazhar Shaikh
Hi Jun,
Setting acks to -1 may solve this issue, but it will fill up the producer
buffer under load-test traffic, resulting in failures and dropped messages
on the client (producer) side.
Hence, this will not actually solve the problem.
I need to fix this from the kafka broker side, so that there is no impact
on the producer.
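One client-side way to absorb the extra latency of ack=-1 is to give
librdkafka more queueing headroom and retries, so transient broker outages
surface as delay rather than drops; a sketch using standard librdkafka global
properties (values illustrative):

    char errstr[512];
    rd_kafka_conf_t *conf = rd_kafka_conf_new();
    /* let more messages queue locally while waiting for acks */
    rd_kafka_conf_set(conf, "queue.buffering.max.messages", "500000",
                      errstr, sizeof(errstr));
    /* retry transient failures instead of dropping */
    rd_kafka_conf_set(conf, "message.send.max.retries", "10",
                      errstr, sizeof(errstr));
    rd_kafka_conf_set(conf, "retry.backoff.ms", "500",
                      errstr, sizeof(errstr));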
Yes, you can try setting it to -1 in 0.8.1, which is the equivalent of
"all" in 0.9 and above.
Thanks,
Jun
On Wed, Aug 17, 2016 at 8:32 AM, Mazhar Shaikh
wrote:
> Hi Jun,
>
> I'm using default configuration (ack=1),
> changing it to all or 2 will not help, as the producer queue will be
> exhau
Hi Jun,
I'm using the default configuration (ack=1).
Changing it to all or 2 will not help, as the producer queue will be
exhausted if any kafka broker goes down for a long time.
Thanks.
Regards,
Mazhar Shaikh.
On Wed, Aug 17, 2016 at 8:11 PM, Jun Rao wrote:
> Are you using acks=1 or acks=all in
Are you using acks=1 or acks=all in the producer? Only the latter
guarantees acked messages won't be lost after leader failure.
Thanks,
Jun
On Wed, Aug 10, 2016 at 11:41 PM, Mazhar Shaikh
wrote:
> Hi Kafka Team,
>
> I'm using kafka (kafka_2.11-0.9.0.1) with librdkafka (0.8.1) API for
> produce
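For context, the 0.8.x-era librdkafka produce path looks roughly like this
(broker list and topic are placeholders; error checks elided):

    #include <string.h>
    #include <librdkafka/rdkafka.h>

    char errstr[512];
    rd_kafka_conf_t *conf = rd_kafka_conf_new();
    rd_kafka_t *rk = rd_kafka_new(RD_KAFKA_PRODUCER, conf,
                                  errstr, sizeof(errstr));
    rd_kafka_brokers_add(rk, "broker1:9092,broker2:9092");
    rd_kafka_topic_t *rkt = rd_kafka_topic_new(rk, "test", NULL);
    const char *msg = "hello";
    /* RD_KAFKA_MSG_F_COPY: librdkafka takes a copy of the payload */
    rd_kafka_produce(rkt, RD_KAFKA_PARTITION_UA, RD_KAFKA_MSG_F_COPY,
                     (void *)msg, strlen(msg), NULL, 0, NULL);
    rd_kafka_poll(rk, 0);  /* serves delivery-report callbacks */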
Hi Tom,
Thank you for responding, and sorry for the delay.
I'm running with all the default configuration provided by kafka.
I don't have these config elements in my server.properties file.
However, the default values specified in the kafka documentation are as below (
http://kafka.apache.org/documentat
Are you running with unclean leader election on? Are you setting
min.insync.replicas at all?
Can you attach controller and any other logs from the brokers you have?
They would be crucial in debugging this kind of issue.
Thanks
Tom Crayford
Heroku Kafka
On Thursday, 11 August 2016, Mazhar Shaik
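For reference, the 0.9.x broker defaults for the settings Tom asks about
appear to be the following (per the Kafka documentation; worth re-checking
against the exact broker version):

    unclean.leader.election.enable=true
    min.insync.replicas=1
    # i.e. out of the box an out-of-sync replica can become leader, and
    # acks=all can degenerate to a single in-sync replica; both permit
    # loss of acked messages after a leader failure.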