[ https://issues.apache.org/jira/browse/KAFKA-1633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14138603#comment-14138603 ]
gautham varada commented on KAFKA-1633:
---------------------------------------

Let me explain this a bit better. Say my JMeter app sends 100 events to the Kafka broker, and the brokers ack 60 events before both are killed. Should I expect 60 events in the Kafka logs? If yes, then I don't see that behaviour: the count is always less than 60 when the producer retry count is set to its default value of 3. When I repeat the same test with the retry count set to 1, I don't lose any messages.

> Data loss if broker is killed
> -----------------------------
>
>                 Key: KAFKA-1633
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1633
>             Project: Kafka
>          Issue Type: Bug
>          Components: producer
>    Affects Versions: 0.8.1.1
>         Environment: centos 6.3, open jdk 7
>            Reporter: gautham varada
>            Assignee: Jun Rao
>
> We have a 2-node Kafka cluster, and we experienced data loss when we did a
> kill -9 on the brokers. We also found a workaround that prevents this loss.
> Replication factor: 2, 4 partitions
> Steps to reproduce:
> 1. Create a 2-node cluster with replication factor 2 and num partitions 4
> 2. Use JMeter to pump events
> 3. Use the Kafka web console to inspect the log size after the test
> During the test, we simultaneously killed the brokers using kill -9, and when
> we tallied the metrics reported by JMeter against the size we observed in the
> web console, we had lost tons of messages.
> We went back and set the producer retry count to 1 instead of the default 3,
> repeated the above tests, and did not lose a single message.
> We repeated the above tests with the producer retry count set to 3 and 1
> against a single broker, and we observed data loss when the retry count was 3
> and no loss when it was 1.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
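For readers reproducing this, the settings being discussed map to the old (Scala) producer configuration shipped with Kafka 0.8.x. A minimal sketch of the reporter's workaround, assuming the standard 0.8 producer property names (`request.required.acks` and `message.send.max.retries`); broker addresses are placeholders:

```properties
# producer.properties sketch for the 0.8.x Scala producer (hypothetical host names)
metadata.broker.list=broker1:9092,broker2:9092

# Wait for the partition leader to ack each send before considering it successful.
request.required.acks=1

# Default is 3; the reporter observed no message loss with a single retry.
message.send.max.retries=1
```

Note that retries are normally expected to cause duplicates rather than loss, which is what makes the behaviour reported here look like a bug rather than a configuration issue.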