The following is the relevant comment in the code. Do you see any ZK session
expiration in the broker log around that time?
// NOTE: the above write can fail only if the current controller lost its zk session
// and the new controller took over and initialized this partition. This can h
Not sure what happened. It could be that the broker received messages with
offsets 5 to 10 at one point, but lost them later during an unclean leader
election. If that is the case, you will see something like "No broker in ISR is
alive for %s. Elect leader %d from live brokers %s. There's potential data
loss."
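If you want to confirm it, you could compare the earliest and latest offsets the
new leader still has for that partition. A rough sketch below, using the newer
Java consumer API rather than the 0.8.1 client; the broker address and the
topic/partition are placeholders:

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class OffsetRangeCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.deserializer", ByteArrayDeserializer.class.getName());
        props.put("value.deserializer", ByteArrayDeserializer.class.getName());

        TopicPartition tp = new TopicPartition("topicA", 0); // placeholder partition

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            // Ask the current leader for its earliest and latest offsets. If messages
            // 5 to 10 were lost in an unclean election, the latest offset will be
            // smaller than what the producer observed before the leader change.
            Map<TopicPartition, Long> earliest = consumer.beginningOffsets(Collections.singletonList(tp));
            Map<TopicPartition, Long> latest = consumer.endOffsets(Collections.singletonList(tp));
            System.out.printf("%s: offsets [%d, %d)%n", tp, earliest.get(tp), latest.get(tp));
        }
    }
}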
I was thinking more about this. Successfully writing a block of messages to HDFS
represents that atomic commit, downstream. However, it is not a two- or three-phase
transaction with rollback. The issue is the difference in scope between a
downstream aggregate commit and an exactly-once upstream commit. I s
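To make the downstream side concrete, the pattern I have in mind is roughly: write
the block to a temporary file and use a single HDFS rename as the commit, with the
offset range encoded in the file name so that replaying the same range is a no-op.
The class and method names below (HdfsBatchCommitter, commitBatch) and the paths
are just illustrative:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsBatchCommitter {

    // Writes one block of messages to a temp file, then renames it into place.
    // The rename is the single commit point: a crash before it leaves only a
    // temp file to discard, and re-processing the same offset range is skipped
    // because the final file name encodes that range.
    public static void commitBatch(FileSystem fs, Path dir, long firstOffset,
                                   long lastOffset, List<String> messages) throws IOException {
        Path tmp = new Path(dir, String.format(".tmp-%d-%d", firstOffset, lastOffset));
        Path done = new Path(dir, String.format("batch-%d-%d", firstOffset, lastOffset));

        if (fs.exists(done)) {
            return; // this batch was already committed on an earlier attempt
        }
        try (FSDataOutputStream out = fs.create(tmp, true)) {
            for (String m : messages) {
                out.write((m + "\n").getBytes(StandardCharsets.UTF_8));
            }
        }
        // A rename within the same directory is atomic in HDFS: readers see either
        // no file or the complete batch, never a partial one.
        if (!fs.rename(tmp, done)) {
            throw new IOException("commit failed for " + done);
        }
    }

    public static void main(String[] args) throws IOException {
        FileSystem fs = FileSystem.get(new Configuration());
        commitBatch(fs, new Path("/tmp/kafka-batches"), 5, 10,
                Arrays.asList("m5", "m6", "m7", "m8", "m9", "m10"));
    }
}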
testSendToPartition is supposed to fail right now, since it detects some bugs in
the new producer code. We are working on it.
For the other test failures, I just reran the unit tests from trunk but did not
see the failures. My test process:
1. Check out a new repository of trunk.
2. Create ~/.gradle/gradle.
Hello Jerry,
Does this answer your question?
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whypartitionleadersmigratethemselvessometimes
Guozhang
On Mon, Feb 17, 2014 at 5:46 PM, 陈小军 wrote:
Also, I found some log entries in server.log:
[2014-02-18 00:27:54,460] INFO re-registering broker info in ZK for broker 2
(kafka.server.KafkaHealthcheck)
[2014-02-18 00:27:54,477] INFO Registered broker 2 at path /brokers/ids/2 with
address xseed133.kdev.nhnsystem.com:9093. (kafka.utils.ZkUtils$)
[201
Hi,
I am running the Kafka 0.8.1 branch code. In my test environment I have 3 servers
(brokers 1, 2, 3), and I created three topics with 3 partitions and 2 replicas for
each partition. At the beginning, each partition's leader is allocated to a
different server.
topicA : partition 0 --> leader : 1; partition 1 -
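In case it helps, the topic layout looks like the sketch below. This is not how I
actually created the topics; it just shows the same settings through the Java
AdminClient, and the broker addresses and the topicB/topicC names are placeholders:

import java.util.Arrays;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTestTopics {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        Properties props = new Properties();
        // placeholder bootstrap list for the three brokers
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
                "broker1:9092,broker2:9092,broker3:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // three topics, each with 3 partitions and replication factor 2
            admin.createTopics(Arrays.asList(
                    new NewTopic("topicA", 3, (short) 2),
                    new NewTopic("topicB", 3, (short) 2),
                    new NewTopic("topicC", 3, (short) 2)))
                 .all().get();
        }
    }
}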