Hi Gaurav (and others in this thread),
My apologies for the late reply; this e-mail appears to have missed my
inbox somehow.
From your logs, it appears that there was a leadership change happening on
the Kafka side for this topic partition? If so, I would actually expect the
follower's offset to be reset to the new leader's latest offset, which is
exactly what the log below shows.
Some more logs from Kafka:
WARN [2017-05-01 15:21:19,132] kafka.server.ReplicaFetcherThread:[Logging$class:warn:83] - [ReplicaFetcherThread-0-3] - [ReplicaFetcherThread-0-3], Replica 0 for partition [Topic3,17] reset its fetch offset from 45039137 to current leader 3's latest offset 45039132
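As an aside, here is what that looks like from the consumer side, as a
minimal sketch against the old Scala SimpleConsumer API. The broker host,
client id, timeouts and buffer sizes below are placeholders of mine; the
offset is the pre-reset one from the log above. Once the new leader's log
ends at 45039132, a fetch at 45039137 should come back with an
OffsetOutOfRange error:

import kafka.api.FetchRequestBuilder
import kafka.consumer.SimpleConsumer
import kafka.common.ErrorMapping

val consumer = new SimpleConsumer("broker-host", 9092, 30000, 64 * 1024, "offset-probe")
try {
  val request = new FetchRequestBuilder()
    .clientId("offset-probe")
    .addFetch("Topic3", 17, 45039137L, 1024 * 1024) // the pre-failover offset
    .build()
  val response = consumer.fetch(request)
  if (response.hasError) {
    response.errorCode("Topic3", 17) match {
      case ErrorMapping.OffsetOutOfRangeCode =>
        // the new leader's log ends below the offset we asked for
        println("offset out of range after the leader change")
      case code =>
        println("fetch failed with error code " + code)
    }
  }
} finally {
  consumer.close()
}

What the consumer then does with that error is exactly where
auto.offset.reset comes in (more on that below).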
Looking further, the reason for this "jump back" seems not so
straightforward:
In Kafka's SimpleConsumer code:
private def sendRequest(request: RequestOrResponse): NetworkReceive = {
  lock synchronized {
    var response: NetworkReceive = null
    try {
      getOrMakeConnection()
      blockingChannel.send(request)
      response = blockingChannel.receive()
    } catch {
      case e: Throwable =>
        info("Reconnect due to error:", e)
        // reconnect and retry the request once
        disconnect(); connect()
        blockingChannel.send(request); response = blockingChannel.receive()
    }
    response
  }
}
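If I'm reading that right, the catch-all branch reconnects and resends the
request once, so a fetch that races with a broker failover never surfaces
an error to the caller; the caller only sees whatever the second attempt
returns. My takeaway (a sketch of mine continuing the fetch example above,
not anything Kafka itself does) is that a caller should sanity-check the
offsets in the response rather than assume it starts exactly where the
request asked:

// skip anything below the offset we asked for: lower offsets can
// legitimately show up with compressed message sets, and should not be
// reprocessed after a retried fetch; `process` and `requestedOffset`
// are placeholders
for (m <- response.messageSet("Topic3", 17) if m.offset >= requestedOffset)
  process(m.message)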
This also seems somewhat related to the mail on this group a few days back
with the subject 'Messages lost after broker failure'.
If someone had set auto.offset.reset to largest, then the reverse would
happen, i.e. Samza skipping over part of the Kafka partition's queue in the
face of such failures.
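To make that concrete, this is roughly what a reset to largest amounts to
in the same old Scala API (`consumer`, the topic and the partition are the
placeholders from my earlier sketch; in a Samza job this behaviour would
come from the systems.<system>.consumer.auto.offset.reset setting, if I
remember the config key right):

import kafka.api.{OffsetRequest, PartitionOffsetRequestInfo}
import kafka.common.TopicAndPartition

// ask the new leader for its latest offset and resume from there;
// everything between the consumer's old position and `latest` is
// silently skipped
val tap = TopicAndPartition("Topic3", 17)
val latest = consumer
  .getOffsetsBefore(OffsetRequest(Map(tap -> PartitionOffsetRequestInfo(OffsetRequest.LatestTime, 1))))
  .partitionErrorAndOffsets(tap).offsets.head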