We are using the console producer, directly on the machines where we are
experiencing the problem. I just inserted 150 messages into a topic and
picked the partition with the most messages for this analysis: in this
case, partition 15 on broker 1.
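
For context, this is just the stock console producer, along these lines
(the broker address here is a placeholder, not our real one):

> kafka-console-producer.sh --broker-list broker1:9092 --topic topic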

The log file:

> kafka-run-class.sh kafka.tools.DumpLogSegments --deep-iteration
> --print-data-log --files
> /var/kafkadata/data01/data/topic-15/00000000000000000059.log
> Dumping /var/kafkadata/data01/data/topic-15/00000000000000000059.log
> Starting offset: 59
> offset: 59 position: 0 CreateTime: 1588151779702 size: 36 magic: 1
> compresscodec: NONE crc: 565272749 isvalid: true
> | offset: 59 CreateTime: 1588151779702 keysize: -1 valuesize: 2 crc:
> 565272749 isvalid: true payload: 12
> offset: 60 position: 36 CreateTime: 1588151799916 size: 36 magic: 1
> compresscodec: NONE crc: 370075951 isvalid: true
> | offset: 60 CreateTime: 1588151799916 keysize: -1 valuesize: 2 crc:
> 370075951 isvalid: true payload: 30
> offset: 61 position: 72 CreateTime: 1588152179129 size: 36 magic: 1
> compresscodec: NONE crc: 2353683039 isvalid: true
> | offset: 61 CreateTime: 1588152179129 keysize: -1 valuesize: 2 crc:
> 2353683039 isvalid: true payload: 36
> offset: 62 position: 108 CreateTime: 1588152202048 size: 36 magic: 1
> compresscodec: NONE crc: 83181941 isvalid: true
> | offset: 62 CreateTime: 1588152202048 keysize: -1 valuesize: 2 crc:
> 83181941 isvalid: true payload: 54
> offset: 63 position: 144 CreateTime: 1588152232426 size: 36 magic: 1
> compresscodec: NONE crc: 1251610227 isvalid: true
> | offset: 63 CreateTime: 1588152232426 keysize: -1 valuesize: 2 crc:
> 1251610227 isvalid: true payload: 72
> offset: 64 position: 180 CreateTime: 1588152250662 size: 36 magic: 1
> compresscodec: NONE crc: 1452283589 isvalid: true
> | offset: 64 CreateTime: 1588152250662 keysize: -1 valuesize: 2 crc:
> 1452283589 isvalid: true payload: 90
> offset: 65 position: 216 CreateTime: 1588152271999 size: 37 magic: 1
> compresscodec: NONE crc: 3155811409 isvalid: true
> | offset: 65 CreateTime: 1588152271999 keysize: -1 valuesize: 3 crc:
> 3155811409 isvalid: true payload: 108
> offset: 66 position: 253 CreateTime: 1588152304661 size: 37 magic: 1
> compresscodec: NONE crc: 2526532572 isvalid: true
> | offset: 66 CreateTime: 1588152304661 keysize: -1 valuesize: 3 crc:
> 2526532572 isvalid: true payload: 126
> offset: 67 position: 290 CreateTime: 1588152330022 size: 37 magic: 1
> compresscodec: NONE crc: 4266477330 isvalid: true
> | offset: 67 CreateTime: 1588152330022 keysize: -1 valuesize: 3 crc:
> 4266477330 isvalid: true payload: 144


The .index file appears to be empty; the dump shows only the base entry:

> kafka-run-class.sh kafka.tools.DumpLogSegments --deep-iteration
> --print-data-log --files
> /var/kafkadata/data01/data/topic-15/00000000000000000059.index
> Dumping /var/kafkadata/data01/data/topic-15/00000000000000000059.index
> offset: 59 position: 0
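
If I understand correctly, the offset index is sparse by design: the broker
only appends an entry roughly every log.index.interval.bytes (4096 by
default), and this segment only holds ~9 messages of ~36 bytes each, so an
empty index may actually be expected here. The setting can be checked in
the broker config (the path is an assumption; adjust for your installation):

> grep index.interval.bytes /etc/kafka/server.properties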


The .timeindex file returns this, but from what I found on the internet,
this error can occur when running DumpLogSegments against the active
segment:

> kafka-run-class.sh kafka.tools.DumpLogSegments --deep-iteration
> --print-data-log --files
> /var/kafkadata/data01/data/topic-15/00000000000000000059.timeindex
> Dumping /var/kafkadata/data01/data/topic-15/00000000000000000059.timeindex
> timestamp: 0 offset: 59
> Found timestamp mismatch in :/var/kafkadata/data01/data/topic-15/00000000000000000059.timeindex
> Index timestamp: 0, log timestamp: 1588151779702
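
As far as I can tell, each .timeindex entry is 12 bytes (an 8-byte
timestamp followed by a 4-byte offset relative to the base offset 59), and
the file for the active segment is preallocated, which would explain the
timestamp: 0 placeholder. The raw bytes can be inspected with plain od, no
Kafka tooling needed:

> od -A d -t x1 -v /var/kafkadata/data01/data/topic-15/00000000000000000059.timeindex | head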


*The consumer gets messages from 53 to 67, which is strange because on this
broker the log starts at 59, and all the brokers should have the same
replicated data.*
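
To double-check the partition's earliest offset as the leader reports it, I
can use GetOffsetShell (broker address is a placeholder; --time -2 requests
the earliest offset, -1 the latest):

> kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list broker1:9092 --topic topic --partitions 15 --time -2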

So I injected around 1000 more messages, and the .timeindex file now has
entries, but only starting at offset 144:

> kafka-run-class.sh kafka.tools.DumpLogSegments --deep-iteration
> --print-data-log --files
> /var/kafkadata/data01/data/topic-15/00000000000000000059.timeindex
> Dumping /var/kafkadata/data01/data/topic-15/00000000000000000059.timeindex
> timestamp: 1588157171331 offset: 144
> timestamp: 1588157306199 offset: 147
> timestamp: 1588157358211 offset: 150
> timestamp: 1588157465320 offset: 155
> timestamp: 1588157467376 offset: 157
> timestamp: 1588157469434 offset: 160
> timestamp: 1588157471324 offset: 163
> timestamp: 1588157474553 offset: 168
> timestamp: 1588157476271 offset: 171
> timestamp: 1588157478642 offset: 174
> timestamp: 1588157481068 offset: 178
> timestamp: 1588157484115 offset: 181
> timestamp: 1588157486643 offset: 184
> timestamp: 1588157489433 offset: 188



So it looks like we don't have any time index entries from offset 59 to
144... If I had done a rolling deploy before offset 144, those offsets
would have had no timestamp entry at all; would I then lose all those
messages to time-based cleanup? Any thoughts on this?
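
For what it's worth, my understanding is that time-based retention uses the
segment's largest message timestamp and falls back to the file's
last-modified time when no timestamp is available, so a sparse time index
alone shouldn't trigger early deletion, but I would like to confirm that.
The topic's retention overrides can be checked like this (ZooKeeper address
is a placeholder; newer releases use --bootstrap-server instead):

> kafka-configs.sh --zookeeper zk1:2181 --entity-type topics --entity-name topic --describe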

Thanks in advance.
