Hi all,

I am confused about the log compaction logic, which uses an OffsetMap to deduplicate the log. In my opinion, when there is a hash conflict, data may be lost.

E.g.: Record1(key1, offset1), Record2(key2, offset2)
Condition: hash(key1) == hash(key2) && (offset1 < offset2)

Under that condition the map would treat key1 and key2 as the same key, so Record1 could be removed during cleaning even though key1 was never overwritten.
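To make the scenario concrete, here is a minimal sketch of what I mean, assuming a toy offset map keyed only by the hash of the key (ToyOffsetMap, badHash, and CollisionDemo are made-up names for illustration, not Kafka's actual SkimpyOffsetMap):

import scala.collection.mutable

// Hypothetical, simplified offset map keyed by the hash of the key only.
class ToyOffsetMap(hash: Array[Byte] => Int) {
  private val latestOffsetByHash = mutable.Map.empty[Int, Long]

  // Remember the highest offset seen for this key's hash.
  def put(key: Array[Byte], offset: Long): Unit = {
    val h = hash(key)
    if (offset > latestOffsetByHash.getOrElse(h, -1L)) latestOffsetByHash(h) = offset
  }

  // A cleaner using this map would retain a record only if its offset is
  // at least the latest offset recorded for its key.
  def shouldRetain(key: Array[Byte], offset: Long): Boolean =
    offset >= latestOffsetByHash.getOrElse(hash(key), -1L)
}

object CollisionDemo extends App {
  // A deliberately bad hash so that key1 and key2 collide.
  val badHash: Array[Byte] => Int = _ => 42

  val map = new ToyOffsetMap(badHash)
  val (key1, key2) = ("key1".getBytes, "key2".getBytes)

  map.put(key1, offset = 1L) // Record1(key1, offset1)
  map.put(key2, offset = 2L) // Record2(key2, offset2), offset1 < offset2

  // The map cannot tell key1 and key2 apart, so Record1 looks stale.
  println(map.shouldRetain(key1, 1L)) // false -> Record1 would be lost
  println(map.shouldRetain(key2, 2L)) // true
}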
Hi Folks,

I am confused about the code below (from KafkaRequestHandlerPool.createHandler): why is the request handler (I/O) thread started as a daemon thread? In my understanding, a daemon thread is not suitable for such important work.
def createHandler(id: Int): Unit = synchronized {
  runnables += new KafkaRequestHandler(id, brokerId, aggregateIdleMeter, threadPoolSize,
    requestChannel, apis, time)
  // the handler thread is started as a daemon thread here
  KafkaThread.daemon("kafka-request-handler-" + id, runnables(id)).start()
}
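To make my concern concrete, here is a minimal, self-contained sketch (the names DaemonDemo and worker are made up, not Kafka code) showing the property daemon threads have: the JVM exits as soon as only daemon threads remain, whereas it would wait for a non-daemon thread to finish.

object DaemonDemo extends App {
  // A worker that would run forever if nothing stopped it.
  def worker(name: String): Runnable = () => {
    try {
      while (true) {
        println(s"$name is working")
        Thread.sleep(500)
      }
    } catch {
      case _: InterruptedException => println(s"$name interrupted")
    }
  }

  val daemon = new Thread(worker("daemon-worker"))
  daemon.setDaemon(true) // the JVM will NOT wait for this thread on exit
  daemon.start()

  // If this were a regular (non-daemon) thread instead, the JVM would keep
  // running until the thread finished or was interrupted:
  // val nonDaemon = new Thread(worker("non-daemon-worker"))

  Thread.sleep(1200)
  println("main is done; the JVM exits even though the daemon thread is still looping")
}

A common reason for the daemon pattern in general is that a stuck worker thread cannot then prevent the JVM from exiting, with orderly shutdown handled explicitly elsewhere; whether that is the reasoning here is exactly what I am asking about.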
Please grant Create KIP permission to wiki ID: ruanliang_hualun