[ https://issues.apache.org/jira/browse/KAFKA-6854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Gustafson resolved KAFKA-6854.
------------------------------------
    Resolution: Fixed

> Log cleaner fails with transaction markers that are deleted during clean
> ------------------------------------------------------------------------
>
>                 Key: KAFKA-6854
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6854
>             Project: Kafka
>          Issue Type: Task
>          Components: core
>    Affects Versions: 1.1.0
>            Reporter: Rajini Sivaram
>            Assignee: Rajini Sivaram
>            Priority: Blocker
>             Fix For: 2.0.0, 1.0.2, 1.1.1
>
>
> The log cleaner grows its buffers when `result.messagesRead` is zero. In the 
> typical case this means the source buffer was too small to read the first 
> batch, so the buffer is doubled in size until one batch can be read, up to a 
> maximum of `max.message.bytes`. The maximum message size used in these 
> calculations has its own issues, reported in KAFKA-6834, but there is a 
> separate problem with the use of `result.messagesRead` when transactions are 
> used: it counts the filtered messages read from the source, which can be 
> zero when a transaction control marker is discarded. The log cleaner then 
> incorrectly assumes that no messages were read because the buffer was too 
> small and doubles the buffer size unnecessarily, failing with an exception 
> if the buffer is already at `max.message.bytes`. This kills the log cleaner.
>  
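The failure mode described above can be sketched as follows. This is not Kafka's actual cleaner code; the class, method, and limit below are hypothetical, and only illustrate how treating "zero messages retained" as "batch did not fit" misfires when a discarded transaction control marker is the sole content of a pass:

```java
// Hedged sketch (assumed names, not Kafka source): a cleaner that doubles
// its read buffer whenever zero messages survive a pass will grow the
// buffer needlessly, and fail outright once it is already at the limit.
public class BufferGrowthSketch {
    // Assumed stand-in for the topic's max.message.bytes setting.
    static final int MAX_MESSAGE_BYTES = 1 << 20; // 1 MiB

    // messagesRead counts messages *retained after filtering*; a pass that
    // reads only a discarded transaction control marker reports 0 even
    // though a whole batch was successfully read from the source buffer.
    static int growIfNoProgress(int bufferSize, int messagesRead) {
        if (messagesRead > 0) {
            return bufferSize; // progress was made; no growth needed
        }
        if (bufferSize >= MAX_MESSAGE_BYTES) {
            // The reported crash: the cleaner concludes the batch cannot
            // fit, but the buffer is not allowed to grow any further.
            throw new IllegalStateException(
                "batch appears larger than max.message.bytes: " + bufferSize);
        }
        return Math.min(bufferSize * 2, MAX_MESSAGE_BYTES);
    }

    public static void main(String[] args) {
        int size = 128 * 1024;
        // A pass that discards an aborted transaction's control marker
        // yields messagesRead == 0, so the buffer doubles unnecessarily.
        size = growIfNoProgress(size, 0);
        System.out.println(size); // 262144 (256 KiB)
    }
}
```

The fix direction implied by the report is to distinguish "no bytes could be read" (grow the buffer) from "bytes were read but all messages were filtered out" (advance without growing).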



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
