[ https://issues.apache.org/jira/browse/KAFKA-18216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17941716#comment-17941716 ]

Lianet Magrans commented on KAFKA-18216:
----------------------------------------

Also, from a quick look this seems related to (and maybe a duplicate of) 
KAFKA-18217? Just a heads-up in case the info there helps too; we can 
probably link the two once we understand the issue better. 

> High water mark or last stable offset aren't always updated after a fetch 
> request is completed
> ----------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-18216
>                 URL: https://issues.apache.org/jira/browse/KAFKA-18216
>             Project: Kafka
>          Issue Type: Improvement
>          Components: clients, consumer
>            Reporter: Philip Nee
>            Assignee: TengYao Chi
>            Priority: Minor
>              Labels: consumer-threading-refactor
>             Fix For: 4.1.0
>
>
> We've noticed that AsyncKafkaConsumer doesn't always update the high water 
> mark (HWM) / last stable offset (LSO) after handling a successful fetch 
> response. Consumer lag is calculated as HWM/LSO minus the current fetched 
> position, so we suspect this could have a subtle effect on how consumer lag 
> is recorded, which might slightly impact the accuracy of client metrics 
> reporting.
> The consumer records consumer lag when reading the fetched records.
> The consumer updates the HWM/LSO when the background thread completes the 
> fetch request.
> In the original implementation, the fetcher consistently updates the HWM/LSO 
> after handling the completed fetch request.
> In the new implementation, due to the async threading model, we can't 
> guarantee the ordering of these events.
> This defect affects neither performance nor correctness and is therefore 
> marked as "Minor".
>  
> This can be easily reproduced using the java-produce-consumer-demo.sh 
> example.  Be sure to produce enough records (I used 200000000 records; fewer 
> is fine as well).  Custom logging is required.



