[ 
https://issues.apache.org/jira/browse/KAFKA-3932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655279#comment-16655279
 ] 

CHIENHSING WU commented on KAFKA-3932:
--------------------------------------

I encountered the same issue as well. Upon studying the source code and the 
[KIP-41|https://cwiki.apache.org/confluence/display/KAFKA/KIP-41%3A+KafkaConsumer+Max+Records], 
I think the issue is this statement in the KIP: "As before, we'd keep track of 
*which partition we left off* at so that the next iteration would *begin 
there*." I think the next iteration should *NOT* start from the last partition; 
*it should pick the next one instead.* 

The simplest way to impose an order for picking the next partition is to use 
the order in which consumer.internals.Fetcher receives the partition data, as 
determined by the *completedFetches* queue in that class. To avoid parsing the 
partition data repeatedly, we can *save the parsed fetches to a list and keep 
track of which partition to take messages from next* (a rough sketch follows). 
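
To make that concrete, here is a minimal, purely illustrative sketch of the 
selection logic. The class and method names are mine and are not the actual 
Fetcher internals; it only shows the rotate-to-the-back behavior I mean:

{code:java}
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

import org.apache.kafka.clients.consumer.ConsumerRecord;

/**
 * Illustrative sketch only, not the real Fetcher code. Parsed fetches are
 * kept in arrival order and the partition we just drew from is rotated to
 * the back, so the next drain starts from the next partition rather than
 * the same one.
 */
class RoundRobinFetchBuffer<K, V> {

    // One entry per partition, in the order the fetches were received
    // (analogous to the order of the completedFetches queue).
    private final Deque<List<ConsumerRecord<K, V>>> parsedFetches = new ArrayDeque<>();

    void add(List<ConsumerRecord<K, V>> partitionRecords) {
        if (!partitionRecords.isEmpty())
            parsedFetches.addLast(partitionRecords);
    }

    List<ConsumerRecord<K, V>> drain(int maxRecords) {
        List<ConsumerRecord<K, V>> out = new ArrayList<>();
        while (out.size() < maxRecords && !parsedFetches.isEmpty()) {
            List<ConsumerRecord<K, V>> head = parsedFetches.pollFirst();
            int take = Math.min(maxRecords - out.size(), head.size());
            out.addAll(head.subList(0, take));
            // Any leftover records go to the back of the queue, so the next
            // drain() begins with a different partition.
            if (take < head.size())
                parsedFetches.addLast(new ArrayList<>(head.subList(take, head.size())));
        }
        return out;
    }
}
{code}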

Does this sound like a good approach? If this is not the right place to 
discuss the design, please let me know where to engage. If this is agreeable, 
I can contribute the implementation.

> Consumer fails to consume in a round robin fashion
> --------------------------------------------------
>
>                 Key: KAFKA-3932
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3932
>             Project: Kafka
>          Issue Type: Bug
>          Components: consumer
>    Affects Versions: 0.10.0.0
>            Reporter: Elias Levy
>            Priority: Major
>
> The Java consumer fails to consume messages in a round-robin fashion.  This 
> can lead to unbalanced consumption.
> In our use case we have a set of consumers that can take a significant amount 
> of time consuming messages off a topic.  For this reason, we are using the 
> pause/poll/resume pattern to ensure the consumer session does not time out 
> (see the sketch after this description).  The topic being consumed has been 
> preloaded with messages, which means there is a significant message lag when 
> the consumer is first started.  To limit how many messages are consumed at a 
> time, the consumer has been configured with max.poll.records=1.
> The first observation is that the client receives a large batch of messages 
> from the first partition it decides to consume from and consumes all of those 
> messages before moving on, rather than returning a message from a different 
> partition on each call to poll.
> We solved this issue by configuring max.partition.fetch.bytes to be small 
> enough that only a single message will be returned by the broker on each 
> fetch, although this would not be feasible if message size were highly 
> variable.
> After this change, the consumer largely consumes from a small number of 
> partitions, usually just two, alternating between them until it exhausts 
> them before moving on to another partition.  This behavior is problematic if 
> the messages have rough time semantics and need to be processed in roughly 
> time order across all partitions.
> It would be useful if the consumer had a pluggable API that allowed custom 
> logic to select which partition to consume from next, thus enabling the 
> creation of a round-robin partition consumer.
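
For reference, here is a minimal sketch of the pause/poll/resume pattern 
described in the report, assuming a recent Java client. The broker address, 
group id, and topic name are placeholders, and the commented-out 
max.partition.fetch.bytes line corresponds to the workaround mentioned above:

{code:java}
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PausePollResumeExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "slow-processors");          // placeholder group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("max.poll.records", "1");                 // one record per poll, as in the report
        // props.put("max.partition.fetch.bytes", "1024");  // workaround mentioned in the report

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("preloaded-topic")); // placeholder topic

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                if (records.isEmpty())
                    continue;

                // Pause all assigned partitions so that further poll() calls
                // only keep the consumer in the group and do not fetch more
                // records while the slow processing is in progress.
                consumer.pause(consumer.assignment());
                for (ConsumerRecord<String, String> record : records) {
                    process(record);
                    consumer.poll(Duration.ZERO); // returns nothing while paused; keeps the consumer alive
                }
                consumer.resume(consumer.assignment());
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        // Long-running per-record work goes here.
    }
}
{code}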



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
