adixitconfluent commented on code in PR #17870:
URL: https://github.com/apache/kafka/pull/17870#discussion_r1912168874


##########
core/src/test/java/kafka/test/api/ShareConsumerTest.java:
##########
@@ -902,7 +902,7 @@ public void 
testFetchRecordLargerThanMaxPartitionFetchBytes(String persister) th
             shareConsumer.subscribe(Collections.singleton(tp.topic()));
 
             ConsumerRecords<byte[], byte[]> records = 
shareConsumer.poll(Duration.ofMillis(5000));
-            assertEquals(1, records.count());
+            assertEquals(2, records.count());

Review Comment:
   So, earlier this test produced 2 records:
   1. A small record whose size was less than partitionMaxBytes (1 MB)
   2. A big record whose size was equal to partitionMaxBytes (1 MB)
   
   Due to the strict restriction in the trunk code, we could fetch only 1 record in the first poll, since fetching the second record as well would have violated the partitionMaxBytes limit. With my changes, partitionMaxBytes is no longer a criterion during a fetch; the fetched data just has to be within the requestMaxBytes limit. Fetching both the first and the second record does not violate requestMaxBytes in a single fetch, so 2 records are returned now.
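   To illustrate the relaxed policy described above, here is a minimal sketch (hypothetical names, not the actual ShareFetch code): records accumulate as long as the running total stays within requestMaxBytes, and partitionMaxBytes is never consulted.
   
   ```java
   import java.util.List;
   
   // Hypothetical sketch of the relaxed fetch-size check: only the
   // request-level byte limit bounds how many records one fetch returns.
   public class FetchLimitSketch {
   
       // Returns how many of the given record sizes fit in one fetch
       // without the running total exceeding requestMaxBytes.
       static int recordsWithinLimit(List<Integer> recordSizes, int requestMaxBytes) {
           int total = 0;
           int count = 0;
           for (int size : recordSizes) {
               if (total + size > requestMaxBytes) {
                   break; // only the request-level limit applies now
               }
               total += size;
               count++;
           }
           return count;
       }
   
       public static void main(String[] args) {
           int oneMb = 1024 * 1024;
           // A small record plus a 1 MB record: both fit under a much
           // larger requestMaxBytes, so 2 records come back in one poll,
           // matching the updated assertion in the test.
           List<Integer> sizes = List.of(10, oneMb);
           System.out.println(recordsWithinLimit(sizes, 50 * oneMb));
       }
   }
   ```
   
   Under the old trunk behaviour, the per-partition cap would have stopped accumulation before the 1 MB record, returning only 1.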



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
