apoorvmittal10 commented on code in PR #20253:
URL: https://github.com/apache/kafka/pull/20253#discussion_r2239169994


##########
core/src/test/java/kafka/server/share/SharePartitionTest.java:
##########
@@ -1510,6 +1510,150 @@ public void testAcquireBatchSkipWithBatchSizeAndEndOffsetLargerThanFirstBatch()
         assertTrue(sharePartition.cachedState().containsKey(12L));
     }
 
+    @Test
+    public void testAcquireWithMaxInFlightMessagesAndTryAcquireNewBatch() {
+        SharePartition sharePartition = SharePartitionBuilder.builder()
+            .withState(SharePartitionState.ACTIVE)
+            .withSharePartitionMetrics(sharePartitionMetrics)
+            .withMaxInflightMessages(20)
+            .build();
+
+        // Acquire records, all 10 records should be acquired as within maxInflightMessages limit.
+        List<AcquiredRecords> acquiredRecordsList = fetchAcquiredRecords(sharePartition.acquire(
+                MEMBER_ID,
+                BATCH_SIZE,
+                500 /* Max fetch records */,
+                DEFAULT_FETCH_OFFSET,
+                fetchPartitionData(memoryRecords(10, 0), 0),
+                FETCH_ISOLATION_HWM),
+            10);
+        // Validate all 10 records will be acquired as the maxInFlightMessages is 20.
+        assertArrayEquals(expectedAcquiredRecord(0, 9, 1).toArray(), acquiredRecordsList.toArray());
+        assertEquals(10, sharePartition.nextFetchOffset());
+
+        // Create 4 batches of records.
+        ByteBuffer buffer = ByteBuffer.allocate(4096);
+        memoryRecordsBuilder(buffer, 5, 10).close();
+        memoryRecordsBuilder(buffer, 10, 15).close();
+        memoryRecordsBuilder(buffer, 5, 25).close();
+        memoryRecordsBuilder(buffer, 2, 30).close();
+
+        buffer.flip();
+
+        MemoryRecords records = MemoryRecords.readableRecords(buffer);
+
+        // Acquire records, should be acquired till maxInFlightMessages i.e. 20 records. As second batch
+        // is ending at 24 offset, hence additional 15 records will be acquired.
+        acquiredRecordsList = fetchAcquiredRecords(sharePartition.acquire(
+                MEMBER_ID,
+                BATCH_SIZE,
+                500 /* Max fetch records */,
+                DEFAULT_FETCH_OFFSET,
+                fetchPartitionData(records, 0),

Review Comment:
   That's the log start offset. I kept it at 0 because the previous acquire is from offset 0. However, for the share partition the log start offset doesn't matter as of now: we update `logStartOffset` only when an `OFFSET_OUT_OF_RANGE` exception is encountered, and then we fetch the offset for the earliest timestamp.
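
   For illustration, the reset flow described above can be sketched roughly as follows. This is a minimal stand-alone sketch, not the actual SharePartition code: `onFetchError` and `fetchOffsetForEarliestTimestamp` are hypothetical stand-ins (the latter plays the role of a ListOffsets lookup with the earliest timestamp).

   ```java
   // Illustrative sketch only -- not the real SharePartition implementation.
   // logStartOffset is left untouched during normal fetches and is reset
   // only when an OFFSET_OUT_OF_RANGE error is seen, at which point the
   // earliest available offset is fetched and used as the new value.
   public class LogStartOffsetResetSketch {

       private long logStartOffset = 0L; // stays at 0 until a reset is needed

       // Hypothetical stand-in for a ListOffsets request at the earliest timestamp.
       private long fetchOffsetForEarliestTimestamp() {
           return 15L; // pretend the log has been truncated up to offset 15
       }

       // Hypothetical error handler: only OFFSET_OUT_OF_RANGE triggers a reset.
       public void onFetchError(String errorCode) {
           if ("OFFSET_OUT_OF_RANGE".equals(errorCode)) {
               logStartOffset = fetchOffsetForEarliestTimestamp();
           }
       }

       public long logStartOffset() {
           return logStartOffset;
       }

       public static void main(String[] args) {
           LogStartOffsetResetSketch sp = new LogStartOffsetResetSketch();
           System.out.println(sp.logStartOffset());   // 0 before any error
           sp.onFetchError("NOT_LEADER_OR_FOLLOWER"); // unrelated error: no reset
           System.out.println(sp.logStartOffset());   // still 0
           sp.onFetchError("OFFSET_OUT_OF_RANGE");    // triggers the reset
           System.out.println(sp.logStartOffset());   // 15
       }
   }
   ```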



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
