hlteoh37 commented on code in PR #190:
URL: https://github.com/apache/flink-connector-aws/pull/190#discussion_r1979590842


##########
flink-connector-aws/flink-connector-dynamodb/src/test/java/org/apache/flink/connector/dynamodb/source/reader/PollingDynamoDbStreamsShardSplitReaderTest.java:
##########
@@ -320,10 +336,103 @@ record ->
         for (int i = 0; i < 10; i++) {
             RecordsWithSplitIds<Record> records = splitReader.fetch();
             fetchedRecords.addAll(readAllRecords(records));
+            Thread.sleep(NON_EMPTY_POLLING_DELAY_MILLIS.toMillis());
         }
         
         assertThat(fetchedRecords).containsExactly(recordsFromSplit3.toArray(new Record[0]));
     }
 
+    @Test
+    void testPollingDelayForEmptyRecords() throws Exception {
+        // Given assigned split with no records
+        testStreamProxy.addShards(TEST_SHARD_ID);
+        splitReader.handleSplitsChanges(
+                new SplitsAddition<>(Collections.singletonList(getTestSplit(TEST_SHARD_ID))));
+
+        // First poll - should return empty records
+        RecordsWithSplitIds<Record> firstPoll = splitReader.fetch();
+        assertThat(firstPoll.nextRecordFromSplit()).isNull();
+        assertThat(firstPoll.nextSplit()).isNull();
+        assertThat(firstPoll.finishedSplits()).isEmpty();
+
+        // Immediate second poll - should return empty due to polling delay

Review Comment:
   I just meant: let's make sure that this test is not flaky if we run it on a slow host.
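   
   A minimal sketch of one way to keep the timing-sensitive assertion stable on a slow host, reusing the fixtures already in this test class (`testStreamProxy`, `splitReader`, `TEST_SHARD_ID`, `getTestSplit`). The `EMPTY_POLLING_DELAY_MILLIS` constant name is an assumption here; substitute whatever `Duration` the reader actually applies after an empty poll:
   
   ```java
   @Test
   void testPollingDelayForEmptyRecordsIsNotFlakyOnSlowHosts() throws Exception {
       // Given an assigned split with no records
       testStreamProxy.addShards(TEST_SHARD_ID);
       splitReader.handleSplitsChanges(
               new SplitsAddition<>(Collections.singletonList(getTestSplit(TEST_SHARD_ID))));
   
       long start = System.nanoTime();
       splitReader.fetch(); // first poll on the empty shard starts the delay window
   
       // Immediate second poll
       RecordsWithSplitIds<Record> secondPoll = splitReader.fetch();
       long elapsedMillis = Duration.ofNanos(System.nanoTime() - start).toMillis();
   
       // Only assert emptiness if we are provably still inside the delay window;
       // on a host slow enough to blow past the delay, the reader may legitimately
       // poll the shard again, so we skip the assertion rather than fail spuriously.
       if (elapsedMillis < EMPTY_POLLING_DELAY_MILLIS.toMillis()) {
           assertThat(secondPoll.nextRecordFromSplit()).isNull();
           assertThat(secondPoll.nextSplit()).isNull();
           assertThat(secondPoll.finishedSplits()).isEmpty();
       }
   }
   ```
   
   Another option would be to configure the empty-poll delay used in the test to be large enough (e.g. a few seconds) that even a slow CI host cannot exceed it between two consecutive `fetch()` calls.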


