chengcongchina commented on code in PR #4334:
URL: https://github.com/apache/flink-cdc/pull/4334#discussion_r2985558116
##########
flink-cdc-connect/flink-cdc-source-connectors/flink-connector-mysql-cdc/src/test/java/org/apache/flink/cdc/connectors/mysql/debezium/reader/SnapshotSplitReaderTest.java:
##########
@@ -586,6 +586,21 @@ void testMultipleSplitsWithBackfill() throws Exception {
"UPDATE " + tableId + " SET address =
'Beijing' WHERE id = 103");
mySqlConnection.commit();
} else if (split.splitId().equals(tableId + ":1")) {
+ // To verify that FLINK-39315 is fixed, generate sufficient binlog events,
+ // so that the MySqlBinlogSplitReadTask runs long enough to exercise the
+ // context-running checks in binlog reading backfill phase.
+ for (int i = 0; i < 1000; i++) {
+ mySqlConnection.execute(
+ "UPDATE "
+ + tableId
+ + " SET address = 'Beijing' WHERE
id = 106");
+ mySqlConnection.commit();
+ mySqlConnection.execute(
+ "UPDATE "
+ + tableId
+ + " SET address = 'Shanghai' WHERE
id = 106");
+ mySqlConnection.commit();
+ }
Review Comment:
I reduced the loop to 100 updates, which is still sufficient to reproduce
the issue when the fix is commented out. This should make the unit test
significantly faster and less flaky on CI.
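The reviewer's reduced workload can be illustrated with a minimal stand-alone sketch. Note this is an assumption-laden illustration, not the test's actual code: `StubConnection` is a hypothetical stand-in for the test's real `mySqlConnection` (a JDBC connection in the Flink CDC test harness), so it merely records the statements that would each produce a row-change binlog event.

```java
import java.util.ArrayList;
import java.util.List;

public class BackfillWorkloadSketch {

    /** Hypothetical stand-in for mySqlConnection: records SQL instead of hitting MySQL. */
    static class StubConnection {
        final List<String> executed = new ArrayList<>();

        void execute(String sql) {
            executed.add(sql);
        }

        void commit() {
            // no-op in the stub; the real test commits each update so it
            // appears as a separate transaction in the binlog
        }
    }

    /**
     * Mirrors the test loop: each iteration issues two alternating UPDATEs,
     * so {@code iterations} iterations yield {@code 2 * iterations} binlog
     * events to keep the backfill read task busy.
     */
    static void generateBinlogEvents(StubConnection conn, String tableId, int iterations) {
        for (int i = 0; i < iterations; i++) {
            conn.execute("UPDATE " + tableId + " SET address = 'Beijing' WHERE id = 106");
            conn.commit();
            conn.execute("UPDATE " + tableId + " SET address = 'Shanghai' WHERE id = 106");
            conn.commit();
        }
    }

    public static void main(String[] args) {
        StubConnection conn = new StubConnection();
        // 100 iterations, per the review comment, instead of the original 1000
        generateBinlogEvents(conn, "customer.customers", 100);
        System.out.println(conn.executed.size()); // 200 statements issued
    }
}
```

With 100 iterations the loop issues 200 updates, which the reviewer found still sufficient to reproduce FLINK-39315 when the fix is reverted, while cutting the test's runtime by roughly an order of magnitude.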
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]