skoppu22 commented on code in PR #109:
URL: https://github.com/apache/cassandra-analytics/pull/109#discussion_r2095858316
##########
cassandra-analytics-integration-tests/src/test/java/org/apache/cassandra/analytics/SharedClusterSparkIntegrationTestBase.java:
##########
@@ -171,28 +173,35 @@ public void checkSmallDataFrameEquality(Dataset<Row> expected, Dataset<Row> actu
         }
     }
 
-    public void validateWritesWithDriverResultSet(List<Row> sourceData, ResultSet queriedData,
-                                                  Function<com.datastax.driver.core.Row, String> rowFormatter)
+    public void validateWritesWithDriverResultSet(List<Row> sparkData, ResultSet driverData,
+                                                  Function<com.datastax.driver.core.Row, String> driverRowFormatter)
     {
-        Set<String> actualEntries = new HashSet<>();
-        queriedData.forEach(row -> actualEntries.add(rowFormatter.apply(row)));
+        Set<String> driverEntries = new HashSet<>();
+        driverData.forEach(row -> driverEntries.add(driverRowFormatter
+                                                    .apply(row)
+                                                    // Driver Codec writes "NULL" for null value. Spark DF writes "null".
+                                                    .replace("NULL", "null")
+                                                    // driver writes lists as [] and sets as {},
+                                                    // whereas spark entries have the same type WrappedArray for both lists and sets
+                                                    .replace('[', '{')
+                                                    .replace(']', '}')));

Review Comment:
   Which function do you mean? We repeat the same logic in every driver-entry formatter for UDTs (and also for tuples, which are coming up today), so it is kept in one place rather than repeated in each formatter.
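For illustration only (not part of the PR), a minimal sketch of what keeping that normalization in one shared helper could look like; the method name normalizeDriverEntry is hypothetical:

    // Hypothetical helper: normalizes a driver-formatted row string so it can be
    // compared against the Spark DataFrame representation of the same row.
    private static String normalizeDriverEntry(String driverFormattedRow)
    {
        return driverFormattedRow
               // Driver codec writes "NULL" for null values; Spark DataFrame rows write "null".
               .replace("NULL", "null")
               // The driver renders lists as [...] and sets as {...}, while Spark uses the same
               // WrappedArray form for both, so brackets are normalized to braces before comparing.
               .replace('[', '{')
               .replace(']', '}');
    }

Each formatter (UDT, tuple, ...) would then only call driverEntries.add(normalizeDriverEntry(driverRowFormatter.apply(row))) instead of repeating the replacements.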