HeartSaVioR commented on code in PR #49662: URL: https://github.com/apache/spark/pull/49662#discussion_r1936652586
########## connector/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaOffsetReaderSuite.scala: ##########

@@ -203,6 +205,41 @@ class KafkaOffsetReaderSuite extends QueryTest with SharedSparkSession with Kafk
       KafkaOffsetRange(tp2, 0, 3, None)).sortBy(_.topicPartition.toString))
   }

+  testWithAllOffsetFetchingSQLConf(
+    "KAFKA_TIMESTAMP_OFFSET_DOES_NOT_MATCH_ASSIGNED error class"
+  ) {
+    val topic = newTopic()
+    testUtils.createTopic(topic, partitions = 3)
+    val reader = createKafkaReader(topic, minPartitions = Some(4))
+
+    // There are three topic partitions, but we only include two in offsets.

Review Comment:
   nit: Shall we generalize the verification code and have two different test sets: 1) specifying fewer partitions than are assigned, and 2) specifying more partitions than are assigned?

########## connector/kafka-0-10-sql/src/main/resources/error/kafka-error-conditions.json: ##########

@@ -30,6 +30,13 @@
       "Specified: <specifiedPartitions> Assigned: <assignedPartitions>"
     ]
   },
+  "KAFKA_TIMESTAMP_OFFSET_DOES_NOT_MATCH_ASSIGNED" : {
+    "message" : [
+      "Partitions specified for Kafka timestamp based <position> offsets don't match what are assigned. Maybe topic partitions are created ",

Review Comment:
   > Maybe topic partitions are created or deleted while the query is running.

   Technically, the starting offset and the ending offset (for batch) are only valid for the first microbatch (or batch query). That said, there is still a chance of a race condition, but it is more likely that users missed specifying some partition(s). The message above rules out the second case, which I think is the more likely one (and what the original message was meant to convey).

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
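The first review comment suggests exercising the mismatch in both directions. A rough, self-contained sketch of that generalization in plain Scala (no Spark dependencies; the `TopicPartition` case class and the `partitionsMismatch`/`verifyMismatch` helpers are illustrative stand-ins, not Spark's or Kafka's actual API):

```scala
// Illustrative stand-in for org.apache.kafka.common.TopicPartition.
case class TopicPartition(topic: String, partition: Int)

// The condition the error class guards: the partitions specified for
// timestamp-based offsets must exactly match the assigned partitions.
def partitionsMismatch(
    specified: Set[TopicPartition],
    assigned: Set[TopicPartition]): Boolean =
  specified != assigned

// Shared verification, parameterized over the specified set, so both
// test sets reuse the same check instead of duplicating it.
def verifyMismatch(
    specified: Set[TopicPartition],
    assigned: Set[TopicPartition]): Unit =
  assert(partitionsMismatch(specified, assigned),
    s"expected a mismatch between $specified and $assigned")

val assigned = (0 until 3).map(p => TopicPartition("topic", p)).toSet

// Test set 1: specify fewer partitions than are assigned.
verifyMismatch(assigned - TopicPartition("topic", 2), assigned)

// Test set 2: specify more partitions than are assigned.
verifyMismatch(assigned + TopicPartition("topic", 3), assigned)
```

In the actual suite, the same parameterization could drive two calls to the reader with different offset maps, asserting each raises the `KAFKA_TIMESTAMP_OFFSET_DOES_NOT_MATCH_ASSIGNED` error condition.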
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org