HeartSaVioR commented on code in PR #52445:
URL: https://github.com/apache/spark/pull/52445#discussion_r2386974052
##########
sql/api/src/main/scala/org/apache/spark/sql/streaming/progress.scala:
##########
@@ -258,13 +258,26 @@ class SourceProgress protected[spark] (
("numInputRows" -> JInt(numInputRows)) ~
("inputRowsPerSecond" -> safeDecimalToJValue(inputRowsPerSecond)) ~
("processedRowsPerSecond" ->
safeDecimalToJValue(processedRowsPerSecond)) ~
- ("metrics" -> safeMapToJValue[String](metrics, s => JString(s)))
+ ("metrics" -> safeMapToJValue[String](
+ metrics,
+ (metricsName, s) =>
+ // SPARK-53690:
Review Comment:
Technically speaking, we shouldn't have Kafka-specific handling here;
instead we should build some sort of API to indicate whether the exponential
format has to be stripped for a specific metric.
But it's a single occurrence and Kafka is a built-in connector, so I guess
it is OK. A safer approach would be to check whether the source is Kafka, but
the metric name is probably uncommon enough not to conflict with others.
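For illustration, here is a minimal sketch of the underlying idea (not the PR's actual helper — `PlainMetricFormat` and `toPlainDecimal` are hypothetical names): `Double.toString` emits an exponent for large values, while `java.math.BigDecimal#toPlainString` never does.

```scala
// Sketch only: large Doubles render with an exponent via toString,
// while java.math.BigDecimal#toPlainString never does.
object PlainMetricFormat {
  // Hypothetical helper, not the PR's code: render a metric value
  // rounded to one decimal place, without scientific notation.
  def toPlainDecimal(value: Double): String =
    BigDecimal(value)
      .setScale(1, BigDecimal.RoundingMode.HALF_UP)
      .bigDecimal
      .toPlainString

  def main(args: Array[String]): Unit = {
    val avgOffsetsBehindLatest = 2.8366294e8
    println(avgOffsetsBehindLatest.toString)        // prints "2.8366294E8"
    println(toPlainDecimal(avgOffsetsBehindLatest)) // prints "283662940.0"
  }
}
```

A metric-level API as suggested above could route only the flagged metric names through such a formatter, leaving all other string metrics untouched.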
##########
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQueryStatusAndProgressSuite.scala:
##########
@@ -436,6 +436,70 @@ class StreamingQueryStatusAndProgressSuite extends StreamTest with Eventually wi
processedRowsPerSecondJSON shouldBe processedRowsPerSecondExpected +- epsilon
}
+  test("SPARK-53690: avgOffsetsBehindLatest should never be in scientific notation") {
+ val progress = testProgress5.jsonValue
+ val progressPretty = testProgress5.prettyJson
+
+ // Actual values
+ val avgOffsetsBehindLatest: Double = 2.8366294E8
+
+ // Get values from progress metrics JSON and cast back to Double
+ // for numeric comparison
+ val metricsJSON = (progress \ "sources")(0) \ "metrics"
+ val avgOffsetsBehindLatestJSON = (metricsJSON \ "avgOffsetsBehindLatest")
+ .values.toString
+
+ // Get expected values after type casting
+ val avgOffsetsBehindLatestExpected = BigDecimal(avgOffsetsBehindLatest)
+ .setScale(1, RoundingMode.HALF_UP).toDouble
+
+ // This should fail if avgOffsetsBehindLatest contains E notation
+ avgOffsetsBehindLatestJSON should not include "E"
+
+    // Value in progress metrics should be equal to the Decimal conversion of the same
+ // Using epsilon to compare floating-point values
+ val epsilon = 1e-6
+    avgOffsetsBehindLatestJSON.toDouble shouldBe avgOffsetsBehindLatestExpected +- epsilon
+
+ // Validating that the pretty JSON of metrics reported is same as defined
+ progressPretty shouldBe
+ s"""
+ |{
+ | "id" : "${testProgress5.id.toString}",
+ | "runId" : "${testProgress5.runId.toString}",
+ | "name" : "KafkaMetricsTest",
+ | "timestamp" : "2025-09-23T06:00:00.000Z",
+ | "batchId" : 1,
+ | "batchDuration" : 100,
+ | "numInputRows" : 800000,
+ | "inputRowsPerSecond" : 78886.1,
+ | "processedRowsPerSecond" : 41622.0,
+ | "durationMs" : {
+ | "total" : 100
+ | },
+ | "stateOperators" : [ ],
+ | "sources" : [ {
Review Comment:
Is there a consistent way to trigger scientific notation in a Kafka data
source test? I understand this is the simplest way to test this, but we'd
like to make sure it works with the actual Kafka data source.
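One deterministic angle (a sketch of a general JVM fact, not Kafka-specific test code): the `java.lang.Double.toString` contract switches to exponent form once the magnitude reaches 1e7 (or drops below 1e-3), so a Kafka test that drives `avgOffsetsBehindLatest` past ten million offsets should reproduce the scientific-notation output reliably.

```scala
// Sketch: java.lang.Double.toString uses plain decimal form only for
// magnitudes in [1e-3, 1e7); outside that range it emits an exponent.
object SciNotationThreshold {
  def main(args: Array[String]): Unit = {
    println(9999999.0.toString) // prints "9999999.0" (plain decimal)
    println(1.0e7.toString)     // prints "1.0E7" (scientific notation)
    println(0.001.toString)     // prints "0.001" (plain decimal)
    println(1.0e-4.toString)    // prints "1.0E-4" (scientific notation)
  }
}
```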
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]