zecookiez commented on code in PR #49816:
URL: https://github.com/apache/spark/pull/49816#discussion_r1950049496


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
##########
@@ -2251,6 +2251,22 @@ object SQLConf {
       .booleanConf
       .createWithDefault(true)
 
+  val STATE_STORE_PARTITION_METRICS_REPORT_LIMIT =
+    buildConf("spark.sql.streaming.stateStore.numPartitionMetricsToReport")
+      .internal()
+      .doc(
+        "Maximum number of partition-level metrics to include in state store progress " +
+          "reporting. The default limit is 20% of the number of cores (with a minimum of 1 " +
+          "partition) and with a cap of 10. This limits the metrics to the N partitions with " +
+          "the smallest values to prevent the progress report from becoming too large."
+      )
+      .version("4.0.0")
+      .intConf
+      .checkValue(k => k >= 0, "Must be greater than or equal to 0")
+      .createWithDefault(
+        Math.min(10, Math.max(1, SHUFFLE_PARTITIONS.defaultValue.getOrElse(200) / 5)))

Review Comment:
   I've marked the config as optional and moved the default logic to [here](https://github.com/apache/spark/pull/49816/files#diff-13c5b65678b327277c68d17910ae93629801af00117a0e3da007afd95b6c6764R5749) to prevent manual overrides from getting ignored 🙏 
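   For reference, a minimal sketch of the default computation the config doc describes (20% of the available cores, floored at 1 and capped at 10). The names `defaultPartitionMetricsLimit` and `numCores` are illustrative, not the actual helpers in the PR:

```scala
// Hypothetical sketch: derive the default partition-metrics limit from the
// core count, per the config doc ("20% of cores, minimum 1, cap of 10").
object PartitionMetricsDefault {
  def defaultPartitionMetricsLimit(numCores: Int): Int =
    // floor at 1 so small clusters still report at least one partition,
    // cap at 10 so the progress report stays bounded on large clusters
    math.min(10, math.max(1, numCores / 5))
}
```

   With this shape, `numCores = 4` yields 1, `numCores = 16` yields 3, and anything above 50 cores saturates at 10, which matches the doc text rather than the original `Math.min(1, ...)` expression.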



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

