Xtpacz commented on code in PR #52507:
URL: https://github.com/apache/spark/pull/52507#discussion_r2416535479


##########
core/src/main/scala/org/apache/spark/internal/config/package.scala:
##########
@@ -1240,6 +1240,13 @@ package object config {
     .booleanConf
     .createWithDefault(false)
 
+  private[spark] val FILES_RENAME_NUM_THREADS = ConfigBuilder("spark.files.rename.numThreads")

Review Comment:
   > can this be a dynamic session config in `SQLConf`?
   
   My understanding is that we should add a session-scoped SQLConf (e.g., `spark.sql.files.rename.numThreads`) and have SQL paths read it first, while retaining the global key (`spark.files.rename.numThreads`) as a fallback, since core (e.g., `HadoopMapReduceCommitProtocol`) cannot depend on SQL. Please let me know if I've misunderstood. Thank you!
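
   One possible shape for the SQL side, sketched with Spark's `ConfigBuilder.fallbackConf` (the key name, doc string, and `.version` below are illustrative, not the actual patch):

   ```scala
   // Sketch only: assumes the core entry FILES_RENAME_NUM_THREADS already
   // exists in org.apache.spark.internal.config (as added by this PR).
   import org.apache.spark.internal.config
   import org.apache.spark.sql.internal.SQLConf

   val FILES_RENAME_NUM_THREADS_SQL =
     SQLConf.buildConf("spark.sql.files.rename.numThreads")
       .doc("Number of threads used to rename committed files. Falls back " +
         "to spark.files.rename.numThreads when unset.")
       .version("4.1.0") // illustrative version
       .fallbackConf(config.FILES_RENAME_NUM_THREADS)
   ```

   With a fallback entry, SQL paths can read the session-scoped key and automatically pick up the global key when it is unset, while core paths such as `HadoopMapReduceCommitProtocol` keep reading the global entry directly.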



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

