parthchandra commented on code in PR #2447:
URL: https://github.com/apache/datafusion-comet/pull/2447#discussion_r2392911698


##########
spark/src/main/scala/org/apache/spark/sql/comet/operators.scala:
##########
@@ -201,6 +207,30 @@ abstract class CometNativeExec extends CometExec {
         // TODO: support native metrics for all operators.
         val nativeMetrics = CometMetricNode.fromCometPlan(this)
 
+        // For each relation in a CometNativeScan generate a hadoopConf,
+        // for each file path in a relation associate with hadoopConf
+        val cometNativeScans: Seq[CometNativeScanExec] = this
+          .collectLeaves()
+          .filter(_.isInstanceOf[CometNativeScanExec])
+          .map(_.asInstanceOf[CometNativeScanExec])
+        val encryptedFilePaths = cometNativeScans.flatMap { scan =>
+          // This creates a hadoopConf that brings in any SQLConf "spark.hadoop.*" configs and
+          // per-relation configs since different tables might have different decryption
+          // properties.
+          val hadoopConf = scan.relation.sparkSession.sessionState
+            .newHadoopConfWithOptions(scan.relation.options)
+          val encryptionEnabled = CometParquetUtils.encryptionEnabled(hadoopConf)
+          if (encryptionEnabled) {
+            // hadoopConf isn't serializable, so we have to do a broadcasted config.
+            val broadcastedConf =
+              scan.relation.sparkSession.sparkContext
+            val broadcastedConf =
+              scan.relation.sparkSession.sparkContext
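
(The hunk above is truncated mid-statement. For readers following the thread, below is a minimal sketch of the usual Spark pattern for shipping a Hadoop conf to executors; the object name and helper are hypothetical, and the use of `SerializableConfiguration` is an assumption about how the broadcast is built, not necessarily the PR's exact code.)

```scala
// Hypothetical sketch, not the PR's code. Declared in the comet package so the
// private[sql] sessionState and Spark's SerializableConfiguration are accessible.
package org.apache.spark.sql.comet

import org.apache.hadoop.conf.Configuration

import org.apache.spark.broadcast.Broadcast
import org.apache.spark.sql.SparkSession
import org.apache.spark.util.SerializableConfiguration

object HadoopConfBroadcastSketch {
  /** Build a per-relation Hadoop conf on the driver and broadcast it once. */
  def broadcastHadoopConf(
      spark: SparkSession,
      relationOptions: Map[String, String]): Broadcast[SerializableConfiguration] = {
    // Merges session-level "spark.hadoop.*" settings with the per-relation options.
    val hadoopConf: Configuration =
      spark.sessionState.newHadoopConfWithOptions(relationOptions)
    // A Hadoop Configuration is not Serializable, so wrap it before broadcasting;
    // executors unwrap it via broadcast.value.value inside their task closures.
    spark.sparkContext.broadcast(new SerializableConfiguration(hadoopConf))
  }
}
```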

Review Comment:
   Why does this need to be broadcast? Won't each executor instance already have its own copy of `scan.relation.options`?
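
   To make the question concrete, the alternative would be to ship only the (already serializable) option map with the task and rebuild the `Configuration` on the executor, roughly as in the sketch below (names are illustrative):

   ```scala
   // Illustrative sketch of the executor-side alternative: a plain
   // Map[String, String] serializes with the task closure, and the Hadoop
   // Configuration is rebuilt from it on each executor.
   import org.apache.hadoop.conf.Configuration

   object ExecutorSideHadoopConf {
     def fromOptions(options: Map[String, String]): Configuration = {
       val conf = new Configuration() // start from the Hadoop defaults on the executor
       options.foreach { case (k, v) => conf.set(k, v) }
       conf
     }
   }
   ```

   Part of what the answer may turn on is that `newHadoopConfWithOptions` also folds in driver-side SQLConf and `spark.hadoop.*` entries, so rebuilding purely from `scan.relation.options` on the executor might not reproduce the same conf.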



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

