mccheah commented on a change in pull request #7: Allow custom hadoop properties to be loaded in the Spark data source
URL: https://github.com/apache/incubator-iceberg/pull/7#discussion_r236417449
 
 

 ##########
 File path: spark/src/main/java/com/netflix/iceberg/spark/source/IcebergSource.java
 ##########
 @@ -109,10 +113,19 @@ protected SparkSession lazySparkSession() {
     return lazySpark;
   }
 
-  protected Configuration lazyConf() {
+  protected Configuration lazyBaseConf() {
     if (lazyConf == null) {
       this.lazyConf = lazySparkSession().sparkContext().hadoopConfiguration();
     }
     return lazyConf;
   }
+
 +  protected Configuration mergeIcebergHadoopConfs(Configuration baseConf, Map<String, String> options) {
+    Configuration resolvedConf = new Configuration(baseConf);
+    options.keySet().stream()
+        .filter(key -> key.startsWith("iceberg.hadoop"))
 
 Review comment:
   You don't want to set the Hadoop properties directly in `DataSourceOptions`, I think. Setting `fs.myscheme.impl` in `DataSourceOptions`, for example, doesn't make it clear that the value is meant to be applied to the Hadoop configuration. Prefixing makes the intent of these options explicit. But when we prefix, we have to strip the `iceberg.hadoop` prefix before passing the key down to the `Configuration` object.
   
   This is a precedent set by Spark itself as well.
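   
   For illustration, a minimal sketch of the stripping this comment asks for (the trailing dot in the prefix and the exact method body are assumptions about how the PR might be revised, not its final code):
   
```java
// Sketch only; assumes: import org.apache.hadoop.conf.Configuration;
//                       import java.util.Map;
protected Configuration mergeIcebergHadoopConfs(Configuration baseConf,
                                                Map<String, String> options) {
  Configuration resolvedConf = new Configuration(baseConf);
  options.keySet().stream()
      .filter(key -> key.startsWith("iceberg.hadoop."))
      // Strip the prefix so "iceberg.hadoop.fs.myscheme.impl" is applied
      // to the Configuration as "fs.myscheme.impl".
      .forEach(key -> resolvedConf.set(
          key.substring("iceberg.hadoop.".length()), options.get(key)));
  return resolvedConf;
}
```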

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
