nsivabalan commented on issue #5569:
URL: https://github.com/apache/hudi/issues/5569#issuecomment-1126344542

   I was not able to reproduce this, at least with the local FS.
   
   ```
   
   import org.apache.hudi.QuickstartUtils._
   import scala.collection.JavaConversions._
   import org.apache.spark.sql.SaveMode._
   import org.apache.hudi.DataSourceReadOptions._
   import org.apache.hudi.DataSourceWriteOptions._
   import org.apache.hudi.config.HoodieWriteConfig._
   
   val tableName = "hudi_trips_cow"
   val basePath = "file:///tmp/hudi_trips_cow"
   val dataGen = new DataGenerator
   
   // spark-shell
   val inserts = convertToStringList(dataGen.generateInserts(10))
   val df = spark.read.json(spark.sparkContext.parallelize(inserts, 2))
   
   import org.apache.spark.sql.functions.lit
   
   df.withColumn("ppath", lit("http://purl.obolibrary.org/obo/uberon.owl")).write.format("hudi").
     options(getQuickstartWriteConfigs).
     option(PRECOMBINE_FIELD_OPT_KEY, "ts").
     option(RECORDKEY_FIELD_OPT_KEY, "uuid").
     option(PARTITIONPATH_FIELD_OPT_KEY, "ppath").
     option(TABLE_NAME, tableName).
     option(URL_ENCODE_PARTITIONING_OPT_KEY,"true").
     mode(Append).
     save(basePath)
   
   ```
   
   I tried the above script with 0.8.0, 0.10.0, and 0.11.0, and the behavior is the same across all of them. In fact, the same file group was updated for every commit I tried with the newer versions.
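   
   For context, enabling `URL_ENCODE_PARTITIONING_OPT_KEY` escapes the special characters in the partition value (`:` and `/` here) so the whole URL becomes a single valid directory name instead of a nested path. As a rough sketch, standard URL-encoding gives a feel for the resulting partition directory (Hudi's exact escaping rules may differ slightly, so treat this as an approximation):
   
   ```scala
   // Illustration only: approximate the partition directory name Hudi would
   // produce for the "ppath" value when URL encoding of partition paths is on.
   import java.net.URLEncoder
   
   val raw = "http://purl.obolibrary.org/obo/uberon.owl"
   // Percent-encode reserved characters; '/' and ':' become %2F and %3A,
   // so the value no longer splits into multiple directory levels.
   val encoded = URLEncoder.encode(raw, "UTF-8")
   println(encoded)  // http%3A%2F%2Fpurl.obolibrary.org%2Fobo%2Fuberon.owl
   ```
   
   With this in place, all records sharing the same `ppath` land in one partition directory, which is consistent with the same file group being updated on every commit.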
   
   

