vinothchandar opened a new issue #3337:
URL: https://github.com/apache/hudi/issues/3337


   **Describe the problem you faced**
   
   Hi, everyone! We ingest data with options:
   ```
   hoodie.datasource.write.keygenerator.class=org.apache.hudi.keygen.CustomKeyGenerator
   hoodie.deltastreamer.keygen.timebased.timestamp.type=DATE_STRING
   hoodie.deltastreamer.keygen.timebased.output.dateformat=yyyy/MM
   hoodie.deltastreamer.keygen.timebased.input.dateformat=yyyy-MM-dd'T'HH:mm:ssZ,yyyy-MM-dd'T'HH:mm:ss.SSSZ
   hoodie.deltastreamer.keygen.timebased.input.dateformat.list.delimiter.regex=
   hoodie.deltastreamer.keygen.timebased.input.timezone='
   hoodie.datasource.write.partitionpath.field=time:TIMESTAMP
   ```
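
   For context, here is a rough sketch of what we understand the `DATE_STRING` key generator to be doing with the config above — try each configured input format in turn, then render the matched timestamp with the output format. This is an illustrative Python model, not Hudi's actual Java implementation, and the `partition_path` helper is hypothetical:

   ```python
   from datetime import datetime

   # Input formats from the config, translated to strptime codes
   # (yyyy-MM-dd'T'HH:mm:ssZ and the .SSS millisecond variant).
   INPUT_FORMATS = ["%Y-%m-%dT%H:%M:%S%z", "%Y-%m-%dT%H:%M:%S.%f%z"]
   # Output format yyyy/MM -> %Y/%m
   OUTPUT_FORMAT = "%Y/%m"

   def partition_path(value: str) -> str:
       """Hypothetical model: first matching input format wins."""
       for fmt in INPUT_FORMATS:
           try:
               return datetime.strptime(value, fmt).strftime(OUTPUT_FORMAT)
           except ValueError:
               continue
       raise ValueError(f"no configured input format matched {value!r}")

   print(partition_path("2021-05-16T21:36:39Z"))  # 2021/05
   ```

   So for `time = 2021-05-16T21:36:39Z` the written partition path is `2021/05`, which matches what we see in the parquet files.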
   
   Field `time` is in the format `2021-05-16T21:36:39Z`. For some tables we want partitions by `yyyy/MM`, because they are small and there is no need for deep partitioning. But we have a problem with `run_sync_tool.sh`. What we tried:
   1. `--partitioned-by time` — obviously didn't help.
   2. `--partition-value-extractor org.apache.hudi.hive.MultiPartKeysValueExtractor --partitioned-by _hoodie_partition_path` — didn't help much either: we get the error shown in the screenshot (in the parquet file, `_hoodie_partition_path=2021/05`).
   Any ideas how to fix this?
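   
   Our guess at why attempt 2 fails, sketched below as a simplified Python model (not Hudi's actual Java code): `MultiPartKeysValueExtractor` splits the storage partition path on `/` into one value per declared Hive partition column, so a path like `2021/05` yields two values while `--partitioned-by _hoodie_partition_path` declares only one column:

   ```python
   def extract_partition_values(partition_path: str) -> list:
       """Hypothetical model of MultiPartKeysValueExtractor:
       one extracted value per path segment."""
       return partition_path.split("/")

   declared_columns = ["_hoodie_partition_path"]  # what we passed to the sync tool
   values = extract_partition_values("2021/05")
   print(values)  # ['2021', '05'] -> 2 values vs 1 declared column
   assert len(values) != len(declared_columns)
   ```

   If that reading is right, the column count passed via `--partitioned-by` has to match the number of path segments the extractor produces.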
   
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   
   1.
   2.
   3.
   4.
   
   **Expected behavior**
   
   A clear and concise description of what you expected to happen.
   
   **Environment Description**
   
   * Hudi version :
   
   * Spark version :
   
   * Hive version :
   
   * Hadoop version :
   
   * Storage (HDFS/S3/GCS..) :
   
   * Running on Docker? (yes/no) :
   
   
   **Additional context**
   
   Add any other context about the problem here.
   
   **Stacktrace**
   
   ```Add the stacktrace of the error.```
   
   

