maheshguptags commented on issue #12988:
URL: https://github.com/apache/hudi/issues/12988#issuecomment-2739051363

   I haven't enabled any extra config; I have just used the DDL below with the required parameters.
   ```
   CREATE TABLE IF NOT EXISTS hudi_temp (
     x STRING,
     _date STRING,
     _count BIGINT,
     type STRING,
     update_date TIMESTAMP(3)
   ) PARTITIONED BY (`x`)
   WITH (
     'connector' = 'hudi',
     'path' = '${bucket_path_daily}',
     'table.type' = 'COPY_ON_WRITE',
     'write.operation' = 'delete',
     'hoodie.datasource.write.recordkey.field' = 'x,_date',
     'hoodie.datasource.write.partitionpath.field' = 'x',
     'hoodie.datasource.write.precombine.field' = 'update_date',
     'hoodie.write.concurrency.mode' = 'optimistic_concurrency_control',
     'hoodie.write.lock.provider' = 'org.apache.hudi.client.transaction.lock.InProcessLockProvider',
     'hoodie.cleaner.policy.failed.writes' = 'LAZY'
   );
   ```
   This is what I am using for the delete query.
   
   Can you please explain this as well?
   
   > One more question about the ingestion job: Do we need to add the below 
config to the ingestion table as well? (I’m referring to both the ingestion 
Flink + Hudi stream and the deletion Flink + Hudi batch stream DDLs.)
   
   
   
   

