ad1happy2go commented on issue #9976:
URL: https://github.com/apache/hudi/issues/9976#issuecomment-1792587002

   @darlatrade As I see it, the time is being spent in the "Doing partition and 
writing data" stage. That probably means your incremental batch is touching a 
lot of file groups, so Hudi has to rewrite a lot of Parquet files, since this 
is a COW table. Can you check on the Spark UI how much data was written by 
this stage?
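
   One way to quantify the write amplification outside the Spark UI is to look 
at the write stats in the table's commit metadata under `.hoodie/`. Below is a 
minimal sketch that sums those stats from a commit JSON; the field names 
(`partitionToWriteStats`, `fileId`, `numWrites`, `numUpdateWrites`, 
`totalWriteBytes`) follow the `HoodieCommitMetadata` layout, but please verify 
them against your Hudi version, and the sample payload here is made up for 
illustration:

```python
import json

# Illustrative payload shaped like the HoodieCommitMetadata JSON inside a
# .commit file under .hoodie/ (values are made up for this example).
commit_json = """
{
  "partitionToWriteStats": {
    "2023/11/01": [
      {"fileId": "f1", "numWrites": 120000, "numUpdateWrites": 300, "totalWriteBytes": 104857600},
      {"fileId": "f2", "numWrites": 80000,  "numUpdateWrites": 150, "totalWriteBytes": 52428800}
    ]
  }
}
"""

meta = json.loads(commit_json)
stats = [s for part in meta["partitionToWriteStats"].values() for s in part]

touched_file_groups = len({s["fileId"] for s in stats})
total_bytes = sum(s["totalWriteBytes"] for s in stats)
update_writes = sum(s["numUpdateWrites"] for s in stats)
total_writes = sum(s["numWrites"] for s in stats)

# On a COW table, a small numUpdateWrites relative to numWrites means whole
# Parquet files were rewritten to change a few records: write amplification.
print(touched_file_groups, total_bytes, update_writes, total_writes)
```

   If you have Hudi's Spark SQL procedures or the hudi-cli available, the same 
numbers should also be visible via the commit listings there (e.g. `commits show` 
in hudi-cli), which may be easier than parsing the JSON by hand.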


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
