xccui commented on issue #5553:
URL: https://github.com/apache/hudi/issues/5553#issuecomment-1138062379

   > Have fired a fix for flink here: #5660
   > 
   > https://issues.apache.org/jira/browse/HUDI-3782 and 
https://issues.apache.org/jira/browse/HUDI-4138 may cause this bug.
   > 
   > The `HoodieTable#getMetadataWriter` method is used by many async table services such as cleaning, compaction, and clustering. This method now tries to modify the table config every time it is called, regardless of whether the metadata table is enabled or disabled.
   > 
   > In general, we should never introduce side effects in the read code path of the hoodie table config, nor in the hoodie table metadata writer.
   > 
   > I'm not sure how to fix this on the Spark side; I have two possible fixes in mind:
   > 
   > 1. Make the table config concurrency-safe (not suggested, because it is too heavyweight for a config).
   > 2. Ensure the metadata cleaning happens only once for the whole job lifetime (still slightly risky because there may be multiple jobs, but with very small probability). This is the approach I would suggest.
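   
   The once-per-job guard described in option 2 could be sketched with a simple compare-and-set flag. This is a hypothetical illustration, not Hudi code: the class and method names are invented, and a static flag like this only protects callers within a single JVM, which is exactly why the quoted comment notes the residual risk when multiple jobs run concurrently.
   
   ```java
   import java.util.concurrent.atomic.AtomicBoolean;
   
   // Hypothetical guard: run the metadata-table cleanup at most once per
   // job lifetime, even when async services (clean/compact/cluster) race.
   class MetadataCleanupGuard {
       private static final AtomicBoolean CLEANED = new AtomicBoolean(false);
   
       // Returns true only for the single caller that actually performed
       // the cleanup; all other callers see the flag already set and skip it.
       static boolean cleanupIfNeeded(Runnable cleanup) {
           if (CLEANED.compareAndSet(false, true)) {
               cleanup.run();
               return true;
           }
           return false;
       }
   }
   ```
   
   The `compareAndSet` makes the check-then-act step atomic, so two table services calling this concurrently cannot both run the cleanup, unlike a plain `if (!flag)` check.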
   
   Hi @danny0405, we tested our job with this patch applied but still got the 
same exception.

