flashJd commented on PR #9544:
URL: https://github.com/apache/hudi/pull/9544#issuecomment-1694925193

   > When inserting bounded data with async compaction enabled in Flink, compaction execution is always terminated when the CompactOperator closes, due to the async execution in
   > 
   > https://github.com/apache/hudi/blob/281ef1a4a99e462b6b4f032b23f18f20a20510e5/hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/sink/compact/CompactOperator.java#L113-L119
   > 
   > This leaves an incomplete compaction with `compaction.request` and `compaction.inflight` files, and these two files will be rolled back on the next bounded data insert.
   > ### Change Logs
   > N/A
   > 
   > ### Impact
   > N/A
   > 
   > ### Risk level (write none, low medium or high below)
   > N/A
   > 
   > ### Documentation Update
   > N/A
   > 
   > * _The config description must be updated if new configs are added or the 
default value of the configs are changed_
   > * _Any new feature or user-facing change requires updating the Hudi 
website. Please create a Jira ticket, attach the
   >   ticket number here and follow the 
[instruction](https://hudi.apache.org/contribute/developer-setup#website) to 
make
   >   changes to the website._
   > 
   > ### Contributor's checklist
   > * [ ]  Read through [contributor's 
guide](https://hudi.apache.org/contribute/how-to-contribute)
   > * [ ]  Change Logs and Impact were stated clearly
   > * [ ]  Adequate tests were added if applicable
   > * [ ]  CI passed
   
   
   
   > For SQL users, in `HoodieTableSink`, we explicitly set async compaction to false when the source input is bounded:
   > 
   > ```java
   >       if (OptionsResolver.needsAsyncCompaction(conf)) {
   >         // use synchronous compaction for bounded source.
   >         if (context.isBounded()) {
   >           conf.setBoolean(FlinkOptions.COMPACTION_ASYNC_ENABLED, false);
   >         }
   >         return Pipelines.compact(conf, pipeline);
   >       }
   > ```
   > 
   > So does the problem exist only for DataStream users?
   The problem exists in both the SQL and DataStream scenarios: `context.isBounded()` is only true in batch mode, so the issue still occurs when bounded data is inserted in streaming mode.
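   As a possible interim workaround (a sketch, not something prescribed by this PR), a DataStream or streaming-mode user who knows the input is bounded could mirror what `HoodieTableSink` does in batch mode and force synchronous compaction up front, before building the pipeline:
   
   ```java
   // Sketch of a user-side workaround, assuming a streaming-mode job whose
   // input is known to be bounded. FlinkOptions.COMPACTION_ASYNC_ENABLED is
   // the same option HoodieTableSink flips in the snippet above; setting it
   // to false makes the compaction run synchronously, so it cannot be left
   // half-finished when the CompactOperator closes.
   Configuration conf = new Configuration();
   // ... other Hudi write options ...
   conf.setBoolean(FlinkOptions.COMPACTION_ASYNC_ENABLED, false);
   ```
   
   This only sidesteps the symptom for jobs where boundedness is known in advance; the fix in this PR is still needed for the general streaming case.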


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
