[ 
https://issues.apache.org/jira/browse/FLINK-18235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17687936#comment-17687936
 ] 

Piotr Nowojski commented on FLINK-18235:
----------------------------------------

Thanks for the explanation [~dian.fu]. I hadn't thought about this potential 
problem. Indeed, that would complicate things, but maybe not by a lot? 
Basically, the Python operator would always have to wait until it receives the 
flags/markers "started emitting results for record N" and "finished emitting 
results for record N", tracking whether it is in the middle of emitting results 
from a flat map. When a checkpoint is triggered while the results for a single 
input record are being emitted, we would have to wait for that batch of records 
to be fully processed/emitted.

Or am I missing something?
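
To make the idea above concrete, here is a minimal Java sketch of the 
marker-based tracking: the operator remembers whether it is between the 
"started emitting" and "finished emitting" markers for the current record and, 
on checkpoint, waits for the in-flight batch to complete. All class, method, 
and field names here are hypothetical and are not taken from the actual PyFlink 
operator code.

{code:java}
import java.util.concurrent.CompletableFuture;

// Hypothetical illustration of the marker-based tracking discussed above;
// not the actual PyFlink operator implementation.
public class MarkerTrackingPythonOperator {

    // True while the Python worker is between the "started emitting results
    // for record N" and "finished emitting results for record N" markers.
    private volatile boolean emittingResultsForCurrentRecord;

    // Completed once the "finished emitting" marker for the in-flight record arrives.
    private volatile CompletableFuture<Void> currentBatchFinished =
            CompletableFuture.completedFuture(null);

    /** Called when the Python worker signals it started emitting results for record N. */
    public void onStartedEmitting(long recordId) {
        emittingResultsForCurrentRecord = true;
        currentBatchFinished = new CompletableFuture<>();
    }

    /** Called when the Python worker signals it finished emitting results for record N. */
    public void onFinishedEmitting(long recordId) {
        emittingResultsForCurrentRecord = false;
        currentBatchFinished.complete(null);
    }

    /**
     * Checkpoint hook: if a flat-map style batch is currently being emitted,
     * wait until that batch is fully processed/emitted before snapshotting.
     */
    public CompletableFuture<Void> prepareSnapshot() {
        return emittingResultsForCurrentRecord
                ? currentBatchFinished
                : CompletableFuture.completedFuture(null);
    }
}
{code}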

> Improve the checkpoint strategy for Python UDF execution
> --------------------------------------------------------
>
>                 Key: FLINK-18235
>                 URL: https://issues.apache.org/jira/browse/FLINK-18235
>             Project: Flink
>          Issue Type: Improvement
>          Components: API / Python
>            Reporter: Dian Fu
>            Priority: Not a Priority
>              Labels: auto-deprioritized-major, stale-assigned
>
> Currently, when a checkpoint is triggered for the Python operator, all the 
> buffered data is flushed to the Python worker to be processed. This increases 
> the overall checkpoint time when many elements are buffered and the Python 
> UDF is slow. We should improve the checkpoint strategy to address this. One 
> way to implement this is to limit the amount of data buffered in the pipeline 
> between the Java and Python processes, similar to what 
> [FLIP-183|https://cwiki.apache.org/confluence/display/FLINK/FLIP-183%3A+Dynamic+buffer+size+adjustment]
>  does to control the amount of data buffered in the network. We could also 
> let users configure the checkpoint strategy if needed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
