Thank you, Zakelly.
So versions before 2.0-preview just handled state I/O on the task thread? My gosh.




------------------ Original Message ------------------
From: "Zakelly Lan" <zakelly....@gmail.com>
Date: Tue, Nov 26, 2024 at 11:57 AM
To: "user" <user@flink.apache.org>
Subject: Re: How the Async Execution Model improved the throughput



Hi Enric,

The asynchronous state processing keeps the task thread from blocking on state I/O; instead, the thread can perform CPU work for other input records in the meantime. Additionally, state I/Os can run in parallel, reducing the total I/O time. Therefore, it is suitable for the following scenarios:

- Heavy I/O, which is often the case when the state is large.

- No hotspot key, allowing for greater parallelism in state access.

This approach increases throughput by overlapping CPU work with multiple in-flight state accesses, which yields better results at lower cost than simply increasing task parallelism. However, it cannot solve everything, particularly when the state is small or there is significant data skew.
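
To make the overlap concrete, here is a minimal, self-contained Java sketch of the idea. It is not Flink's actual API: the class name, the slowStateLookup helper, the 5 ms latency, and the 8-thread pool are all made up for illustration. It only shows why issuing many state lookups without blocking beats waiting on them one at a time, which is the effect the async execution model achieves inside the operator:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Plain-Java analogy (not Flink's API): the "task thread" stops blocking on
// each state lookup, so lookups for many records are in flight at once and
// their latencies overlap.
public class AsyncStateAnalogy {

    // Made-up stand-in for a remote/disk state backend read with ~5 ms latency.
    static int slowStateLookup(String key) {
        try {
            TimeUnit.MILLISECONDS.sleep(5);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return key.hashCode();
    }

    public static void main(String[] args) {
        List<String> records = List.of("a", "b", "c", "d", "e", "f", "g", "h");

        // Synchronous style: the single task thread waits ~5 ms per record,
        // so total time grows linearly with the number of records.
        long t0 = System.nanoTime();
        long syncSum = 0;
        for (String r : records) {
            syncSum += slowStateLookup(r); // thread is idle while waiting
        }
        System.out.printf("sync:  sum=%d, %.1f ms%n", syncSum, (System.nanoTime() - t0) / 1e6);

        // Asynchronous style: lookups are issued without blocking and run in
        // parallel on an I/O pool, so the waits overlap and total time stays
        // close to a single lookup's latency.
        ExecutorService ioPool = Executors.newFixedThreadPool(8);
        long t1 = System.nanoTime();
        List<CompletableFuture<Integer>> futures = new ArrayList<>();
        for (String r : records) {
            futures.add(CompletableFuture.supplyAsync(() -> slowStateLookup(r), ioPool));
        }
        long asyncSum = 0;
        for (CompletableFuture<Integer> f : futures) {
            asyncSum += f.join(); // results collected once all lookups complete
        }
        System.out.printf("async: sum=%d, %.1f ms%n", asyncSum, (System.nanoTime() - t1) / 1e6);
        ioPool.shutdown();
    }
}

The real async execution model is more involved than this toy loop: the task thread keeps processing other records while lookups are outstanding and ordering is still preserved per key, rather than gathering all results at the end as the sketch does.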


Best,
Zakelly

On Tue, Nov 26, 2024 at 11:12 AM Enric Ott <243816...@qq.com> wrote:

Hello, Community:
  I'm conducting experiments on flink-release-2.0-preview1, and I'm puzzled about how the Async Execution Model achieves a significant improvement in end-to-end throughput for streaming state processing. State access is relatively lightweight (in my personal opinion), so would async access really crack the nut and solve everything?
  Thanks.
