Thank you, Zakelly. So versions before 2.0-preview1 just handle state I/O synchronously on the task thread? My gosh.
------------------ Original Message ------------------
From: "Zakelly Lan" <zakelly....@gmail.com>
Date: Tuesday, November 26, 2024, 11:57
To: "user" <user@flink.apache.org>
Subject: Re: How the Async Execution Model improved the throughput

Hi Enric,

The asynchronous state processing prevents the task thread from blocking on state I/O and instead lets it perform CPU work for other input records in the meantime. Additionally, state I/Os can run in parallel, reducing the total I/O time. It is therefore suited to the following scenarios:

- Heavy I/O, which is often the case when the state is large.
- No hotspot key, allowing for greater parallelism in state access.

This approach increases throughput by overlapping CPU work with multiple concurrent state accesses, with better results and lower cost than simply increasing task parallelism. However, it is important to note that it cannot solve everything, particularly when the state is small or there is significant data skew.

Best,
Zakelly

On Tue, Nov 26, 2024 at 11:12 AM Enric Ott <243816...@qq.com> wrote:

Hello, Community:
I'm conducting experiments on flink-release-2.0-preview1, and I'm puzzled about how the Async Execution Model achieves a significant improvement in end-to-end throughput for streaming state processing. State access is relatively lightweight (in my personal opinion); would async access alone really crack the nut and solve everything?
Thanks.
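
For anyone following along, below is a minimal, self-contained sketch in plain Java of the effect Zakelly describes: blocking the task thread on every state read versus issuing reads asynchronously and doing the per-record CPU work in the completion callback. This is NOT the Flink runtime or the State API V2; the lookupState() latency, pool size, and helper names are invented purely for illustration.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Toy illustration (not Flink code): why overlapping state I/O with CPU work
// raises throughput when I/O is the bottleneck.
public class AsyncStateSketch {

    // Hypothetical state read: stands in for a point lookup against a
    // disk-based state backend, blocking the caller for ~2 ms.
    static String lookupState(String key) {
        try { Thread.sleep(2); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return "state-of-" + key;
    }

    // Stand-in for the user function's per-record CPU work.
    static void process(String record, String state) {
        // no-op for the sketch
    }

    public static void main(String[] args) {
        List<String> records = new ArrayList<>();
        for (int i = 0; i < 200; i++) records.add("key-" + (i % 50));

        // Synchronous model: the task thread blocks on every state read,
        // so the total time is roughly (number of records) * (I/O latency).
        long t0 = System.nanoTime();
        for (String r : records) {
            String s = lookupState(r);   // blocking I/O on the task thread
            process(r, s);               // CPU work only after the read returns
        }
        System.out.printf("sync:  %d ms%n", (System.nanoTime() - t0) / 1_000_000);

        // Asynchronous model: reads are handed to an I/O pool and run in
        // parallel; the "task thread" keeps issuing requests and the
        // per-record CPU work happens in the completion callback.
        ExecutorService ioPool = Executors.newFixedThreadPool(16);
        long t1 = System.nanoTime();
        List<CompletableFuture<Void>> inFlight = new ArrayList<>();
        for (String r : records) {
            inFlight.add(CompletableFuture
                    .supplyAsync(() -> lookupState(r), ioPool) // issue read, don't wait
                    .thenAccept(s -> process(r, s)));          // CPU work on completion
        }
        CompletableFuture.allOf(inFlight.toArray(new CompletableFuture[0])).join();
        System.out.printf("async: %d ms%n", (System.nanoTime() - t1) / 1_000_000);
        ioPool.shutdown();
    }
}

With these made-up numbers the synchronous loop takes on the order of 200 * 2 ms, while the asynchronous version finishes in a fraction of that because up to 16 reads are in flight at once. The real async execution model in Flink 2.0 also has to preserve per-key ordering and checkpoint consistency, which this sketch deliberately ignores, and as Zakelly notes the benefit shrinks when state reads are cheap or a hot key serializes the accesses.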