Hi Enric,

> It seems that versions before 2.0-preview just handle state I/O on the
> user thread?


That's true. It's a single-thread model.
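
To illustrate, here is a rough, self-contained plain-Java sketch (not Flink
code) of that older pattern: the single task thread performs every state
read and write synchronously, so it sits idle for the full I/O latency of
each record.

import java.util.HashMap;
import java.util.Map;

// Rough simulation of the pre-2.0 single-thread model: the task thread
// performs state I/O inline, so each record costs its CPU time plus the
// full state I/O latency, back to back.
public class SyncModelSketch {

    // stand-in for a state backend; the sleeps simulate disk/remote I/O
    static final Map<String, Long> store = new HashMap<>();

    static long readState(String key) throws InterruptedException {
        Thread.sleep(5);                      // simulated state read latency
        return store.getOrDefault(key, 0L);
    }

    static void writeState(String key, long value) throws InterruptedException {
        Thread.sleep(5);                      // simulated state write latency
        store.put(key, value);
    }

    public static void main(String[] args) throws InterruptedException {
        String[] records = {"a", "b", "a", "c", "b", "a"};
        long start = System.nanoTime();
        for (String key : records) {          // the single task thread does everything
            long count = readState(key);      // blocks here for every record
            writeState(key, count + 1);       // and blocks again here
        }
        System.out.printf("sync: %d records in %.1f ms%n",
                records.length, (System.nanoTime() - start) / 1e6);
    }
}

With six records and roughly 5 ms per simulated state access, the loop above
spends about 60 ms doing nothing but waiting on I/O.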


Best,
Zakelly

On Tue, Nov 26, 2024 at 3:42 PM Enric Ott <243816...@qq.com> wrote:

> Thank you, Zakelly.
> It seems that versions before 2.0-preview just handle state I/O on the
> user thread? My gosh.
>
>
> ------------------ Original Message ------------------
> *From:* "Zakelly Lan" <zakelly....@gmail.com>;
> *Sent:* Tuesday, November 26, 2024, 11:57 AM
> *To:* "user" <user@flink.apache.org>;
> *Subject:* Re: How the Async Execution Model improved the throughput
>
> Hi Enric,
>
> The asynchronous state processing prevents the task thread from blocking
> on state I/O and instead allows it to perform CPU work for another input
> record in the meantime. Additionally, state I/Os can run in parallel,
> reducing the total I/O time. Therefore, it is suitable for the following
> scenarios:
>
>    1. Heavy I/O, which is often the case when the state is large.
>    2. No hotspot key, allowing for greater parallelism in state access.
>
> This approach increases throughput by overlapping CPU work with multiple
> in-flight state accesses, and it yields better results at lower cost than
> simply increasing task parallelism. However, it is important to note that
> it cannot solve everything, particularly when the state is small or there
> is significant data skew.
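>
> To make the overlap concrete, here is a rough, self-contained plain-Java
> sketch of the pattern using CompletableFuture (this is only an illustration
> of the execution model, not the actual Flink 2.0 state API): state accesses
> are issued as futures on an I/O pool, the task thread moves on to the next
> record immediately, and I/O for independent keys runs in parallel.
>
> import java.util.Map;
> import java.util.concurrent.CompletableFuture;
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
>
> // Rough simulation of asynchronous state processing: reads and writes are
> // futures running on an I/O pool, so the loop that issues them never blocks
> // and I/O for different keys overlaps. The real design additionally keeps
> // records of the same key in order, which this toy sidesteps by using
> // distinct keys only (the "no hotspot key" case above).
> public class AsyncModelSketch {
>
>     static final Map<String, Long> store = new ConcurrentHashMap<>();
>     static final ExecutorService stateIoPool = Executors.newFixedThreadPool(8);
>
>     static CompletableFuture<Long> asyncRead(String key) {
>         return CompletableFuture.supplyAsync(() -> {
>             sleep(5);                              // simulated state read latency
>             return store.getOrDefault(key, 0L);
>         }, stateIoPool);
>     }
>
>     static CompletableFuture<Void> asyncWrite(String key, long value) {
>         return CompletableFuture.runAsync(() -> {
>             sleep(5);                              // simulated state write latency
>             store.put(key, value);
>         }, stateIoPool);
>     }
>
>     static void sleep(long ms) {
>         try {
>             Thread.sleep(ms);
>         } catch (InterruptedException e) {
>             Thread.currentThread().interrupt();
>         }
>     }
>
>     public static void main(String[] args) {
>         String[] records = {"a", "b", "c", "d", "e", "f"};   // no hotspot key
>         long start = System.nanoTime();
>         CompletableFuture<?>[] inFlight = new CompletableFuture<?>[records.length];
>         for (int i = 0; i < records.length; i++) {
>             String key = records[i];
>             // issue the read, chain the update, and move on to the next
>             // record without waiting for this one's I/O to finish
>             inFlight[i] = asyncRead(key).thenCompose(count -> asyncWrite(key, count + 1));
>         }
>         CompletableFuture.allOf(inFlight).join(); // drain the in-flight work
>         System.out.printf("async: %d records in %.1f ms%n",
>                 records.length, (System.nanoTime() - start) / 1e6);
>         stateIoPool.shutdown();
>     }
> }
>
> With the same 5 ms per simulated access as the synchronous loop, the six
> records finish in roughly two I/O round trips (the reads overlap with each
> other, and so do the chained writes) instead of twelve sequential ones.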
>
>
> Best,
> Zakelly
>
> On Tue, Nov 26, 2024 at 11:12 AM Enric Ott <243816...@qq.com> wrote:
>
>> Hello, Community:
>>   I'm conducting experiments on flink-release-2.0-preview1, and I am
>> puzzled by how the *Async Execution Model* achieved a significant
>> improvement in end-to-end throughput in streaming state processing
>> scenarios. State access is relatively lightweight (in my personal
>> opinion), so would async access alone crack the nut and solve everything?
>>   Thanks.
>>
>
