Yes. However, very high parallelism adds coordination overhead, so you may need 
to provision the JobManager with a decent spec (at least 8 CPU cores and 16 GB 
of memory, in my experience). You'll also want to make sure there are no 
external bottlenecks (e.g. reading/writing data to external storage).
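As a rough sketch, JobManager sizing along those lines could look like the 
following flink-conf.yaml fragment (the memory key is a standard Flink option; 
the CPU key applies to native Kubernetes deployments — on YARN or standalone 
the CPU allocation is controlled elsewhere, so treat this as an example, not a 
drop-in config):

```yaml
# Hypothetical sizing for a high-parallelism job; tune to your deployment.
jobmanager.memory.process.size: 16g
# Only effective on native Kubernetes deployments:
kubernetes.jobmanager.cpu: 8
```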

Best,
Zhanghao Chen
________________________________
From: Ganesh Walse <ganesh.wa...@gmail.com>
Sent: Friday, March 29, 2024 10:42
To: Zhanghao Chen <zhanghao.c...@outlook.com>
Cc: user@flink.apache.org <user@flink.apache.org>
Subject: Re: One query just for curiosity

You mean to say we can process 32767 records in parallel? And if that is the 
case, do we need to do anything to enable it?

On Fri, 29 Mar 2024 at 8:08 AM, Zhanghao Chen 
<zhanghao.c...@outlook.com> wrote:
Flink supports a maximum parallelism of 32767. And if your record processing 
is mostly IO-bound, you can further boost throughput via Async I/O [1].

[1] 
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/operators/asyncio/
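To see why Async I/O helps when processing is IO-bound: overlapping the waits 
lets many external calls be in flight at once, so total time approaches one 
round trip instead of one per record. Flink's actual API is Java 
(AsyncDataStream and AsyncFunction, see [1]); the following is only a 
conceptual sketch in Python, with a 50 ms sleep standing in for a hypothetical 
external lookup:

```python
import asyncio
import time

async def fetch(record):
    # Simulated external lookup: the 50 ms sleep stands in for a
    # network round trip (hypothetical latency, for illustration).
    await asyncio.sleep(0.05)
    return record * 2

async def process_async(records):
    # Issue all lookups concurrently, analogous in spirit to
    # Flink's unordered Async I/O mode.
    return await asyncio.gather(*(fetch(r) for r in records))

records = list(range(20))
start = time.perf_counter()
results = asyncio.run(process_async(records))
elapsed = time.perf_counter() - start
# 20 overlapped 50 ms waits finish in roughly one wait,
# not 20 x 50 ms = 1 s as sequential calls would.
print(len(results), round(elapsed, 2))
```

Sequentially, the same 20 lookups would take about a second; overlapped, they 
finish in a fraction of that, which is the effect Async I/O exploits.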

Best,
Zhanghao Chen
________________________________
From: Ganesh Walse <ganesh.wa...@gmail.com>
Sent: Friday, March 29, 2024 4:48
To: user@flink.apache.org <user@flink.apache.org>
Subject: One query just for curiosity

Hi Team,
If one record gets processed in 1 second in Flink, then what is the best time 
one could achieve to process 1000 records using maximum parallelism?
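As a back-of-envelope answer to this question (an idealized lower bound that 
ignores coordination cost and assumes records are split perfectly evenly 
across subtasks), the best-case time is simply the per-record latency times 
the number of records each parallel subtask must handle:

```python
import math

def best_case_seconds(n_records, per_record_s, parallelism):
    """Idealized lower bound: even split across subtasks,
    no coordination or IO overhead."""
    return math.ceil(n_records / parallelism) * per_record_s

# 1000 records at 1 s each, for a few parallelism levels:
for p in (1, 100, 1000):
    print(p, best_case_seconds(1000, 1.0, p))
# -> 1 1000.0
#    100 10.0
#    1000 1.0
```

So with parallelism >= 1000 the theoretical best is about 1 second; real jobs 
will be slower due to the overheads discussed above.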
