Hi!

This is not the expected behavior. Since you have set fetchSize to 20,
each parallel instance of the source holds only about 20 records in memory
at a time. How large is your heap? Does your job have any other operators
that consume a lot of heap memory?
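As a rough sanity check on the numbers in the thread (this arithmetic is my own illustration, not something Flink computes for you): with ~187,340,000 rows split across 50 partitions, each partition scans about 3.75 million rows, but with fetchSize=20 the driver only buffers 20 of them per round trip. A minimal sketch in plain Java, with no Flink dependency:

```java
public class PartitionMath {
    public static void main(String[] args) {
        long totalRows = 187_340_000L; // approximate table size from the thread
        int partitions = 50;           // configured number of JDBC partitions
        int fetchSize = 20;            // configured JDBC fetch size

        // Rows each parallel source instance must scan in total.
        long rowsPerPartition = totalRows / partitions;

        // Round trips to the database per partition (ceiling division),
        // each holding only `fetchSize` rows in memory at once.
        long fetchesPerPartition = (rowsPerPartition + fetchSize - 1) / fetchSize;

        System.out.println(rowsPerPartition);    // 3746800
        System.out.println(fetchesPerPartition); // 187340
    }
}
```

So the source buffers only tens of rows per subtask at any moment, which is why the fetch size alone is an unlikely cause of the OOM; the heap pressure is more plausibly coming from elsewhere in the job.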

Qihua Yang <yang...@gmail.com> 于2022年1月19日周三 15:27写道:

> Here are the errors:
> Exception: java.lang.OutOfMemoryError thrown from the
> UncaughtExceptionHandler in thread "server-timer"
> Exception: java.lang.OutOfMemoryError thrown from the
> UncaughtExceptionHandler in thread "I/O dispatcher 16"
> Exception: java.lang.OutOfMemoryError thrown from the
> UncaughtExceptionHandler in thread "HTTP-Dispatcher"
> Exception: java.lang.OutOfMemoryError thrown from the
> UncaughtExceptionHandler in thread "I/O dispatcher 11"
> Exception: java.lang.OutOfMemoryError thrown from the
> UncaughtExceptionHandler in thread "I/O dispatcher 9"
>
> On Tue, Jan 18, 2022 at 11:25 PM Qihua Yang <yang...@gmail.com> wrote:
>
>> Hi,
>>
>> I have a Flink cluster (50 hosts; each host runs a task manager).
>> I am using Flink JDBC to consume data from a database. The table is
>> pretty large, around 187,340,000 rows. I configured the JDBC number of
>> partitions to 50 and fetchSize to 20. The Flink application has 50 task
>> managers. Does anyone know why I got an OutOfMemoryError? How should I
>> configure it?
>>
>> Thanks,
>> Qihua
>>
>>