Hi!
Sorry for the late reply. Which Flink version are you using? For the current
Flink master there is no JdbcTableSource anymore.
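On recent versions the JDBC table source is declared through the SQL connector instead. Roughly like this (an untested sketch; the URL, schema, table name and partition bounds below are only placeholders):

import org.apache.flink.table.api.EnvironmentSettings
import org.apache.flink.table.api.TableEnvironment

fun main() {
    val tEnv = TableEnvironment.create(
        EnvironmentSettings.newInstance().inStreamingMode().build()
    )

    // Placeholder schema and connection details; the 'scan.*' options come
    // from the JDBC SQL connector documentation.
    tEnv.executeSql(
        """
        CREATE TABLE my_jdbc_table (
            id BIGINT,
            name STRING
        ) WITH (
            'connector' = 'jdbc',
            'url' = 'jdbc:postgresql://db-host:5432/mydb',
            'table-name' = 'my_table',
            'scan.fetch-size' = '20',
            'scan.partition.column' = 'id',
            'scan.partition.num' = '4',
            'scan.partition.lower-bound' = '0',
            'scan.partition.upper-bound' = '1000000'
        )
        """.trimIndent()
    )

    tEnv.executeSql("SELECT * FROM my_jdbc_table").print()
}

The 'scan.partition.*' options let parallel subtasks read disjoint ranges instead of one big unbounded query, and 'scan.fetch-size' is only passed to the JDBC driver as a fetch hint.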
Qihua Yang wrote on Wed, Jan 19, 2022 at 16:00:
Should I change the query? Something like the below, to add a limit? If there is
no limit, does that mean Flink will read the whole huge table into memory and
then fetch and return 20 records each time?
val query = String.format("SELECT * FROM %s limit 1000", tableName)
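(For reference, one way to avoid scanning the whole table in a single query is to split the read into bounded ranges with JdbcInputFormat and a parameters provider. A rough, untested sketch; the driver, URL, table, column names and bounds are placeholders, and the exact builder methods may differ between connector versions:)

import org.apache.flink.api.common.typeinfo.BasicTypeInfo
import org.apache.flink.api.java.typeutils.RowTypeInfo
import org.apache.flink.connector.jdbc.JdbcInputFormat
import org.apache.flink.connector.jdbc.split.JdbcNumericBetweenParametersProvider
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment

fun main() {
    val env = StreamExecutionEnvironment.getExecutionEnvironment()

    val inputFormat = JdbcInputFormat.buildJdbcInputFormat()
        .setDrivername("org.postgresql.Driver")
        .setDBUrl("jdbc:postgresql://db-host:5432/mydb")
        // Each split binds one (?, ?) id range instead of running an unbounded SELECT.
        .setQuery("SELECT id, name FROM my_table WHERE id BETWEEN ? AND ?")
        .setParametersProvider(
            JdbcNumericBetweenParametersProvider(0L, 1_000_000L).ofBatchSize(1000L)
        )
        // fetchSize is only a hint to the driver for how many rows to buffer per round trip.
        .setFetchSize(20)
        .setRowTypeInfo(RowTypeInfo(BasicTypeInfo.LONG_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO))
        .finish()

    env.createInput(inputFormat).print()
    env.execute("jdbc-scan-sketch")
}

With a parameters provider, no subtask ever issues an unbounded SELECT, so the amount of data held at once stays bounded regardless of table size.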
On Tue, Jan 18, 2022 at 11:56 PM, Qihua Yang wrote:
Hi Caizhi,
Thank you for your reply. The heap size is 512m. Fetching from the DB table
is the only costly operation. After fetching from the DB, I simply push the
records to a Kafka topic, so that should not be the bottleneck.
Here is the JDBC configuration. Is that the correct config?
val query = String.format("SELECT * FROM %s", tableName)
Hi!
This is not the desired behavior. As you have set fetchSize to 20, only 20
records should be fetched at a time in each parallelism of the source. How large
is your heap size? Does your job have any other operations which consume a lot
of heap memory?
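(A general JDBC note that may be relevant here: fetchSize is only a hint, and some drivers ignore it. The PostgreSQL driver only streams rows when auto-commit is disabled, and MySQL Connector/J buffers the whole result set client-side unless useCursorFetch=true or fetchSize = Integer.MIN_VALUE is used. If the hint is ignored, the entire table ends up in client memory, which can easily exhaust a 512m heap. A minimal plain-JDBC sketch of the PostgreSQL case, untested and with placeholder URL and query:)

import java.sql.DriverManager

fun main() {
    DriverManager.getConnection("jdbc:postgresql://db-host:5432/mydb", "user", "pass").use { conn ->
        // Required for cursor-based fetching on PostgreSQL; otherwise fetchSize is ignored.
        conn.autoCommit = false
        conn.createStatement().use { stmt ->
            // Now honoured: roughly 20 rows are buffered per round trip.
            stmt.fetchSize = 20
            stmt.executeQuery("SELECT * FROM my_table").use { rs ->
                while (rs.next()) {
                    // Process one row at a time instead of materialising everything.
                }
            }
        }
    }
}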
Qihua Yang wrote on Wed, Jan 19, 2022 at 15:27:
Here are the errors:
Exception: java.lang.OutOfMemoryError thrown from the
UncaughtExceptionHandler in thread "server-timer"
Exception: java.lang.OutOfMemoryError thrown from the
UncaughtExceptionHandler in thread "I/O dispatcher 16"
Exception: java.lang.OutOfMemoryError thrown from the
UncaughtExceptionHandler