[ 
https://issues.apache.org/jira/browse/FLINK-37508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yanquan Lv updated FLINK-37508:
-------------------------------
    Fix Version/s: cdc-3.5.0
                       (was: cdc-3.4.0)

> Postgres CDC Jdbc query should use debezium.snapshot.fetch.size rather than  
> debezium.query.fetch.size.
> -------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-37508
>                 URL: https://issues.apache.org/jira/browse/FLINK-37508
>             Project: Flink
>          Issue Type: Improvement
>          Components: Flink CDC
>    Affects Versions: cdc-3.3.0
>            Reporter: Hongshun Wang
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: cdc-3.5.0
>
>
> In the Debezium Postgres connector, [{{snapshot.fetch.size}}|https://debezium.io/documentation//reference/2.7/connectors/postgresql.html#postgresql-property-snapshot-fetch-size] specifies the maximum number of rows in a batch (default value is 10240).
> However, Postgres CDC currently uses {{query.fetch.size}} (which is not a parameter of the Debezium Postgres connector; its default value is 0, meaning the query is read without a fetch size). If the chunk size is huge, this will cause an OOM directly:
> {code:java}
> PostgresQueryUtils.readTableSplitDataStatement(
>         jdbcConnection,
>         selectSql,
>         snapshotSplit.getSplitStart() == null,
>         snapshotSplit.getSplitEnd() == null,
>         snapshotSplit.getSplitStart(),
>         snapshotSplit.getSplitEnd(),
>         snapshotSplit.getSplitKeyType().getFieldCount(),
>         // uses query.fetch.size (default 0), so the whole chunk is buffered in memory
>         connectorConfig.getQueryFetchSize()); {code}
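A minimal sketch of the intended behavior: bound snapshot reads by the configured snapshot fetch size instead of falling through to 0. The class and method names here are hypothetical illustrations, not the Flink CDC or Debezium API; the 10240 default is taken from the Debezium Postgres docs cited above.

```java
// Hypothetical sketch: pick the fetch size for snapshot queries.
public class FetchSizeSketch {
    // Default of snapshot.fetch.size per the Debezium Postgres connector docs.
    static final int SNAPSHOT_FETCH_SIZE_DEFAULT = 10240;

    // A fetch size of 0 tells the JDBC driver to buffer the entire result
    // set, which is what makes large chunks OOM; a positive snapshot fetch
    // size bounds each batch instead.
    static int effectiveFetchSize(int configuredSnapshotFetchSize) {
        return configuredSnapshotFetchSize > 0
                ? configuredSnapshotFetchSize
                : SNAPSHOT_FETCH_SIZE_DEFAULT;
    }

    public static void main(String[] args) {
        System.out.println(effectiveFetchSize(0));     // falls back to 10240
        System.out.println(effectiveFetchSize(2048));  // uses configured value
    }
}
```

The statement would then be prepared with this bounded value rather than the unrelated query fetch size, so each snapshot batch stays within a predictable memory footprint.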



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
