[jira] [Created] (FLINK-36560) Fix the issue of timestamp_ltz increasing by 8 hours in Paimon
LvYanquan created FLINK-36560:
----------------------------------
             Summary: Fix the issue of timestamp_ltz increasing by 8 hours in Paimon
                 Key: FLINK-36560
                 URL: https://issues.apache.org/jira/browse/FLINK-36560
             Project: Flink
          Issue Type: Improvement
          Components: Flink CDC
    Affects Versions: cdc-3.2.1
            Reporter: LvYanquan
             Fix For: cdc-3.2.1
         Attachments: image-2024-10-17-17-16-01-773.png

When synchronizing a TIMESTAMP field from MySQL, it was found that the time displayed in the Paimon table was incorrect.

How to reproduce:
{code:java}
CREATE TABLE `orders` (
  order_id bigint not null primary key,
  user_id varchar(50) not null,
  shop_id bigint not null,
  product_id bigint not null,
  buy_fee bigint not null,
  create_time timestamp not null,
  update_time timestamp not null default now(),
  state int not null
);

INSERT INTO orders VALUES
(11, 'user_001', 12345, 1, 5000, '2023-02-15 16:40:56', '2023-02-15 18:42:56', 1),
(12, 'user_002', 12346, 2, 4000, '2023-02-15 15:40:56', '2023-02-15 18:42:56', 1),
(13, 'user_003', 12347, 3, 3000, '2023-02-15 14:40:56', '2023-02-15 18:42:56', 1),
(14, 'user_001', 12347, 4, 2000, '2023-02-15 13:40:56', '2023-02-15 18:42:56', 1),
(15, 'user_002', 12348, 5, 1000, '2023-02-15 12:40:56', '2023-02-15 18:42:56', 1),
(16, 'user_001', 12348, 1, 1000, '2023-02-15 11:40:56', '2023-02-15 18:42:56', 1),
(17, 'user_003', 12347, 4, 2000, '2023-02-15 10:40:56', '2023-02-15 18:42:56', 1);
{code}

My YAML job is like the following:

source:
  type: mysql
  hostname: host
  port: 3306
  username: flink
  password: xx
  tables: yaml_test.\.*
  server-id: 22600-22620

sink:
  type: paimon
  catalog.properties.metastore: filesystem
  catalog.properties.warehouse: xx
  catalog.properties.fs.oss.endpoint: xx
  catalog.properties.fs.oss.accessKeyId: xx
  catalog.properties.fs.oss.accessKeySecret: xx

pipeline:
  name: MySQL Database to Paimon Database

Currently, the result is like the following: the `create_time` and `update_time` fields are not correct.

||order_id||user_id||shop_id||product_id||buy_fee||create_time||update_time||state||
|100,001|user_001|12,345|1|5,000|2023-02-16 00:40:56|2023-02-16 02:42:56|1|
|100,002|user_002|12,346|2|4,000|2023-02-15 23:40:56|2023-02-16 02:42:56|1|
|100,003|user_003|12,347|3|3,000|2023-02-15 22:40:56|2023-02-16 02:42:56|1|
|100,004|user_001|12,347|4|2,000|2023-02-15 21:40:56|2023-02-16 02:42:56|1|
|100,005|user_002|12,348|5|1,000|2023-02-15 20:40:56|2023-02-16 02:42:56|1|
|100,006|user_001|12,348|1|1,000|2023-02-15 19:40:56|2023-02-16 02:42:56|1|
|100,007|user_003|12,347|4|2,000|2023-02-15 18:40:56|2023-02-16 02:42:56|1|

-- This message was sent by Atlassian Jira (v8.20.10#820010)
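The shifted values above are consistent with the epoch instant being rendered in a UTC+8 zone (e.g. the JVM default) instead of the intended session/UTC zone. A minimal, self-contained Java sketch of that effect, purely for illustration and not the actual Flink CDC / Paimon write path:
{code:java}
import java.time.Instant;
import java.time.ZoneId;

// Illustration only: shows how a +8 hour shift appears when an epoch instant is
// rendered in Asia/Shanghai instead of UTC. Not the actual connector code.
public class TimestampShiftDemo {
    public static void main(String[] args) {
        // Epoch instant corresponding to the source value 2023-02-15 16:40:56 (UTC).
        Instant instant = Instant.parse("2023-02-15T16:40:56Z");

        // Rendered in UTC: prints 2023-02-15T16:40:56 (the expected value).
        System.out.println(instant.atZone(ZoneId.of("UTC")).toLocalDateTime());

        // Rendered in UTC+8: prints 2023-02-16T00:40:56, matching the shifted
        // create_time observed in the Paimon table above.
        System.out.println(instant.atZone(ZoneId.of("Asia/Shanghai")).toLocalDateTime());
    }
}
{code}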
Re: [VOTE] Release 2.0-preview1, release candidate #1
+1 (binding) - Build and compile the source code locally: *OK* - Verified signatures and hashes: *OK* - Reviewed the website release PR: *OK* - Started local cluster and verified web ui & logs: *OK* Best, Jark On Thu, 17 Oct 2024 at 13:50, Jingsong Li wrote: > +1 (binding) > > - Downloaded artifacts from dist > - Verified SHA512 checksum > - Verified GPG signature > - Build the source with java-11 > > Best, > Jingsong > > On Thu, Oct 17, 2024 at 11:53 AM Yunfeng Zhou > wrote: > > > > +1 (non-binding) > > > > - Verified checksums > > - Built from source > > - Reviewed release notes > > - Ran WordCount example and it works as expected > > > > Best, > > Yunfeng > > > > > > > 2024年10月13日 00:46,Xintong Song 写道: > > > > > > Hi everyone, > > > > > > Please review and vote on the release candidate #1 for the version > > > 2.0-preview1, as follows: > > > > > > [ ] +1, Approve the release > > > > > > [ ] -1, Do not approve the release (please provide specific comments) > > > > > > The complete staging area is available for your review, which includes: > > > * JIRA release notes [1] > > > * the official Apache source release and binary convenience releases > to be > > > deployed to dist.apache.org [2] (PyFlink artifacts are excluded > because > > > PyPI does not support preview versions), which are signed with the key > with > > > fingerprint 8D56AE6E7082699A4870750EA4E8C4C05EE6861F [3], > > > * all artifacts to be deployed to the Maven Central Repository [4], > > > * source code tag "release-2.0-preview1-rc1" [5], > > > * website pull request listing the new release and adding announcement > blog > > > post [6]. > > > > > > *Please note that Flink 2.0-preview-1 is not a stable version and > should > > > not be used in production environments. Therefore, functionality tests > > > should not be the focus of verifications for this release candidate.* > > > > > > The vote will be open for at least 72 hours. It is adopted by majority > > > approval, with at least 3 PMC affirmative votes. > > > > > > Best, > > > > > > Xintong > > > > > > > > > [1] > > > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12355070 > > > > > > [2] > https://dist.apache.org/repos/dist/dev/flink/flink-2.0-preview1-rc1/ > > > > > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS > > > > > > [4] > https://repository.apache.org/content/repositories/orgapacheflink-1761/ > > > > > > [5] > https://github.com/apache/flink/releases/tag/release-2.0-preview1-rc1 > > > > > > [6] https://github.com/apache/flink-web/pull/754 > > >
[jira] [Created] (FLINK-36559) [docs]Add elasticsearch sink to docs
JunboWang created FLINK-36559: - Summary: [docs]Add elasticsearch sink to docs Key: FLINK-36559 URL: https://issues.apache.org/jira/browse/FLINK-36559 Project: Flink Issue Type: Improvement Components: Documentation Reporter: JunboWang -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [VOTE] FLIP-349: Move RocksDB statebackend classes to o.a.f.state.rocksdb package
Also +1 from my side, I'm closing this vote, thanks all! Best, Zakelly On Tue, Oct 8, 2024 at 12:00 PM Rui Fan <1996fan...@gmail.com> wrote: > +1 (binding) > > Best, > Rui > > On Tue, Oct 8, 2024 at 12:28 AM Gabor Somogyi > wrote: > > > +1 (binding) > > > > G > > > > > > On Mon, Oct 7, 2024 at 6:20 PM Zakelly Lan > wrote: > > > > > Hi everyone, > > > > > > I'd like to start a vote on FLIP-349: Move RocksDB statebackend classes > > to > > > o.a.f.state.rocksdb package [1]. The discussion can be found here [2]. > > > > > > The vote will be open for at least 72 hours unless there are any > > objections > > > or insufficient votes. > > > > > > > > > [1] https://cwiki.apache.org/confluence/x/-Y6zDw > > > [2] https://lists.apache.org/thread/scc5cp5zythnh8r1nqvs8q4040m7jnlb > > > > > > Best, > > > Zakelly > > > > > >
Re: [VOTE] Release 2.0-preview1, release candidate #1
+1 (non-binding) - Verified checksum - Reviewed the website release PR - Built from source - Test streaming sql job with Nexmark Q20 Best, Zakelly On Thu, Oct 17, 2024 at 3:40 PM Jark Wu wrote: > +1 (binding) > > - Build and compile the source code locally: *OK* > - Verified signatures and hashes: *OK* > - Reviewed the website release PR: *OK* > - Started local cluster and verified web ui & logs: *OK* > > Bes, > Jark > > On Thu, 17 Oct 2024 at 13:50, Jingsong Li wrote: > > > +1 (binding) > > > > - Downloaded artifacts from dist > > - Verified SHA512 checksum > > - Verified GPG signature > > - Build the source with java-11 > > > > Best, > > Jingsong > > > > On Thu, Oct 17, 2024 at 11:53 AM Yunfeng Zhou > > wrote: > > > > > > +1 (non-binding) > > > > > > - Verified checksums > > > - Built from source > > > - Reviewed release notes > > > - Ran WordCount example and it works as expected > > > > > > Best, > > > Yunfeng > > > > > > > > > > 2024年10月13日 00:46,Xintong Song 写道: > > > > > > > > Hi everyone, > > > > > > > > Please review and vote on the release candidate #1 for the version > > > > 2.0-preview1, as follows: > > > > > > > > [ ] +1, Approve the release > > > > > > > > [ ] -1, Do not approve the release (please provide specific comments) > > > > > > > > The complete staging area is available for your review, which > includes: > > > > * JIRA release notes [1] > > > > * the official Apache source release and binary convenience releases > > to be > > > > deployed to dist.apache.org [2] (PyFlink artifacts are excluded > > because > > > > PyPI does not support preview versions), which are signed with the > key > > with > > > > fingerprint 8D56AE6E7082699A4870750EA4E8C4C05EE6861F [3], > > > > * all artifacts to be deployed to the Maven Central Repository [4], > > > > * source code tag "release-2.0-preview1-rc1" [5], > > > > * website pull request listing the new release and adding > announcement > > blog > > > > post [6]. > > > > > > > > *Please note that Flink 2.0-preview-1 is not a stable version and > > should > > > > not be used in production environments. Therefore, functionality > tests > > > > should not be the focus of verifications for this release candidate.* > > > > > > > > The vote will be open for at least 72 hours. It is adopted by > majority > > > > approval, with at least 3 PMC affirmative votes. > > > > > > > > Best, > > > > > > > > Xintong > > > > > > > > > > > > [1] > > > > > > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12355070 > > > > > > > > [2] > > https://dist.apache.org/repos/dist/dev/flink/flink-2.0-preview1-rc1/ > > > > > > > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS > > > > > > > > [4] > > https://repository.apache.org/content/repositories/orgapacheflink-1761/ > > > > > > > > [5] > > https://github.com/apache/flink/releases/tag/release-2.0-preview1-rc1 > > > > > > > > [6] https://github.com/apache/flink-web/pull/754 > > > > > >
[RESULT][VOTE] FLIP-349: Move RocksDB statebackend classes to o.a.f.state.rocksdb package
Hi devs, I'm happy to announce that FLIP-349: Move RocksDB statebackend classes to o.a.f.state.rocksdb package [1] has been accepted with 4 approving votes, all of which are binding [2]: - Gabor Somogyi - Yuan Mei - Rui Fan - Zakelly Lan There were no disapproving votes. Thanks to all participants for the discussion and voting. [1] https://cwiki.apache.org/confluence/x/-Y6zDw [2] https://lists.apache.org/thread/35hmp14n5ngbg2k0hvcfgvf1ky8o1dy4 Best, Zakelly
Re: [VOTE] Apache Flink Kubernetes Operator Release 1.10.0, release candidate #1
+1 (non-binding) I've verified: - The src tarball builds and passes mvn verify (with Java 17), - the sha512 signatures for both srcs and helm - The GPG - Checked all poms are version 1.10.0 - Checked all poms do not contain -SNAPSHOT dependencies - Verified that the chart and appVersions from the helm tarball are 1.10.0 - Verified the chart points at the appropriate image - Verified that the RC repo works as a Helm repo On Wed, 16 Oct 2024 at 05:18, Őrhidi Mátyás wrote: > Hi everyone, > > Please review and vote on the release candidate #1 for the version 1.10.0 > of Apache Flink Kubernetes Operator, > as follows: > [ ] +1, Approve the release > [ ] -1, Do not approve the release (please provide specific comments) > > **Release Overview** > > As an overview, the release consists of the following: > a) Kubernetes Operator canonical source distribution (including the > Dockerfile), to be deployed to the release repository at dist.apache.org > b) Kubernetes Operator Helm Chart to be deployed to the release repository > at dist.apache.org > c) Maven artifacts to be deployed to the Maven Central Repository > d) Docker image to be pushed to dockerhub > > **Staging Areas to Review** > > The staging areas containing the above mentioned artifacts are as follows, > for your review: > * All artifacts for a,b) can be found in the corresponding dev repository > at dist.apache.org [1] > * All artifacts for c) can be found at the Apache Nexus Repository [2] > * The docker image for d) is staged on github [3] > > All artifacts are signed with the key 48E78F054AA33CB5 [4] > > Other links for your review: > * JIRA release notes [5] > * source code tag "release-1.1.0-rc1" [6] > * PR to update the website Downloads page to include Kubernetes Operator > links [7] > > **Vote Duration** > > The voting time will run for at least 72 hours. > It is adopted by majority approval, with at least 3 PMC affirmative votes. > > **Note on Verification** > > You can follow the basic verification guide here[8]. > Note that you don't need to verify everything yourself, but please make > note of what you have tested together with your +- vote. > > Thanks, > Matyas > > [1] > > https://dist.apache.org/repos/dist/dev/flink/flink-kubernetes-operator-1.10.0-rc1/ > [2] https://repository.apache.org/content/repositories/orgapacheflink-1762 > [3] ghcr.io/apache/flink-kubernetes-operator:c703255 > [4]https://dist.apache.org/repos/dist/release/flink/KEYS > [5] > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354833 > [6] > > https://github.com/apache/flink-kubernetes-operator/releases/tag/release-1.10.0-rc1 > [7] https://github.com/apache/flink-web/pull/758 > [8] > > https://cwiki.apache.org/confluence/display/FLINK/Verifying+a+Flink+Kubernetes+Operator+Release > > > > > > > > ghcr.io/apache/flink-kubernetes-operator:c703255 >
[jira] [Created] (FLINK-36563) Running CI in random timezone to expose more time related bugs
LvYanquan created FLINK-36563: - Summary: Running CI in random timezone to expose more time related bugs Key: FLINK-36563 URL: https://issues.apache.org/jira/browse/FLINK-36563 Project: Flink Issue Type: Improvement Components: Flink CDC Affects Versions: cdc-3.3.0 Reporter: LvYanquan Fix For: cdc-3.3.0 Refer to [this comment|https://github.com/apache/flink-cdc/pull/3648#pullrequestreview-2374769012]: when running CI, setting a random time zone in the session can help expose time-zone-related issues in advance. -- This message was sent by Atlassian Jira (v8.20.10#820010)
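For illustration, a hypothetical Java-level sketch of what randomizing the session time zone could look like in a test setup; the actual CI change discussed in the linked PR review may instead use an environment variable or build property:
{code:java}
import java.util.Random;
import java.util.TimeZone;

// Hypothetical helper (not existing Flink CDC CI code): picks a random time zone
// and makes it the JVM default so time-zone assumptions in tests surface early.
public class RandomTimeZoneSetup {
    public static void applyRandomTimeZone() {
        String[] ids = TimeZone.getAvailableIDs();
        String id = ids[new Random().nextInt(ids.length)];
        TimeZone.setDefault(TimeZone.getTimeZone(id)); // affects subsequent date/time handling
        System.out.println("Running tests with time zone: " + id);
    }
}
{code}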
[jira] [Created] (FLINK-36565) Pipeline YAML should allow merging decimal with different precisions
yux created FLINK-36565: --- Summary: Pipeline YAML should allow merging decimal with different precisions Key: FLINK-36565 URL: https://issues.apache.org/jira/browse/FLINK-36565 Project: Flink Issue Type: Improvement Components: Flink CDC Reporter: yux Currently, it's not possible to merge two DECIMAL-typed fields with different precisions or scales. Since DECIMAL(p1, s1) and DECIMAL(p2, s2) can both be converted to DECIMAL(MAX(p1 - s1, p2 - s2) + MAX(s1, s2), MAX(s1, s2)) without any loss, this conversion path seems reasonable and worth adding. -- This message was sent by Atlassian Jira (v8.20.10#820010)
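A small sketch of the widening rule described above, assuming the MAX-based formula from the issue (the class and method are illustrative, not existing Flink CDC schema-merging utilities):
{code:java}
// Sketch of the proposed DECIMAL widening rule; illustrative only.
public final class DecimalMergeSketch {
    /** Returns {precision, scale} of the narrowest DECIMAL that holds both inputs. */
    public static int[] merge(int p1, int s1, int p2, int s2) {
        int scale = Math.max(s1, s2);                   // keep the larger scale
        int integerDigits = Math.max(p1 - s1, p2 - s2); // keep the larger integer part
        return new int[] {integerDigits + scale, scale};
    }

    public static void main(String[] args) {
        // DECIMAL(10, 2) merged with DECIMAL(8, 4) -> DECIMAL(12, 4)
        int[] merged = merge(10, 2, 8, 4);
        System.out.println("DECIMAL(" + merged[0] + ", " + merged[1] + ")");
    }
}
{code}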
Re: [VOTE] Release 2.0-preview1, release candidate #1
+1 (binding) - Verified checksum - Build PyFlink from source code (M2 Pro, JDK11, Python 3.10). - Run a PyFlink Table API and UDF example. Best, Xingbo Zakelly Lan 于2024年10月17日周四 16:14写道: > +1 (non-binding) > > - Verified checksum > - Reviewed the website release PR > - Built from source > - Test streaming sql job with Nexmark Q20 > > Best, > Zakelly > > On Thu, Oct 17, 2024 at 3:40 PM Jark Wu wrote: > > > +1 (binding) > > > > - Build and compile the source code locally: *OK* > > - Verified signatures and hashes: *OK* > > - Reviewed the website release PR: *OK* > > - Started local cluster and verified web ui & logs: *OK* > > > > Bes, > > Jark > > > > On Thu, 17 Oct 2024 at 13:50, Jingsong Li > wrote: > > > > > +1 (binding) > > > > > > - Downloaded artifacts from dist > > > - Verified SHA512 checksum > > > - Verified GPG signature > > > - Build the source with java-11 > > > > > > Best, > > > Jingsong > > > > > > On Thu, Oct 17, 2024 at 11:53 AM Yunfeng Zhou > > > wrote: > > > > > > > > +1 (non-binding) > > > > > > > > - Verified checksums > > > > - Built from source > > > > - Reviewed release notes > > > > - Ran WordCount example and it works as expected > > > > > > > > Best, > > > > Yunfeng > > > > > > > > > > > > > 2024年10月13日 00:46,Xintong Song 写道: > > > > > > > > > > Hi everyone, > > > > > > > > > > Please review and vote on the release candidate #1 for the version > > > > > 2.0-preview1, as follows: > > > > > > > > > > [ ] +1, Approve the release > > > > > > > > > > [ ] -1, Do not approve the release (please provide specific > comments) > > > > > > > > > > The complete staging area is available for your review, which > > includes: > > > > > * JIRA release notes [1] > > > > > * the official Apache source release and binary convenience > releases > > > to be > > > > > deployed to dist.apache.org [2] (PyFlink artifacts are excluded > > > because > > > > > PyPI does not support preview versions), which are signed with the > > key > > > with > > > > > fingerprint 8D56AE6E7082699A4870750EA4E8C4C05EE6861F [3], > > > > > * all artifacts to be deployed to the Maven Central Repository [4], > > > > > * source code tag "release-2.0-preview1-rc1" [5], > > > > > * website pull request listing the new release and adding > > announcement > > > blog > > > > > post [6]. > > > > > > > > > > *Please note that Flink 2.0-preview-1 is not a stable version and > > > should > > > > > not be used in production environments. Therefore, functionality > > tests > > > > > should not be the focus of verifications for this release > candidate.* > > > > > > > > > > The vote will be open for at least 72 hours. It is adopted by > > majority > > > > > approval, with at least 3 PMC affirmative votes. > > > > > > > > > > Best, > > > > > > > > > > Xintong > > > > > > > > > > > > > > > [1] > > > > > > > > > > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12355070 > > > > > > > > > > [2] > > > https://dist.apache.org/repos/dist/dev/flink/flink-2.0-preview1-rc1/ > > > > > > > > > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS > > > > > > > > > > [4] > > > > https://repository.apache.org/content/repositories/orgapacheflink-1761/ > > > > > > > > > > [5] > > > https://github.com/apache/flink/releases/tag/release-2.0-preview1-rc1 > > > > > > > > > > [6] https://github.com/apache/flink-web/pull/754 > > > > > > > > > >
[jira] [Created] (FLINK-36566) Code optimization: always identify DataChangeEvent before SchemaChangeEvent in Operator
LvYanquan created FLINK-36566: - Summary: Code optimization: always identify DataChangeEvent before SchemaChangeEvent in Operator Key: FLINK-36566 URL: https://issues.apache.org/jira/browse/FLINK-36566 Project: Flink Issue Type: Improvement Components: Flink CDC Affects Versions: cdc-3.3.0 Reporter: LvYanquan Fix For: cdc-3.3.0 In a data flow system, the number of DataChangeEvents is always much larger than that of SchemaChangeEvents. Always identifying DataChangeEvents first (for example, with an `instanceof` check) reduces unnecessary branching and improves performance. -- This message was sent by Atlassian Jira (v8.20.10#820010)
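For illustration, a self-contained sketch of the proposed branch ordering; the nested interfaces and handler methods are stand-ins rather than the actual Flink CDC operator code:
{code:java}
// Illustrative sketch: check the frequent event type first so the hot path
// costs a single instanceof test.
public class EventDispatchSketch {
    interface Event {}
    interface DataChangeEvent extends Event {}
    interface SchemaChangeEvent extends Event {}

    void processElement(Event event) {
        if (event instanceof DataChangeEvent) {          // common case, checked first
            handleDataChange((DataChangeEvent) event);
        } else if (event instanceof SchemaChangeEvent) { // rare case
            handleSchemaChange((SchemaChangeEvent) event);
        } else {
            throw new IllegalStateException("Unexpected event type: " + event);
        }
    }

    void handleDataChange(DataChangeEvent e) { /* hot path */ }
    void handleSchemaChange(SchemaChangeEvent e) { /* rare path */ }
}
{code}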
[jira] [Created] (FLINK-36562) Running CI in random timezone to expose more time related bugs
LvYanquan created FLINK-36562: - Summary: Running CI in random timezone to expose more time related bugs Key: FLINK-36562 URL: https://issues.apache.org/jira/browse/FLINK-36562 Project: Flink Issue Type: Improvement Components: Flink CDC Affects Versions: cdc-3.3.0 Reporter: LvYanquan Fix For: cdc-3.3.0 Refer to [this comment|https://github.com/apache/flink-cdc/pull/3648#pullrequestreview-2374769012]: when running CI, setting a random time zone in the session can help expose time-zone-related issues in advance. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-36564) Running CI in random timezone to expose more time related bugs
LvYanquan created FLINK-36564: - Summary: Running CI in random timezone to expose more time related bugs Key: FLINK-36564 URL: https://issues.apache.org/jira/browse/FLINK-36564 Project: Flink Issue Type: Improvement Components: Flink CDC Affects Versions: cdc-3.3.0 Reporter: LvYanquan Fix For: cdc-3.3.0 Refer to [this comment|https://github.com/apache/flink-cdc/pull/3648#pullrequestreview-2374769012]: when running CI, setting a random time zone in the session can help expose time-zone-related issues in advance. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-36567) Planner module didn't use the setting from flink-conf.yaml
liting liu created FLINK-36567:
----------------------------------
             Summary: Planner module didn't use the setting from flink-conf.yaml
                 Key: FLINK-36567
                 URL: https://issues.apache.org/jira/browse/FLINK-36567
             Project: Flink
          Issue Type: Bug
          Components: Table SQL / Planner
    Affects Versions: 1.18.1
            Reporter: liting liu

I found that the flink-table-planner_*.jar was generated in the /tmp dir, even though the conf `io.tmp.dirs` has been set to `/opt`. See the jobmanager's log:

2024-10-17 08:52:30,330 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: io.tmp.dirs, /opt

The relevant code is:
```
org.apache.flink.table.planner.loader.PlannerModule#PlannerModule

private PlannerModule() {
    try {
        final ClassLoader flinkClassLoader = PlannerModule.class.getClassLoader();
        final Path tmpDirectory =
                Paths.get(ConfigurationUtils.parseTempDirectories(new Configuration())[0]);
```
The PlannerModule creates a new Configuration instead of using the values from the config file.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
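A sketch of one possible fix, assuming the planner loader can read the cluster configuration via GlobalConfiguration.loadConfiguration() at this point (untested; shown only to contrast the two code paths):
```
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.ConfigurationUtils;
import org.apache.flink.configuration.GlobalConfiguration;

public class PlannerTmpDirSketch {
    public static void main(String[] args) {
        // Current behavior: an empty Configuration falls back to java.io.tmpdir (/tmp).
        String current = ConfigurationUtils.parseTempDirectories(new Configuration())[0];

        // Possible fix: load flink-conf.yaml (from FLINK_CONF_DIR) so io.tmp.dirs is honored.
        Configuration loaded = GlobalConfiguration.loadConfiguration();
        String proposed = ConfigurationUtils.parseTempDirectories(loaded)[0];

        System.out.println(current + " vs " + proposed);
    }
}
```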
[ANNOUNCE] Apache flink-connector-kafka 3.3.0 released
The Apache Flink community is very happy to announce the release of Apache flink-connector-kafka 3.3.0. Apache Flink® is an open-source stream processing framework for distributed, high-performing, always-available, and accurate data streaming applications. The release is available for download at: https://flink.apache.org/downloads.html The full release notes are available in Jira: https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354606 We would like to thank all contributors of the Apache Flink community who made this release possible! Regards, Release Manager
[jira] [Created] (FLINK-36561) ResultSet.wasNull() does not reflect null values in Flink JDBC Driver
Ilya Soin created FLINK-36561:
----------------------------------
             Summary: ResultSet.wasNull() does not reflect null values in Flink JDBC Driver
                 Key: FLINK-36561
                 URL: https://issues.apache.org/jira/browse/FLINK-36561
             Project: Flink
          Issue Type: Bug
          Components: Table SQL / JDBC
    Affects Versions: 1.19.1, 1.20.0, 1.18.1
            Reporter: Ilya Soin

As per the JDBC [standard|https://docs.oracle.com/en/java/javase/17/docs/api/java.sql/java/sql/ResultSet.html#wasNull()], {{ResultSet.wasNull()}}
{quote}Reports whether the last column read had a value of SQL NULL. Note that you must first call one of the getter methods on a column to try to read its value and then call the method wasNull to see if the value read was SQL NULL.
{quote}
However, the Flink JDBC driver currently does not update the {{wasNull}} flag within the {{FlinkResultSet.get*()}} methods. Instead, it only sets this flag during [iteration over rows|https://github.com/apache/flink/blob/release-2.0-preview1-rc1/flink-table/flink-sql-jdbc-driver/src/main/java/org/apache/flink/table/jdbc/FlinkResultSet.java#L106] fetched from the gateway endpoint. This behavior leads to {{wasNull}} returning true only if the entire row is null, not when individual column values are null. Consequently, reading a null value using {{FlinkResultSet.get*()}} incorrectly results in {{wasNull()}} returning false, which is not compliant with the JDBC specification.

h4. Proposed solution
Check whether the underlying value accessed by the {{FlinkResultSet.get*()}} method is null, and update {{wasNull}} accordingly.

h4. For discussion
Can we skip null rows in FlinkResultSet.next()?

h4. Steps to reproduce
Add
{code:java}
assertTrue(resultSet.wasNull());
{code}
after any call to resultSet.get*() in [testStringResultSetNullData()|https://github.com/apache/flink/blob/release-2.0-preview1-rc1/flink-table/flink-sql-jdbc-driver/src/test/java/org/apache/flink/table/jdbc/FlinkResultSetTest.java#L115]. Run the test and observe the failing check.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
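A minimal sketch of the proposed per-column semantics; the class and field names are simplified stand-ins, not the real FlinkResultSet internals:
{code:java}
import java.sql.SQLException;

// Sketch: each getter records whether the value it just read was SQL NULL, so a
// subsequent wasNull() call reflects that column rather than the whole row.
class NullTrackingResultSetSketch {
    private final Object[] currentRow;
    private boolean wasNull;

    NullTrackingResultSetSketch(Object[] currentRow) {
        this.currentRow = currentRow;
    }

    String getString(int columnIndex) throws SQLException {
        Object value = currentRow[columnIndex - 1]; // JDBC column indexes are 1-based
        wasNull = (value == null);                  // updated per column, not per row
        return wasNull ? null : value.toString();
    }

    boolean wasNull() {
        return wasNull;
    }
}
{code}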