Hi, David and Hang.
I think "There are still 6 open issues in jdbc-3.3.0" refers to the Jira 
list [1] of issues targeted at jdbc-3.3.0.
I’ve checked the list and found that FLINK-34961 
<https://issues.apache.org/jira/browse/FLINK-34961> is already resolved, 
while FLINK-37305 <https://issues.apache.org/jira/browse/FLINK-37305> and 
FLINK-36812 <https://issues.apache.org/jira/browse/FLINK-36812> are not 
blockers and can be moved to the next version.
There is still some documentation work remaining for FLINK-35811 
<https://issues.apache.org/jira/browse/FLINK-35811> and FLINK-35363 
<https://issues.apache.org/jira/browse/FLINK-35363>, but considering that 
the Flink main repository does not always carry the latest connector 
documentation (which I think we can supplement later), this should not be 
a blocker either.
As for FLINK-30371 <https://issues.apache.org/jira/browse/FLINK-30371> and 
FLINK-33761 <https://issues.apache.org/jira/browse/FLINK-33761> that you 
mentioned, they have been open for a long time, so I don’t think they are 
blockers either. That said, neither change involves much content and many 
users have asked for this functionality, so we should try our best to 
complete them. Afterwards, we can prepare for the release of 3.3.0. What 
do you think?

[1] 
https://issues.apache.org/jira/browse/FLINK-33463?jql=project%20%3D%20FLINK%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened)%20AND%20fixVersion%20%3D%20jdbc-3.3.0

> On 21 January 2025 at 00:54, David Radley <david_rad...@uk.ibm.com> wrote:
> 
> Hi Hang,
> I am also curious where we are with this. What are the 6 issues you are 
> concerned about?
> 
> I see –
> https://github.com/apache/flink-connector-jdbc/pull/151 - looks like it is a 
> trivial change that is stopping DB2 from working – waiting for the unit test. 
> It would be great if this could go in.
> https://github.com/apache/flink-connector-jdbc/pull/152 - seems to be making 
> changes ready for the new release.
> 
> It would be nice to have 
> https://github.com/apache/flink-connector-jdbc/pull/149 but the text says it 
> is not ready to be reviewed. So may miss this release.
> 
> I am also willing to work on PRs – if there is a committer willing to merge 
> them.
> 
>    Kind regards, David.
> 
> 
> 
> From: Xingcan Cui <xingc...@gmail.com>
> Date: Monday, 6 January 2025 at 22:53
> To: dev@flink.apache.org <dev@flink.apache.org>
> Cc: jerome.gagno...@gmail.com <jerome.gagno...@gmail.com>
> Subject: [EXTERNAL] Re: Plans for JDBC connector for 1.20?
> Hi Hang,
> 
> Just want to follow up on this. What's the current progress? Are there any
> unassigned tickets we can help with?
> 
> Best,
> Xingcan
> 
> On Thu, Dec 12, 2024 at 9:46 PM Hang Ruan <ruanhang1...@apache.org> wrote:
> 
>> Thanks, David & Peter.
>> 
>> I would love to be the RM for jdbc-3.3.0. And the jdbc-3.3.0 should support
>> Flink 1.19 and 1.20.
>> There are still 6 open issues in jdbc-3.3.0. I think we could prepare this
>> version after these issues are finished.
>> 
>> Best,
>> Hang
>> 
>> On Tue, Dec 10, 2024 at 8:06 PM David Radley <david_rad...@uk.ibm.com>
>> wrote:
>> 
>>> Hi Hang,
>>> 
>>> Thank you very much - just to confirm, are you willing to be the release
>>> manager for the JDBC connector Flink 1.20 compatible release? If so, when
>>> are you thinking of kicking this off?
>>> 
>>> 
>>> 
>>> Kind regards, David.
>>> 
>>> *From: *Peter Huang <huangzhenqiu0...@gmail.com>
>>> *Date: *Tuesday, 10 December 2024 at 07:11
>>> *To: *dev@flink.apache.org <dev@flink.apache.org>
>>> *Subject: *[EXTERNAL] Re: Plans for JDBC connector for 1.20?
>>> 
>>> Hi Yanquan and Ruan,
>>> 
>>> Echoing David's request: to support lineage integration in the JDBC
>>> connector, we need to drop support for Flink 1.18 in the backward
>>> compatibility check, following the recent change in the Kafka connector
>>> (https://issues.apache.org/jira/browse/FLINK-35109).
>>> 
>>> 
>>> Best Regards
>>> Peter Huang
>>> 
>>> 
>>> 
>>> 
>>> 
>>> On Mon, Dec 9, 2024 at 7:27 PM Hang Ruan <ruanhang1...@apache.org>
>> wrote:
>>> 
>>>> Hi, Yanquan & David.
>>>> 
>>>> I would like to help to release the version jdbc-3.3.0.
>>>> Thanks~
>>>> 
>>>> Best,
>>>> Hang
>>>> 
>>>> On Tue, Dec 3, 2024 at 10:42 PM David Radley <david_rad...@uk.ibm.com>
>>>> wrote:
>>>> 
>>>>> Hi Yanquan,
>>>>> Thanks for your support. Yes and support for the new dialects.
>>>>> 
>>>>> It would be great to get open PRs merged to support new dialect if
>>>>> possible, including:
>>>>> https://github.com/apache/flink-connector-jdbc/pull/118 - this has
>>> been
>>>>> around for a while with the submitter asking for a merge – the CI
>> tests
>>>> are
>>>>> currently failing – but if you can reassure him you could merge
>> after a
>>>>> rebase that would be fabulous.
>>>>> 
>>>>> and
>>>>> https://github.com/apache/flink-connector-jdbc/pull/149
>>>>> open lineage support would be great so we have more complete support
>> at
>>>>> Flink 1.20 and 2.
>>>>> 
>>>>> The next step is to agree on a committer to be the release manager; are
>>>>> you interested in doing this?
>>>>>        Kind regards, David.
>>>>> 
>>>>> 
>>>>> From: Yanquan Lv <decq12y...@gmail.com>
>>>>> Date: Monday, 2 December 2024 at 01:57
>>>>> To: dev@flink.apache.org <dev@flink.apache.org>
>>>>> Subject: [EXTERNAL] Re: Plans for JDBC connector for 1.20?
>>>>> Hi, David.
>>>>> 
>>>>> +1 to cut main into a new 1.20 compatible release as
>>> flink-connector-jdbc
>>>>> for 1.20 is a demand frequently mentioned by users and is also
>>>> prioritized
>>>>> for adaptation to Flink 2.0.
>>>>> 
>>>>> I’ve checked the issues[1] that were resolved in 3.3. Most of them were
>>>>> introduced by FLIP-377 (Support fine-grained configuration to control
>>>>> filter push down for Table/SQL Sources)[2] and FLIP-449 (Reorganization
>>>>> of flink-connector-jdbc)[3], and there are only two bugfixes[4] that
>>>>> relate to jdbc-3.2.0 production code, which I don’t think is a blocker
>>>>> that would force a release on its own.
>>>>> 
>>>>> [1]
>>>>> 
>>>> 
>>> 
>> https://issues.apache.org/jira/browse/FLINK-33460?jql=project%20%3D%20FLINK%20AND%20status%20%3D%20Resolved%20AND%20fixVersion%20%3D%20jdbc-3.3.0
>>>>> [2]
>>>>> 
>>>> 
>>> 
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=276105768
>>>>> [3]
>>>>> 
>>>> 
>>> 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-449%3A+Reorganization+of+flink-connector-jdbc
>>>>> [4]
>>>>> 
>>>> 
>>> 
>> https://issues.apache.org/jira/browse/FLINK-35542?jql=project%20%3D%20FLINK%20AND%20issuetype%20%3D%20Bug%20AND%20status%20%3D%20Resolved%20AND%20fixVersion%20%3D%20jdbc-3.3.0
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>>> On 22 November 2024 at 01:15, David Radley <david_rad...@uk.ibm.com> wrote:
>>>>>> 
>>>>>> Hi,
>>>>>> Is there a plan to release the JDBC for Flink 1.20? Apologies if
>> this
>>>> is
>>>>> already in hand, I could not find anything:
>>>>>> 
>>>>>> *   I notice that the last JDBC connector release, corresponding to
>>>>> Flink 1.19, contained minimal content.
>>>>>> *   The main branch – now has a pom with 1.20 and contains a
>> massive
>>>>> amount of content compared to 1.19, which has minimal cherry picked
>>> fixes
>>>>> in.
>>>>>> *   So the worry is that we have a large amount of unshipped code in
>>>>>> main, amplified by the fact that it introduces support for many new
>>>>>> databases.
>>>>>> *   The latest code appears to have open lineage support with
>>>>> https://github.com/apache/flink-connector-jdbc/pull/137 so I assume
>>> the
>>>>> main code is now not compatible with Flink 1.19
>>>>>> 
>>>>>> 
>>>>>> Thinking about how to move this forward. Some options:
>>>>>> 
>>>>>> 1.  I am wondering is there an appetite to cut main into a new
>> 1.20
>>>>> compatible release pretty much as-is
>>>>>> 2.  We could do a 1.19 and 1.20 compatible release like the Kafka
>>>>> connector 3.3 and a 1.20 open lineage release like the Kafka
>> connector
>>>> 3.4,
>>>>> with minimal critical backported fixes.
>>>>>> 
>>>>>> I am keen to do option 1), as the main code will otherwise continue to
>>>>>> diverge from what we release. I wonder what manual per-database testing
>>>>>> would be required to reassure us that all the new database support
>>>>>> actually works. Or is the existing automated testing enough?
>>>>>> 
>>>>>> If we get some consensus on approach, I can help drive this, as
>> this
>>> is
>>>>> going to be an inhibitor for many of us in the community to migrate
>> to
>>>>> Flink 1.20 and subsequently to Flink v2.
>>>>>> 
>>>>>> Kind regards,      David.
>>>>>> 
>>>>>> Unless otherwise stated above:
>>>>>> 
>>>>>> IBM United Kingdom Limited
>>>>>> Registered in England and Wales with number 741598
>>>>>> Registered office: Building C, IBM Hursley Office, Hursley Park
>> Road,
>>>>> Winchester, Hampshire SO21 2JN
>>>>> 
>>>>> 
>>>> 
>>> 
>> 
> 
