Re: [VOTE] Release 1.19.2, release candidate #1

2025-02-11 Thread Leonard Xu
+1 (binding)

- verified signatures
- verified hashsums 
- built from source code with JDK 1.8 succeeded
- checked Github release tag
- checked release notes
- reviewed the web PR 

Best,
Leonard

> On Feb 11, 2025, at 19:09, Márton Balassi wrote:
> 
> +1 (binding)
> 
> 1. Verified the archives, checksums, and signatures
> 2. Extracted and inspected the source code for binaries
> 3. Built the source code
> 4. Verified license files / headers
> 
> Best,
> Marton
> 
> On Mon, Feb 3, 2025 at 12:37 PM Maximilian Michels  wrote:
> 
>> +1 (binding)
>> 
>> 1. Verified the archives, checksums, and signatures
>> 2. Extracted and inspected the source code for binaries
>> 3. Built the source code
>> 4. Verified license files / headers
>> 
>> Sergey wrote:
>>> one minor finding:
>>> in staging repos [1] it has wrong description (1.9.2 instead of 1.19.2)
>> 
>> Good catch Sergey!
>> 
>> Alex wrote:
>>> I realized I made a typo after publishing the staging artifacts.
>>> Renaming is not allowed, unfortunately.
>> 
>> No worries, the description won't be relevant beyond the staging
>> phase. We use the direct link to verify, which you supplied. Besides,
>> Apache releases formally only require proper source code.
>> 
>> Thanks for doing the release!
>> 
>> -Max
>> 
>> On Mon, Feb 3, 2025 at 11:57 AM Alexander Fedulov
>>  wrote:
>>> 
 one minor finding:
 in staging repos [1] it has wrong description (1.9.2 instead of 1.19.2)
>>> 
>>> I realized I made a typo after publishing the staging artifacts.
>>> Renaming is not allowed, unfortunately.
>>> As far as I can tell, it is only relevant for the staging area itself,
>>> so, unless I am missing something, I believe it is not needed to start
>>> over just because of it.
>>> 
>>> Best,
>>> Ale
>>> 
>>> On Mon, 3 Feb 2025 at 00:37, Sergey Nuyanzin 
>> wrote:
 
 +1 (non-binding)
 
 one minor finding:
 in staging repos [1] it has wrong description (1.9.2 instead of 1.19.2)
 
 
 - verified signatures && hashsums
 - checked git tag
 - checked no binaries in source files
 - built the source with jdk1.8 and maven 3.8.6
 
 [1] https://repository.apache.org/#stagingRepositories
 
 On Thu, Jan 30, 2025 at 5:45 PM Ferenc Csaky
>> 
 wrote:
 
> +1 (non-binding)
> 
> - verified signatures
> - verified hashsums
> - checked GH release tag
> - checked no binaries in source archives
> - built the source with jdk8 and maven 3.8.6
> - reviewed web PR
> - deployed WordCount job to local cluster
> 
> Thanks,
> Ferenc
> 
> 
> 
> 
> On Thursday, January 30th, 2025 at 03:22, Yanquan Lv <
>> decq12y...@gmail.com>
> wrote:
> 
>> 
>> 
>> +1 (non-binding)
>> 
>> I checked:
>> - Review JIRA release notes
>> - Verify hashes and verify signatures
>> - Build success from source with JDK11 & maven3.8.6
>> - Source code artifacts matching the current release
>> - Read the announcement blog and LGTM
>> 
>> ---------- Forwarded message ---------
>> From: Alexander Fedulov alexander.fedu...@gmail.com
>> 
>> Date: Wed, Jan 29, 2025, 1:30 AM
>> Subject: [VOTE] Release 1.19.2, release candidate #1
>> To: dev dev@flink.apache.org
>> 
>> 
>> 
>> Hi everyone,
>> 
>> Please review and vote on the release candidate #1 for the version
>> 1.19.2, as follows:
>> [ ] +1, Approve the release
>> [ ] -1, Do not approve the release (please provide specific
>> comments)
>> 
>> The staging area contains the following artifacts:
>> * JIRA release notes [1],
>> * the official Apache source release and binary convenience
>> releases
>> to be deployed to dist.apache.org [2], which are signed with the
>> key
>> with fingerprint 8C1FC56D16B0029D [3],
>> * all artifacts to be deployed to the Maven Central Repository [4],
>> * source code tag "release-1.19.2-rc1" [5],
>> * website pull request listing the new release and adding
>> announcement
>> blog post [6].
>> 
>> The vote will be open for at least 72 hours. It is adopted by
>> majority
>> approval, with at least 3 PMC affirmative votes.
>> 
>> Verification instructions can be found here [7]. You’re not
>> required
>> to verify everything, but please mention what you have tested along
>> with your +/- vote.
>> 
>> Thanks,
>> Alex
>> 
>> [1]
>> 
> 
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354783
>> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.19.2-rc1/
>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>> [4]
>> 
> 
>> https://repository.apache.org/content/repositories/orgapacheflink-1782/org/apache/flink/
>> [5] https://github.com/apache/flink/tree/release-1.19.2-rc1
>> [6] https://github.com/apache/flink-web/pull/771
>> [7]
>> 
> 
>> https://cwiki.apache.org/confluence/display/FLINK/Verifying+a+Flink+Release

Re: [VOTE] Release 1.19.2, release candidate #1

2025-02-11 Thread Márton Balassi
+1 (binding)

1. Verified the archives, checksums, and signatures
2. Extracted and inspected the source code for binaries
3. Built the source code
4. Verified license files / headers

Best,
Marton


Re: [VOTE] Release 1.19.2, release candidate #1

2025-02-11 Thread Robert Metzger
+1 (binding)

- maven staging repo contents look fine
- src archive looks fine
- local cluster starts on macos from binary release artifacts



[jira] [Created] (FLINK-37302) Support timers in PTFs

2025-02-11 Thread Timo Walther (Jira)
Timo Walther created FLINK-37302:


 Summary: Support timers in PTFs
 Key: FLINK-37302
 URL: https://issues.apache.org/jira/browse/FLINK-37302
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API, Table SQL / Planner
Reporter: Timo Walther
Assignee: Timo Walther


Support timer capabilities as described in FLIP-440.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] FLIP-508: Add support for Smile format for Compiled plans

2025-02-11 Thread Sergey Nuyanzin
Thanks for clarifying this Timo!


If there is no other feedback, I'm going to start voting in a couple of days.

On Fri, Feb 7, 2025 at 5:13 PM Gyula Fóra  wrote:

> Thanks Timo for the extra info on the context of this change.
>
> +1
>
> Gyula
>
> On Fri, Feb 7, 2025 at 5:05 PM Timo Walther  wrote:
>
> > Hi Gyula,
> >
> > Sergey and I spent a significant amount of time researching different
> > formats.
> >
> > When we introduced CompiledPlan, the question whether we want to use a
> > binary format for performance and efficiency reasons immediately came
> > up, but we decided to postpone this discussion.
> >
> > Back then we thought about BSON, but looking at the most popular formats
> > that support all JSON types natively and offer a lossless conversion
> > between JSON and the binary format, we ended up with Smile. Also, given
> > that we have a large Jackson-based code base that can
> > serialize/deserialize all RexNode, StreamExecNode, DataTypes, etc.,
> > Smile seems to be the best fit.
> >
> > For clarification: We won't change the default serialization. All
> > methods writeToFile/readFromFile() and APIs (e.g. EXECUTE COMPILED PLAN)
> > still operate primarily on JSON. The binary format is mostly intended
> > for advanced use cases. The given Flink API can be used to convert to
> > JSON at any time.
> >
> > Regards,
> > Timo
> >
> >
> > On 07.02.25 16:14, Gyula Fóra wrote:
> > > Hey!
> > > Do we have some examples of other frameworks/projects etc using the
> Smile
> > > format?
> > >
> > > This seems to be a somewhat arbitrary change with regard to the
> selected
> > > format, my concern is that this will make the compiled plan less useful
> > in
> > > general as it's harder to parse with standard tools.
> > >
> > > What is the main problem with the current json format?
> > >
> > > Thanks
> > > Gyula
> > >
> > > On Fri, Feb 7, 2025 at 3:31 PM Sergey Nuyanzin 
> > wrote:
> > >
> > >> Hi everyone,
> > >>
> > >> I would like to initiate a discussion for the FLIP-508[1] below, which
> > adds
> > >> support for Smile[2] format for Compiled plans
> > >>
> > >> Looking forward to hearing from you.
> > >>
> > >> [1]
> > >>
> > >>
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-508%3A+Add+support+for+Smile+format+for+Compiled+plans
> > >> [2] https://github.com/FasterXML/smile-format-specification
> > >>
> > >> --
> > >> Best regards,
> > >> Sergey
> > >>
> > >
> >
> >
>


-- 
Best regards,
Sergey
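For readers unfamiliar with Smile: it is a binary, lossless encoding of the JSON data model, and Smile payloads are self-identifying via a magic header. A minimal sketch (illustrative Python only, not Flink or Jackson code; the payload bytes are invented) of telling a Smile-encoded compiled plan apart from a JSON one:

```python
# Per the Smile format specification, a Smile stream begins with a 4-byte
# header whose first three bytes are ':' ')' '\n', followed by a version byte.
SMILE_MAGIC = b":)\n"

def is_smile(data: bytes) -> bool:
    """Return True if the payload looks like Smile-encoded data."""
    return data[:3] == SMILE_MAGIC

def is_json(data: bytes) -> bool:
    """Cheap heuristic: a JSON compiled plan starts with an object or array."""
    stripped = data.lstrip()
    return stripped[:1] in (b"{", b"[")

# A JSON plan and a fake Smile-framed payload (body bytes are placeholders):
json_plan = b'{"flinkVersion": "1.19", "nodes": []}'
smile_payload = b":)\n\x00" + b"..."  # magic + version byte + body

print(is_smile(json_plan), is_json(json_plan))          # False True
print(is_smile(smile_payload), is_json(smile_payload))  # True False
```

This also illustrates Timo's point that both representations can coexist: a tool reading a plan file can sniff the header and fall back to JSON parsing.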


[jira] [Created] (FLINK-37303) Bump guava to 33.4.0

2025-02-11 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-37303:
---

 Summary: Bump guava to 33.4.0
 Key: FLINK-37303
 URL: https://issues.apache.org/jira/browse/FLINK-37303
 Project: Flink
  Issue Type: Technical Debt
Affects Versions: shaded-19.0
Reporter: Sergey Nuyanzin
Assignee: Sergey Nuyanzin


Among others there is a fix for NPE in {{ImmutableMap.Builder}} 
https://github.com/google/guava/commit/70a98115d828f17d8c997b43347b4ce989130bce
and some perf improvements in loggers initialization 
https://github.com/google/guava/commit/4fe1df56bd74e9eec8847bdb15c5be51f528e8c8





[ANNOUNCE] Apache flink-connector-hive 3.0.0 released

2025-02-11 Thread Sergey Nuyanzin
The Apache Flink community is very happy to announce the release of
Apache flink-connector-hive 3.0.0.

Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data
streaming applications.

The release is available for download at:
https://flink.apache.org/downloads.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352591

We would like to thank all contributors of the Apache Flink community
who made this release possible!

Regards,
Release Manager


Re: Flink CDC to Paimon

2025-02-11 Thread Xiqian YU
Hi Taher,

Since we’re creating a DataStream-based pipeline job with SQL Server CDC, 
schema change events must be handled manually. A possible approach would be:

1) Enable schema change events with `.includeSchemaChanges(true)` option, so 
DDL events will be parsed and encoded in `SourceRecord`s.

2) Write a customized `DebeziumDeserializationSchema` class and parse schema 
change events. `MySqlEventDeserializer#deserializeSchemaChangeRecord` could be 
used as a reference [1].

3) Evolve sink schema of Paimon tables with `PaimonCatalog` manually. 
`PaimonMetadataApplier` [2] is an existing schema evolving implementation 
supporting a few frequently used schema change events.

Also, CDC Pipeline framework [3] has provided a fully-automatic schema sensing 
and evolving solution, but unfortunately Microsoft SQL Server source is not 
supported yet until we close #3445 [4] or #3507 [5].

[1] 
https://github.com/apache/flink-cdc/blob/master/flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-mysql/src/main/java/org/apache/flink/cdc/connectors/mysql/source/MySqlEventDeserializer.java
[2] 
https://github.com/apache/flink-cdc/blob/master/flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-paimon/src/main/java/org/apache/flink/cdc/connectors/paimon/sink/PaimonMetadataApplier.java
[3] 
https://nightlies.apache.org/flink/flink-cdc-docs-release-3.3/docs/core-concept/data-pipeline/
[4] https://github.com/apache/flink-cdc/pull/3445
[5] https://github.com/apache/flink-cdc/pull/3507

Best Regards,
Xiqian
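The split between DDL and data events in step 2 can be sketched generically. This is a hedged illustration only: the record shapes below are invented dicts, whereas the real objects are Debezium SourceRecords handled inside a DebeziumDeserializationSchema.

```python
# Hypothetical sketch of the routing a custom deserializer must perform:
# separate schema-change (DDL) events from data-change events.
def route_record(record: dict, data_out: list, ddl_out: list) -> None:
    """Dispatch one CDC record to the data stream or the DDL stream."""
    if record.get("historyRecord") is not None:
        # With includeSchemaChanges(true), DDL arrives as a
        # schema-history payload (shape invented here for illustration).
        ddl_out.append(record["historyRecord"])
    else:
        data_out.append(record["payload"])

data, ddl = [], []
route_record({"payload": {"op": "c", "after": {"id": 1}}, "historyRecord": None}, data, ddl)
route_record({"historyRecord": {"ddl": "ALTER TABLE t ADD COLUMN c INT"}}, data, ddl)
print(len(data), len(ddl))  # 1 1
```

The DDL branch is where step 3 hooks in: each parsed schema change is applied to the Paimon catalog before the corresponding data records are written.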

On Feb 11, 2025, at 15:59, Taher Koitawala wrote:

Hi Devs,
  As a POC we are trying to create a streaming pipeline from MSSQL CDC
to Paimon:

To do this we are doing
1. MSSQL Server CDC operator
2. Transform operator
3. Paimon sink

We have written the CDC connector, which is a JsonDebeziumDeserialisedSchema
String

I wish to write this to a Paimon table with the same columns as the source.

As far as I know, Paimon automatically handles schema updates like new field
additions.

Can someone please point me to how to write this stream efficiently to a
Paimon table with schema updates?

For now I have a SourceFunction,

which produces the record mentioned above.

Regards,
Taher Koitawala



[jira] [Created] (FLINK-37300) Add database native type information to Column struct to generate ddl correctly

2025-02-11 Thread He Wang (Jira)
He Wang created FLINK-37300:
---

 Summary: Add database native type information to Column struct to 
generate ddl correctly
 Key: FLINK-37300
 URL: https://issues.apache.org/jira/browse/FLINK-37300
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Reporter: He Wang


Many data types in the database do not have a one-to-one correspondence with 
the data types in Flink. So to ensure that the data types on the source and 
sink sides match accurately, we need to add the column type information in the 
database to the Column struct.





Flink CDC to Paimon

2025-02-11 Thread Taher Koitawala
Hi Devs,
   As a POC we are trying to create a streaming pipeline from MSSQL CDC
to Paimon:

To do this we are doing
1. MSSQL Server CDC operator
2. Transform operator
3. Paimon sink

We have written the CDC connector, which is a JsonDebeziumDeserialisedSchema
String

I wish to write this to a Paimon table with the same columns as the source.

As far as I know, Paimon automatically handles schema updates like new field
additions.

Can someone please point me to how to write this stream efficiently to a
Paimon table with schema updates?

For now I have a SourceFunction,

which produces the record mentioned above.

Regards,
Taher Koitawala


[DISCUSS] Pluggable Batching for Async Sink in Flink

2025-02-11 Thread Poorvank Bhatia
Hey everyone,

I’d like to propose adding a pluggable batching mechanism to AsyncSinkWriter
to enable custom batch formation strategies.
Currently, batching is based on batch size and record count, but this
approach is suboptimal for sinks like Cassandra, which require
partition-aware batching. Specifically, batches should be formed so that
all requests within a batch belong to the same partition, ensuring more
efficient writes.

The proposal introduces a minimal `BatchCreator` interface, enabling users
to define custom batching strategies while maintaining backward
compatibility with a default implementation.

For full details, please refer to the proposal document.
Associated Jira.

Thanks,
Poorvank
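The idea behind partition-aware batching can be sketched as follows. This is a hedged, standalone illustration: the function name, signature, and tuple-based request shape are all invented for this sketch and are not the actual AsyncSink `BatchCreator` API.

```python
from collections import defaultdict

def create_partition_aware_batch(buffer, partition_key, max_batch_size):
    """Form a batch whose requests all share one partition key.

    Groups the buffered requests by partition, then batches from the
    fullest partition to maximize write efficiency.
    """
    by_partition = defaultdict(list)
    for request in buffer:
        by_partition[partition_key(request)].append(request)
    fullest = max(by_partition.values(), key=len)
    batch = fullest[:max_batch_size]
    remaining = [r for r in buffer if r not in batch]
    return batch, remaining

# Requests modeled as (partition, payload) tuples for illustration:
buffer = [("p1", 1), ("p2", 2), ("p1", 3), ("p1", 4)]
batch, rest = create_partition_aware_batch(buffer, lambda r: r[0], max_batch_size=2)
print(batch)  # [('p1', 1), ('p1', 3)]
print(rest)   # [('p2', 2), ('p1', 4)]
```

The default implementation mentioned in the proposal would simply take the first N buffered requests regardless of partition, which is what preserves backward compatibility.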


Re: [VOTE] Release 1.20.1, release candidate #1

2025-02-11 Thread Leonard Xu
+1 (binding)

- verified signatures
- verified hashsums 
- built from source code with JDK 1.8 succeeded
- checked Github release tag
- checked release notes
- reviewed the web PR 

Best,
Leonard

> On Feb 7, 2025, at 19:08, Rui Fan <1996fan...@gmail.com> wrote:
> 
> +1 (binding)
> 
> 1. Verified the archives, checksums, and signatures
> 2. Extracted and inspected the source code for binaries
> 3. Built the source code
> 4. Reviewed web PR and left one comment
> 
> Best,
> Rui
> 
> On Fri, Feb 7, 2025 at 6:54 PM Gyula Fóra  wrote:
> 
>> +1 (binding)
>> 
>> - Reviewed release notes
>> - Verified hashes, signatures, built from source
>> - Verified release artifacts
>> - Checked website PR
>> 
>> Cheers
>> Gyula
>> 
>> On Fri, Feb 7, 2025 at 10:16 AM David Radley 
>> wrote:
>> 
>>> Hi Alex,
>>> 
>>> +1 (non-binding)
>>> I checked:
>>> - Reviewed JIRA release notes
>>> - Verify hashes and verify signatures
>>> - Source code artifacts matching the current release
>>> - Read the announcement blog and LGTM
>>> 
>>> Thanks for the pointers.
>>> 
>>> 1. Yes this worked
>>> 2. This did not work, but I downloaded the KEYS file and imported it;
>>> gpg --import < KEYS worked.
>>> I got this warning with the verification:
>>> gpg: WARNING: This key is not certified with a trusted signature!
>>> gpg:          There is no indication that the signature belongs to the
>>> owner.
>>> 
>>> 
>>> 
>>> 
>>> From: Alexander Fedulov 
>>> Date: Wednesday, 5 February 2025 at 17:30
>>> To: dev@flink.apache.org 
>>> Subject: [EXTERNAL] Re: [VOTE] Release 1.20.1, release candidate #1
>>> Hi David,
>>> 
>>> Thanks for verifying the release.
>>> 
>>> 1. sha256 and sha512 are not expected to be the same. Try
>>> shasum -a 512 flink-1.20.1-bin-scala_2.12.tgz
>>> 
>>> 2. I believe you do not have my public key imported. You can find it
>>> in the project KEYS file (afedulov) [1]. Try
>>> gpg --keyserver keys.openpgp.org --recv-key 8C1FC56D16B0029D
>>> 
>>> Best,
>>> Alex
>>> 
>>> [1] https://dist.apache.org/repos/dist/release/flink/KEYS
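Alex's first point can be demonstrated directly: SHA-256 and SHA-512 are different algorithms with different digest lengths, so the two "long numbers" David compared are never expected to match. A small Python sketch (stand-in bytes, not the actual release archive):

```python
# sha256 and sha512 digests of the same input differ in both value and
# length: 64 vs 128 hex characters (256 vs 512 bits).
import hashlib

data = b"flink-1.20.1-bin-scala_2.12.tgz contents (stand-in bytes)"

sha256 = hashlib.sha256(data).hexdigest()
sha512 = hashlib.sha512(data).hexdigest()

print(len(sha256))  # 64
print(len(sha512))  # 128
# Compare each digest only against the matching .sha256 / .sha512 file.
```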
>>> 
>>> On Tue, 4 Feb 2025 at 15:03, David Radley 
>> wrote:
 
 Hi Alex,
 Thanks for driving this release
 
Checking the SHAs as per
>> https://www.apache.org/info/verification.html
>>> and see
 
  *   shasum -a 256 flink-1.20.1-bin-scala_2.12.tgz
 5fc4551cd11aee83a9569392339c43fb32a60847db456e1cb4fa64c8daae0186
>>> flink-1.20.1-bin-scala_2.12.tgz
 
  *
>>> 
>> https://dist.apache.org/repos/dist/dev/flink/flink-1.20.1-rc1/flink-1.20.1-bin-scala_2.12.tgz.sha512
 is
 
>>> 
>> c50105a095839c663074d6a242e72d0e27886f584e0d568a89e3cf84b87da2b5cf188e230f65890a4622192ddad49b347d57ea5fe1c3510d27484b64a4b4c415
>>> flink-1.20.1-bin-scala_2.12.tgz
 
 I was expecting the long numbers to be the same.
 
 I was checking  the asc file and got
 
 gpg --verify flink-1.20.1-bin-scala_2.12.tgz.asc
>>> flink-1.20.1-bin-scala_2.12.tgz
 gpg: Signature made Wed 29 Jan 00:08:19 2025 GMT
gpg:                using RSA key
>>> 5575E80D59BBB73C15A479B88C1FC56D16B0029D
 gpg: Can't check signature: No public key
 
i.e. there appeared to be an error message.
 
 Am I missing something?
   kind regards, David.
 
 
 
 From: Alexander Fedulov 
 Date: Wednesday, 29 January 2025 at 12:32
 To: dev 
 Subject: [EXTERNAL] [VOTE] Release 1.20.1, release candidate #1
 Hi everyone,
 
 Please review and vote on the release candidate #1 for the version
 1.20.1, as follows:
 [ ] +1, Approve the release
 [ ] -1, Do not approve the release (please provide specific comments)
 
 The staging area contains the following artifacts:
 * JIRA release notes [1],
 * the official Apache source release and binary convenience releases
 to be deployed to dist.apache.org [2], which are signed with the key
 with fingerprint 8C1FC56D16B0029D [3],
 * all artifacts to be deployed to the Maven Central Repository [4],
 * source code tag "release-1.20.1-rc1" [5],
 * website pull request listing the new release and adding announcement
 blog post [6].
 
 The vote will be open for at least 72 hours. It is adopted by majority
 approval, with at least 3 PMC affirmative votes.
 
Verification instructions can be found here [7]. You’re not required
 to verify everything, but please mention what you have tested along
 with your +/- vote.
 
 Thanks,
 Alex
 
 [1]
>>> 
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354994
 [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.20.1-rc1/
 [3] https://dist.apache.org/repos/dist/release/flink/KEYS
 [4]
>>> 
>> https://repository.apache.org/content/repositories/orgapacheflink-1783/org/apache/flink/
 [5] https://github.com/apache/flink/tree/release-1.20.1-rc1
 [6] https://github.com/apache/flink-web/pull/772
 [7]
>>> 
>> https://cwiki.apache.org/confluence/display/FLINK/Verifying+a+Flink+Release

Re: [DISCUSS] FLIP-XXX: Blue/Green Deployments for Flink on Kubernetes: Phase 1 (basic)

2025-02-11 Thread Sergio Chong Loo
Hi Gyula,

Great questions, I’ll track these topics in our docs accordingly as well.

> - What will be the naming convention for the created FlinkDeployment A/B?
> Should we introduce some logic for the users to control this?


Currently, the controller takes the original resource name as the main prefix 
and adds the “-a” or “-b” suffixes (in an alternating fashion) to distinguish 
them. We could switch this to a numeric pattern.

We could indeed allow the user to have some control on the deployments’ name 
prefixes or even the _type_ of suffixes. Thoughts?

> - Can the user "turn" an existing FlinkDeployment into a Blue / Green
> deployment?

This is a very good idea, we could introduce a “flag” in the CRD that would 
instruct the controller to treat an existing FlinkDeployment as an “-a” type 
and proceed redeploying it as a Blue/Green instead.

> - Did you consider alternative names for this CR?

This is one of the most open topics, some other ideas were “Active/Standby” or 
“Rolling Deployments”… “Blue/Green” simply stuck a bit more. Any other 
suggestions?

Thanks,
Sergio


> On Feb 9, 2025, at 5:17 PM, Gyula Fóra  wrote:
> 
> Hi Sergio!
> 
> I think this will be a great addition to the operator and is a feature
> request that comes up again and again.
> 
> Some minor comments/question:
> - What will be the naming convention for the created FlinkDeployment A/B?
> Should we introduce some logic for the users to control this?
> - Can the user "turn" an existing FlinkDeployment into a Blue / Green
> deployment?
> - Did you consider alternative names for this CR?
> 
> Cheers,
> Gyula
> 
> On Fri, Jan 24, 2025 at 6:00 PM Gyula Fóra  wrote:
> 
>> Hi Eric,
>> 
>> The link is fixed and the FLIP contains everything from the google doc, I
>> updated the link there as well.
>> 
>> Thanks
>> Gyula
>> 
>> On Fri, Jan 24, 2025 at 5:55 PM Eric Xiao 
>> wrote:
>> 
>>> Hi Sergio,
>>> 
>>> Can you update the Phase 1 Google Doc's sharing permissions? I also
>>> believe
>>> the link in the FLIP leads to an internal Apple tool:
>>> 
>>> https://quip-apple.com/account/login?next=https%3A%2F%2Fquip-apple.com%2F7BpiAdeZ7Ow3
>>> 
>>> On Tue, Jan 14, 2025 at 12:15 PM Sergio Chong Loo
>>>  wrote:
>>> 
 FLIP-503:
 
>>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=337677648
 
 - Sergio
 
 
> On Jan 13, 2025, at 2:39 PM, Sergio Chong Loo 
 wrote:
> 
> Hi folks,
> 
> As proposed in [1] we would like to more formally continue the
 discussion to add Blue/Green deployments support to Flink via the
 Kubernetes Operator.
> 
> For clarity and easier review experience we’ve separated this effort
 into 2 phases:
> 
> 1) Blue/Green Deployments for Flink on Kubernetes: Phase 1 (basic) -
 THIS FLIP
> 
> 2) Blue/Green Deployments for Flink on Kubernetes: Phase 2 (with
 Coordination) - in its corresponding FLIP/email, which will follow
>>> shortly
> 
> 
> Phase 1 Google Doc:
 
>>> https://docs.google.com/document/d/159I9kPmHkPMNoKp7iIgntMZjrGz5J2_svOfuaNvV5HA/edit?pli=1&tab=t.0
> 
> 
> Thanks everyone in advance, we’re really excited to bring this feature
 to the community!
> 
> - Sergio
> 
> 
> [1] https://lists.apache.org/thread/m2sqgz455fzlvp0h9kbs1zmc5gj2s162
 
 
>>> 
>> 



[jira] [Created] (FLINK-37301) Fix wrong flag returned by ForStStateBackend#supportsSavepointFormat

2025-02-11 Thread Zakelly Lan (Jira)
Zakelly Lan created FLINK-37301:
---

 Summary: Fix wrong flag returned by 
ForStStateBackend#supportsSavepointFormat
 Key: FLINK-37301
 URL: https://issues.apache.org/jira/browse/FLINK-37301
 Project: Flink
  Issue Type: Bug
Reporter: Zakelly Lan


The `ForStStateBackend` does not override `supportsSavepointFormat`, so it 
returns the wrong flag. We should fix this.
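A sketch of the shape of such a fix, using self-contained stand-ins for the real Flink types (the `SavepointFormatType` values, the default behavior, and which format ForSt actually supports are assumptions for illustration, not facts from this issue):

```java
// Self-contained sketch only: SavepointFormatType and StateBackend below are
// stand-ins for the real Flink types, not their actual definitions.
enum SavepointFormatType { CANONICAL, NATIVE }

interface StateBackend {
    // Hypothetical default: without an override, every backend reports the
    // same capability, which is what made ForSt report the wrong flag.
    default boolean supportsSavepointFormat(SavepointFormatType formatType) {
        return formatType == SavepointFormatType.CANONICAL;
    }
}

class ForStStateBackend implements StateBackend {
    @Override
    public boolean supportsSavepointFormat(SavepointFormatType formatType) {
        // Report the format this backend actually writes (assumed NATIVE here)
        // instead of inheriting the default.
        return formatType == SavepointFormatType.NATIVE;
    }
}
```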



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Release 1.19.2, release candidate #1

2025-02-11 Thread Alexander Fedulov
Thanks everyone for the verification and the votes.
The voting is now closed.

Best,
Alex

On Tue, 11 Feb 2025 at 14:30, Robert Metzger  wrote:
>
> +1 (binding)
>
> - maven staging repo contents look fine
> - src archive looks fine
> - local cluster starts on macos from binary release artifacts
>
>
> On Tue, Feb 11, 2025 at 1:38 PM Leonard Xu  wrote:
>
> > +1 (binding)
> >
> > - verified signatures
> > - verified hashsums
> > - built from source code with JDK 1.8 succeeded
> > - checked Github release tag
> > - checked release notes
> > - reviewed the web PR
> >
> > Best,
> > Leonard
> >
> > > On Feb 11, 2025 at 19:09, Márton Balassi wrote:
> > >
> > > +1 (binding)
> > >
> > > 1. Verified the archives, checksums, and signatures
> > > 2. Extracted and inspected the source code for binaries
> > > 3. Built the source code
> > > 4. Verified license files / headers
> > >
> > > Best,
> > > Marton
> > >
> > > On Mon, Feb 3, 2025 at 12:37 PM Maximilian Michels 
> > wrote:
> > >
> > >> +1 (binding)
> > >>
> > >> 1. Verified the archives, checksums, and signatures
> > >> 2. Extracted and inspected the source code for binaries
> > >> 3. Built the source code
> > >> 4. Verified license files / headers
> > >>
> > >> Sergey wrote:
> > >>> one minor finding:
> > >>> in staging repos [1] it has wrong description (1.9.2 instead of 1.19.2)
> > >>
> > >> Good catch Sergey!
> > >>
> > >> Alex wrote:
> > >>> I realized I made a typo after publishing the staging artifacts.
> > >>> Renaming is not allowed, unfortunately.
> > >>
> > >> No worries, the description won't be relevant beyond the staging
> > >> phase. We use the direct link to verify, which you supplied. Besides,
> > >> Apache releases formally only require proper source code.
> > >>
> > >> Thanks for doing the release!
> > >>
> > >> -Max
> > >>
> > >> On Mon, Feb 3, 2025 at 11:57 AM Alexander Fedulov
> > >>  wrote:
> > >>>
> >  one minor finding:
> >  in staging repos [1] it has wrong description (1.9.2 instead of
> > 1.19.2)
> > >>>
> > >>> I realized I made a typo after publishing the staging artifacts.
> > >>> Renaming is not allowed, unfortunately.
> > >>> As far as I can tell, it is only relevant for the staging area itself,
> > >>> so, unless I am missing something, I believe it is not needed to start
> > >>> over just because of it.
> > >>>
> > >>> Best,
> > >>> Alex
> > >>>
> > >>> On Mon, 3 Feb 2025 at 00:37, Sergey Nuyanzin 
> > >> wrote:
> > 
> >  +1 (non-binding)
> > 
> >  one minor finding:
> >  in staging repos [1] it has wrong description (1.9.2 instead of
> > 1.19.2)
> > 
> > 
> >  - verified signatures && hashsums
> >  - checked git tag
> >  - checked no binaries in source files
> >  - built the source with jdk1.8 and maven 3.8.6
> > 
> >  [1] https://repository.apache.org/#stagingRepositories
> > 
> >  On Thu, Jan 30, 2025 at 5:45 PM Ferenc Csaky
> > >> 
> >  wrote:
> > 
> > > +1 (non-binding)
> > >
> > > - verified signatures
> > > - verified hashsums
> > > - checked GH release tag
> > > - checked no binaries in source archives
> > > - built the source with jdk8 and maven 3.8.6
> > > - reviewed web PR
> > > - deployed WordCount job to local cluster
> > >
> > > Thanks,
> > > Ferenc
> > >
> > >
> > >
> > >
> > > On Thursday, January 30th, 2025 at 03:22, Yanquan Lv <
> > >> decq12y...@gmail.com>
> > > wrote:
> > >
> > >>
> > >>
> > >> +1 (non-binding)
> > >>
> > >> I checked:
> > >> - Review JIRA release notes
> > >> - Verify hashes and verify signatures
> > >> - Build success from source with JDK11 & maven3.8.6
> > >> - Source code artifacts matching the current release
> > >> - Read the announcement blog and LGTM
> > >>
> > >> -- Forwarded message -
> > >> From: Alexander Fedulov alexander.fedu...@gmail.com
> > >>
> > >> Date: Wed, Jan 29, 2025, 1:30 AM
> > >> Subject: [VOTE] Release 1.19.2, release candidate #1
> > >> To: dev dev@flink.apache.org
> > >>
> > >>
> > >>
> > >> Hi everyone,
> > >>
> > >> Please review and vote on the release candidate #1 for the version
> > >> 1.19.2, as follows:
> > >> [ ] +1, Approve the release
> > >> [ ] -1, Do not approve the release (please provide specific
> > >> comments)
> > >>
> > >> The staging area contains the following artifacts:
> > >> * JIRA release notes [1],
> > >> * the official Apache source release and binary convenience
> > >> releases
> > >> to be deployed to dist.apache.org [2], which are signed with the
> > >> key
> > >> with fingerprint 8C1FC56D16B0029D [3],
> > >> * all artifacts to be deployed to the Maven Central Repository [4],
> > >> * source code tag "release-1.19.2-rc1" [5],
> > >> * website pull request listing the new release and adding
> > >> announcement
> > >> blog post [6].
> > >>
> > >

[RESULT] [VOTE] Release 1.19.2, release candidate #1

2025-02-11 Thread Alexander Fedulov
Hi all,

I'm happy to announce that we have unanimously approved this release.

There are XXX approving votes, XXX of which are binding:
* Maximilian Michels (binding)
* Marton Balassi (binding)
* Leonard Xu (binding)
* Robert Metzger (binding)
* Yanquan Lv (non-binding)
* Ferenc Csaky (non-binding)
* Sergey Nuyanzin (non-binding)

There are no disapproving votes.

Thanks everyone!

[1] https://lists.apache.org/thread/9tqhyc160svt8q697gnn76djdxfd5hzg

Best,
Alex


Re: [VOTE] Release 1.20.1, release candidate #1

2025-02-11 Thread Alexander Fedulov
Thanks everyone for the verification and the votes.
The voting is now closed.

Best,
Alex

On Tue, 11 Feb 2025 at 13:51, Leonard Xu  wrote:
>
> +1 (binding)
>
> - verified signatures
> - verified hashsums
> - built from source code with JDK 1.8 succeeded
> - checked Github release tag
> - checked release notes
> - reviewed the web PR
>
> Best,
> Leonard
>
> > On Feb 7, 2025 at 19:08, Rui Fan <1996fan...@gmail.com> wrote:
> >
> > +1 (binding)
> >
> > 1. Verified the archives, checksums, and signatures
> > 2. Extracted and inspected the source code for binaries
> > 3. Built the source code
> > 4. Reviewed web PR and left one comment
> >
> > Best,
> > Rui
> >
> > On Fri, Feb 7, 2025 at 6:54 PM Gyula Fóra  wrote:
> >
> >> +1 (binding)
> >>
> >> - Reviewed release notes
> >> - Verified hashes, signatures, built from source
> >> - Verified release artifacts
> >> - Checked website PR
> >>
> >> Cheers
> >> Gyula
> >>
> >> On Fri, Feb 7, 2025 at 10:16 AM David Radley 
> >> wrote:
> >>
> >>> Hi Alex,
> >>>
> >>> +1 (non-binding)
> >>> I checked:
> >>> - Reviewed JIRA release notes
> >>> - Verify hashes and verify signatures
> >>> - Source code artifacts matching the current release
> >>> - Read the announcement blog and LGTM
> >>>
> >>> Thanks for the pointers.
> >>>
> >>> 1. Yes, this worked.
> >>> 2. This did not work, but I downloaded the KEYS file and imported it;
> >>> gpg --import < KEYS worked.
> >>> I got this warning with the verification:
> >>> gpg: WARNING: This key is not certified with a trusted signature!
> >>> gpg:          There is no indication that the signature belongs to the
> >>> owner.
> >>>
> >>>
> >>>
> >>>
> >>> From: Alexander Fedulov 
> >>> Date: Wednesday, 5 February 2025 at 17:30
> >>> To: dev@flink.apache.org 
> >>> Subject: [EXTERNAL] Re: [VOTE] Release 1.20.1, release candidate #1
> >>> Hi David,
> >>>
> >>> Thanks for verifying the release.
> >>>
> >>> 1. sha256 and sha512 are not expected to be the same. Try
> >>> shasum -a 512 flink-1.20.1-bin-scala_2.12.tgz
> >>>
> >>> 2. I believe you do not have my public key imported. You can find it
> >>> in the project KEYS file (afedulov) [1]. Try
> >>> gpg --keyserver keys.openpgp.org --recv-key 8C1FC56D16B0029D
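The `shasum -a 512` comparison discussed above can also be reproduced programmatically; a minimal sketch (the artifact name is the one from this thread, and the printed digest should match the first token of the published `.sha512` file — everything else is illustrative):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Computes the same SHA-512 hex digest that `shasum -a 512 <file>` prints,
// for comparison against the published checksum file.
public class Sha512Check {

    public static String sha512Hex(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-512");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(data)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-512 is always available
        }
    }

    public static void main(String[] args) throws Exception {
        // Compare the output with flink-1.20.1-bin-scala_2.12.tgz.sha512
        byte[] bytes = Files.readAllBytes(Path.of("flink-1.20.1-bin-scala_2.12.tgz"));
        System.out.println(sha512Hex(bytes));
    }
}
```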
> >>>
> >>> Best,
> >>> Alex
> >>>
> >>> [1] https://dist.apache.org/repos/dist/release/flink/KEYS
> >>>
> >>> On Tue, 4 Feb 2025 at 15:03, David Radley 
> >> wrote:
> 
>  Hi Alex,
>  Thanks for driving this release
> 
>  Checking the sha’s  as per
> >> https://www.apache.org/info/verification.html
> >>> and see
> 
>   *   shasum -a 256 flink-1.20.1-bin-scala_2.12.tgz
>  5fc4551cd11aee83a9569392339c43fb32a60847db456e1cb4fa64c8daae0186
> >>> flink-1.20.1-bin-scala_2.12.tgz
> 
>   *
> >>>
> >> https://dist.apache.org/repos/dist/dev/flink/flink-1.20.1-rc1/flink-1.20.1-bin-scala_2.12.tgz.sha512
>  is
> 
> >>>
> >> c50105a095839c663074d6a242e72d0e27886f584e0d568a89e3cf84b87da2b5cf188e230f65890a4622192ddad49b347d57ea5fe1c3510d27484b64a4b4c415
> >>> flink-1.20.1-bin-scala_2.12.tgz
> 
>  I was expecting the long numbers to be the same.
> 
>  I was checking  the asc file and got
> 
>  gpg --verify flink-1.20.1-bin-scala_2.12.tgz.asc
> >>> flink-1.20.1-bin-scala_2.12.tgz
>  gpg: Signature made Wed 29 Jan 00:08:19 2025 GMT
 gpg: using RSA key
> >>> 5575E80D59BBB73C15A479B88C1FC56D16B0029D
>  gpg: Can't check signature: No public key
> 
 i.e. there appeared to be an error message.
> 
>  Am I missing something?
>    kind regards, David.
> 
> 
> 
>  From: Alexander Fedulov 
>  Date: Wednesday, 29 January 2025 at 12:32
>  To: dev 
>  Subject: [EXTERNAL] [VOTE] Release 1.20.1, release candidate #1
>  Hi everyone,
> 
>  Please review and vote on the release candidate #1 for the version
>  1.20.1, as follows:
>  [ ] +1, Approve the release
>  [ ] -1, Do not approve the release (please provide specific comments)
> 
>  The staging area contains the following artifacts:
>  * JIRA release notes [1],
>  * the official Apache source release and binary convenience releases
>  to be deployed to dist.apache.org [2], which are signed with the key
>  with fingerprint 8C1FC56D16B0029D [3],
>  * all artifacts to be deployed to the Maven Central Repository [4],
>  * source code tag "release-1.20.1-rc1" [5],
>  * website pull request listing the new release and adding announcement
>  blog post [6].
> 
>  The vote will be open for at least 72 hours. It is adopted by majority
>  approval, with at least 3 PMC affirmative votes.
> 
>  Verification instruction can be found here [7] . You’re not required
>  to verify everything, but please mention what you have tested along
>  with your +/- vote.
> 
>  Thanks,
>  Alex
> 
>  [1]
> >>>
> >> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&versio

[RESULT] [VOTE] Release 1.20.1, release candidate #1

2025-02-11 Thread Alexander Fedulov
Hi all,

I'm happy to announce that we have unanimously approved this release [1].

There are 8 approving votes, 5 of which are binding:
* Maximilian Michels (binding)
* Weijie Guo (binding)
* Gyula Fora (binding)
* Leonard Xu (binding)
* Rui Fan (binding)
* Yanquan Lv (non-binding)
* Ferenc Csaky (non-binding)
* David Radley (non-binding)

There are no disapproving votes.

Thanks everyone!

[1] https://lists.apache.org/thread/n2cgjtgw5twq9p4dz86dl9fho6fmylq3

Best,
Alex


[jira] [Created] (FLINK-37304) Dynamic Kafka Source logs credentials in plaintext under INFO

2025-02-11 Thread Leonard Xu (Jira)
Leonard Xu created FLINK-37304:
--

 Summary: Dynamic Kafka Source logs credentials in plaintext under 
INFO
 Key: FLINK-37304
 URL: https://issues.apache.org/jira/browse/FLINK-37304
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: kafka-3.4.0
Reporter: Leonard Xu
 Fix For: kafka-4.0.0


The new Flink Dynamic Kafka Source logs credentials in plaintext under INFO. 
Specifically in 
https://github.com/apache/flink-connector-kafka/blob/f6a077a9dd8d1d5e43fc545cc9baab227d8438a0/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/dynamic/source/enumerator/DynamicKafkaSourceEnumerator.java#L350
 and 
https://github.com/apache/flink-connector-kafka/blob/f6a077a9dd8d1d5e43fc545cc9baab227d8438a0/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/dynamic/source/reader/DynamicKafkaSourceReader.java#L232
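One possible shape of a remediation (a sketch under assumptions, not the connector's actual fix) is to mask likely-sensitive values before the Kafka client properties reach an INFO log line:

```java
import java.util.Properties;
import java.util.stream.Collectors;

// Sketch: render Properties for logging with sensitive values masked.
// The key list below is an assumption for illustration, not the
// connector's actual remediation.
public class PropsMasker {
    private static final String[] SENSITIVE = {"password", "sasl.jaas.config", "secret"};

    public static String toMaskedString(Properties props) {
        return props.stringPropertyNames().stream()
                .sorted()
                .map(k -> k + "=" + (isSensitive(k) ? "******" : props.getProperty(k)))
                .collect(Collectors.joining(", "));
    }

    private static boolean isSensitive(String key) {
        String lower = key.toLowerCase();
        for (String s : SENSITIVE) {
            if (lower.contains(s)) {
                return true;
            }
        }
        return false;
    }
}
```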






[jira] [Created] (FLINK-37308) HybridSourceReader doesn't support pauseOrResumeSplits

2025-02-11 Thread Xingcan Cui (Jira)
Xingcan Cui created FLINK-37308:
---

 Summary: HybridSourceReader doesn't support pauseOrResumeSplits
 Key: FLINK-37308
 URL: https://issues.apache.org/jira/browse/FLINK-37308
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Common
Affects Versions: 1.20.1
Reporter: Xingcan Cui


Currently, the {{HybridSourceReader}} doesn't implement the 
{{pauseOrResumeSplits()}} method, which prevents hybrid sources from 
functioning correctly with watermark alignment. I'm not quite sure if we can 
simply add the following implementation for it.
{code:java}
@Override
public void pauseOrResumeSplits(
        Collection<String> splitsToPause,
        Collection<String> splitsToResume) {
    currentReader.pauseOrResumeSplits(splitsToPause, splitsToResume);
}
{code}
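If plain delegation turns out not to be safe while the hybrid source is switching between underlying sources, a guarded variant might look like the following self-contained sketch (`SourceReaderLike` is a stand-in for Flink's `SourceReader`, and dropping the call while no reader is active is an assumption, not verified Flink semantics):

```java
import java.util.Collection;

// Sketch of the delegation above with a null guard for the window during a
// source switch when no underlying reader is active.
public class HybridDelegationSketch {

    public interface SourceReaderLike {
        void pauseOrResumeSplits(Collection<String> splitsToPause, Collection<String> splitsToResume);
    }

    private SourceReaderLike currentReader; // may be null mid-switch

    public void setCurrentReader(SourceReaderLike reader) {
        this.currentReader = reader;
    }

    public void pauseOrResumeSplits(Collection<String> splitsToPause, Collection<String> splitsToResume) {
        if (currentReader != null) {
            // Delegate to whichever underlying reader currently owns the splits.
            currentReader.pauseOrResumeSplits(splitsToPause, splitsToResume);
        }
        // else: no active reader yet; the request is dropped (assumption).
    }
}
```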
 





Re:Re: [VOTE] FLIP-497: Early Fire Support for Flink SQL Interval Join

2025-02-11 Thread Xuyang
+1 (non-binding)

--

Best!
Xuyang





On 2025-01-30 00:18:47, "Xingcan Cui" wrote:
>+1 (binding)
>
>Best,
>Xingcan
>
>On Mon, Jan 27, 2025 at 8:50 PM Venkatakrishnan Sowrirajan 
>wrote:
>
>> +1 (non-binding)
>>
>> Regards
>> Venkata krishnan
>>
>>
>> On Mon, Jan 27, 2025 at 2:05 PM Weiqing Yang 
>> wrote:
>>
>> > Hi All,
>> >
>> > I'd like to start a vote on FLIP-497: Early Fire Support for Flink SQL
>> > Interval Join [1].
>> > The discussion thread can be found here [2].
>> >
>> > The vote will remain open for at least 72 hours unless there are
>> objections
>> > or insufficient votes.
>> >
>> > [1]
>> >
>> >
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-497%3A+Early+Fire+Support+for+Flink+SQL+Interval+Join
>> > [2]
>> >
>> https://lists.apache.org/thread/p3w90rprdtv3vyjog3vl0rql5fvm703j
>> >
>> >
>> > Best regards,
>> > Weiqing
>> >
>>


[jira] [Created] (FLINK-37307) Flink CDC CI failed due to OceanBaseE2eITCase

2025-02-11 Thread JunboWang (Jira)
JunboWang created FLINK-37307:
-

 Summary: Flink CDC CI failed due to OceanBaseE2eITCase
 Key: FLINK-37307
 URL: https://issues.apache.org/jira/browse/FLINK-37307
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Affects Versions: cdc-3.4.0
Reporter: JunboWang


[https://github.com/apache/flink-cdc/actions/runs/13260216427/job/37067737782?pr=3914]
{code:java}
Error:  Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 541.368 s <<< FAILURE! - in org.apache.flink.cdc.connectors.tests.OceanBaseE2eITCase
Error:  OceanBaseE2eITCase.testOceanBaseCDC  Time elapsed: 324.912 s  <<< FAILURE!
array lengths differed, expected.length=10 actual.length=11; arrays first differed at element [10]; expected: but was:<111,scooter,Big 2-wheel scooter ,5.18,null,null>
	at org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:89)
	at org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:28)
	at org.junit.Assert.internalArrayEquals(Assert.java:534)
	at org.junit.Assert.assertArrayEquals(Assert.java:285)
	at org.junit.Assert.assertArrayEquals(Assert.java:300)
	at org.apache.flink.cdc.common.test.utils.JdbcProxy.checkResult(JdbcProxy.java:70)
	at org.apache.flink.cdc.common.test.utils.JdbcProxy.checkResultWithTimeout(JdbcProxy.java:93)
	at org.apache.flink.cdc.connectors.tests.OceanBaseE2eITCase.testOceanBaseCDC(OceanBaseE2eITCase.java:179)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
	at org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:29)
	at org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:29)
	at org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:29)
Caused by: java.lang.AssertionError: expected: but was:<111,scooter,Big 2-wheel scooter ,5.18,null,null>
	at org.junit.Assert.fail(Assert.java:89)
	at org.junit.Assert.failNotEquals(Assert.java:835)
	at org.junit.Assert.assertEquals(Assert.java:120)
	at org.junit.Assert.assertEquals(Assert.java:146)
	at org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:87)
	... 12 more
{code}
 





Unsubscribe (退订)

2025-02-11 Thread 乔志权
Unsubscribe (退订)


 Replied Message 
| From | cun8cun8 |
| Date | 01/15/2025 10:20 |
| To | dev |
| Subject | 退 |

Re: [RESULT] [VOTE] Release 1.19.2, release candidate #1

2025-02-11 Thread Leonard Xu
Thanks Alex for driving this release, good job!

minor: XXX should be replaced by the vote numbers, i.e. there are 7 approving 
votes, 4 of which are binding.

Best,
Leonard

> On Feb 11, 2025 at 23:29, Alexander Fedulov wrote:
> 
> Hi all,
> 
> I'm happy to announce that we have unanimously approved this release.
> 
> There are XXX approving votes, XXX of which are binding:
> * Maximilian Michels (binding)
> * Marton Balassi (binding)
> * Leonard Xu (binding)
> * Robert Metzger (binding)
> * Yanquan Lv (non-binding)
> * Ferenc Csaky (non-binding)
> * Sergey Nuyanzin (non-binding)
> 
> There are no disapproving votes.
> 
> Thanks everyone!
> 
> [1] https://lists.apache.org/thread/9tqhyc160svt8q697gnn76djdxfd5hzg
> 
> Best,
> Alex



[jira] [Created] (FLINK-37305) JDBC Connector CI failed due to network issue

2025-02-11 Thread Leonard Xu (Jira)
Leonard Xu created FLINK-37305:
--

 Summary: JDBC Connector CI failed due to network issue
 Key: FLINK-37305
 URL: https://issues.apache.org/jira/browse/FLINK-37305
 Project: Flink
  Issue Type: Bug
  Components: Connectors / JDBC
Affects Versions: jdbc-3.1.2
Reporter: Leonard Xu
 Fix For: jdbc-3.3.0


https://github.com/apache/flink-connector-jdbc/actions/runs/13192808286/job/36828738137
{code:java}
WARNING: ConnectionID:1 ClientConnectionId: 
ebda2785-c8c8-468f-8144-a23626291121 Prelogin error: host localhost port 32838 
Unexpected end of prelogin response after 0 bytes read
Error:  Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 241.187 
s <<< FAILURE! - in 
org.apache.flink.connector.jdbc.sqlserver.table.SqlServerDynamicTableSourceITCase
Error:  
org.apache.flink.connector.jdbc.sqlserver.table.SqlServerDynamicTableSourceITCase
  Time elapsed: 241.187 s  <<< ERROR!
org.testcontainers.containers.ContainerLaunchException: Container startup 
failed for image mcr.microsoft.com/azure-sql-edge:latest
at 
org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:359)
at 
org.testcontainers.containers.GenericContainer.start(GenericContainer.java:330)
at 
org.apache.flink.connector.jdbc.sqlserver.testutils.SqlServerDatabase$SqlServerContainer.start(SqlServerDatabase.java:81)
at 
org.apache.flink.connector.jdbc.testutils.resources.DockerResource.start(DockerResource.java:27)
at 
org.apache.flink.connector.jdbc.testutils.DatabaseExtension.lambda$startResource$1(DatabaseExtension.java:180)
at 
org.apache.flink.connector.jdbc.testutils.DatabaseExtension.getResource(DatabaseExtension.java:116)
at 
org.apache.flink.connector.jdbc.testutils.DatabaseExtension.beforeAll(DatabaseExtension.java:125)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
Suppressed: org.apache.flink.util.FlinkRuntimeException: Container is 
stopped.
at 
org.apache.flink.connector.jdbc.sqlserver.testutils.SqlServerDatabase.getMetadata(SqlServerDatabase.java:44)
at 
org.apache.flink.connector.jdbc.sqlserver.testutils.SqlServerDatabase.getMetadataDB(SqlServerDatabase.java:54)
at 
org.apache.flink.connector.jdbc.testutils.DatabaseExtension.lambda$getManagedTables$0(DatabaseExtension.java:77)
at java.base/java.util.Optional.ifPresent(Optional.java:183)
at 
org.apache.flink.connector.jdbc.testutils.DatabaseExtension.getManagedTables(DatabaseExtension.java:75)
at 
org.apache.flink.connector.jdbc.testutils.DatabaseExtension.getManagedTables(DatabaseExtension.java:66)
at 
org.apache.flink.connector.jdbc.testutils.DatabaseExtension.afterAll(DatabaseExtension.java:148)
... 1 more
Caused by: org.rnorth.ducttape.RetryCountExceededException: Retry limit hit 
with exception
at 
org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:88)
at 
org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:344)
... 7 more
Caused by: org.testcontainers.containers.ContainerLaunchException: Could not 
create/start container
at 
org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:563)
at 
org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:354)
at 
org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
... 8 more
Caused by: java.lang.IllegalStateException: Wait strategy failed. Container 
exited with code 1
at 
org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:533)
... 10 more
Caused by: java.lang.IllegalStateException: Container is started, but cannot be 
accessed by (JDBC URL: jdbc:sqlserver://localhost:32838;encrypt=false), please 
check container logs
at 
org.testcontainers.containers.JdbcDatabaseContainer.waitUntilContainerStarted(JdbcDatabaseContainer.java:209)
at 
org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:500)
... 10 more
Caused by: java.sql.SQLException: Could not create new connection
at 
org.testcontainers.containers.JdbcDatabaseContainer.createConnection(JdbcDatabaseContainer.java:295)
at 
org.testcontainers.containers.JdbcDatabaseContainer.createConnection(JdbcDatabaseContainer.java:251)
at 
org.testcontainers.containers.JdbcDatabaseContainer.waitUntilContainerStarted(JdbcDatabaseContainer.java:191)
... 11 more
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP 
connection to the host localhost, port 32838 has failed. Error: "Connection 
refused (Connection refused). Verify the connection properties. Make sure that 
an instance of SQL Server is running 

[jira] [Created] (FLINK-37306) Flink CDC CI failed due to timezone shift

2025-02-11 Thread Leonard Xu (Jira)
Leonard Xu created FLINK-37306:
--

 Summary: Flink CDC CI failed due to timezone shift
 Key: FLINK-37306
 URL: https://issues.apache.org/jira/browse/FLINK-37306
 Project: Flink
  Issue Type: Bug
Affects Versions: cdc-3.4.0
Reporter: Leonard Xu
 Fix For: cdc-3.4.0


https://github.com/apache/flink-cdc/actions/runs/13263036795/job/37023781884
{code:java}
org.apache.flink.table.api.ValidationException: The MySQL server has a timezone 
offset (0 seconds ahead of UTC) which does not match the configured timezone 
GMT+05:00. Specify the right server-time-zone to avoid inconsistencies for 
time-related fields.
at 
org.apache.flink.cdc.connectors.mysql.MySqlValidator.checkTimeZone(MySqlValidator.java:215)
 ~[classes/:?]
at 
org.apache.flink.cdc.connectors.mysql.MySqlValidator.validate(MySqlValidator.java:76)
 ~[classes/:?]
at 
org.apache.flink.cdc.connectors.mysql.source.MySqlSource.createEnumerator(MySqlSource.java:200)
 ~[classes/:?]
at 
org.apache.flink.runtime.source.coordinator.SourceCoordinator.start(SourceCoordinator.java:225)
 ~[flink-runtime-1.19.1.jar:1.19.1]
at 
org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator$DeferrableCoordinator.applyCall(RecreateOnResetOperatorCoordinator.java:332)
 ~[flink-runtime-1.19.1.jar:1.19.1]
at 
org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator.start(RecreateOnResetOperatorCoordinator.java:72)
 ~[flink-runtime-1.19.1.jar:1.19.1]
at 
org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.start(OperatorCoordinatorHolder.java:204)
 ~[flink-runtime-1.19.1.jar:1.19.1]
at 
org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:173)
 ~[flink-runtime-1.19.1.jar:1.19.1]
at 
org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:85)
 ~[flink-runtime-1.19.1.jar:1.19.1]
at 
org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:634)
 ~[flink-runtime-1.19.1.jar:1.19.1]
at 
org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:1076)
 ~[flink-runtime-1.19.1.jar:1.19.1]
at 
org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:993)
 ~[flink-runtime-1.19.1.jar:1.19.1]
at 
org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:433) 
~[flink-runtime-1.19.1.jar:1.19.1]
at 
org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:198)
 ~[flink-rpc-core-1.19.1.jar:1.19.1]
at 
org.apache.flink.runtime.rpc.pekko.PekkoRpcActor$StoppedState.lambda$start$0(PekkoRpcActor.java:618)
 ~[flink-rpc-akkadcc12604-509f-4681-a1e2-327606512cf0.jar:1.19.1]
at 
org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
 ~[flink-rpc-core-1.19.1.jar:1.19.1]
at 
org.apache.flink.runtime.rpc.pekko.PekkoRpcActor$StoppedState.start(PekkoRpcActor.java:617)
 ~[flink-rpc-akkadcc12604-509f-4681-a1e2-327606512cf0.jar:1.19.1]
at 
org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleControlMessage(PekkoRpcActor.java:190)
 ~[flink-rpc-akkadcc12604-509f-4681-a1e2-327606512cf0.jar:1.19.1]
at 
org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) 
[flink-rpc-akkadcc12604-509f-4681-a1e2-327606512cf0.jar:1.19.1]
at 
org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) 
[flink-rpc-akkadcc12604-509f-4681-a1e2-327606512cf0.jar:1.19.1]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:127) 
[flink-rpc-akkadcc12604-509f-4681-a1e2-327606512cf0.jar:1.19.1]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:126) 
[flink-rpc-akkadcc12604-509f-4681-a1e2-327606512cf0.jar:1.19.1]
at 
org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) 
[flink-rpc-akkadcc12604-509f-4681-a1e2-327606512cf0.jar:1.19.1]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:175) 
[flink-rpc-akkadcc12604-509f-4681-a1e2-327606512cf0.jar:1.19.1]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176) 
[flink-rpc-akkadcc12604-509f-4681-a1e2-327606512cf0.jar:1.19.1]
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) 
[flink-rpc-akkadcc12604-509f-4681-a1e2-327606512cf0.jar:1.19.1]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) 
[flink-rpc-akkadcc12604-509f-4681-a1e2-327606512cf0.jar:1.19.1]
at 
org.apache.pekko.actor.AbstractActor.aroundReceive(AbstractActor.scala:229) 
[flink-rpc-akkadcc12604-509f-4681-a1e2-327606512cf0.jar:1.19.1]
at org.apache.pekko.actor.ActorCell.receiveMessage(