[jira] [Created] (FLINK-37651) Simplify StaticArgument builders for easier type inference

2025-04-10 Thread Timo Walther (Jira)
Timo Walther created FLINK-37651:


 Summary: Simplify StaticArgument builders for easier type inference
 Key: FLINK-37651
 URL: https://issues.apache.org/jira/browse/FLINK-37651
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Timo Walther
Assignee: Timo Walther


Implementing type strategies in PTFs needs to become easier.
Currently this code is needed if input == output:
{code:java}
@Override
public TypeInference getTypeInference(DataTypeFactory typeFactory) {
    return TypeInference.newBuilder()
            .staticArguments(
                    StaticArgument.table(
                            "input",
                            Row.class,
                            false,
                            EnumSet.of(StaticArgumentTrait.TABLE_AS_SET)))
            .outputTypeStrategy(
                    callContext ->
                            Optional.of(callContext.getArgumentDataTypes().get(0)))
            .build();
}
{code}
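A purely hypothetical sketch of the kind of shorthand this issue could enable (the shorter StaticArgument.table overload and the reuse of a predefined argument-based output strategy below do not exist in this form today; they only illustrate the direction):
{code:java}
// Hypothetical shorthand, for illustration only.
@Override
public TypeInference getTypeInference(DataTypeFactory typeFactory) {
    return TypeInference.newBuilder()
            .staticArguments(
                    // hypothetical overload: mandatory table argument, traits as varargs
                    StaticArgument.table("input", Row.class, StaticArgumentTrait.TABLE_AS_SET))
            // reuse a predefined strategy instead of a hand-written lambda
            .outputTypeStrategy(TypeStrategies.argument(0))
            .build();
}
{code}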



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Release flink-connector-jdbc v3.3.0, release candidate #1

2025-04-10 Thread Yanquan Lv
+1 (non-binding)
I checked:
- Review JIRA release notes
- Verify hashes and verify signatures
- Build success from source with JDK8 & maven3.8.6
- Source code artifacts matching the current release
- Read the announcement blog

Hang Ruan  wrote on Thu, Apr 10, 2025 at 11:57:

> Hi everyone,
> Please review and vote on release candidate #1 for
> flink-connector-jdbc v3.3.0, as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
>
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release to be deployed to dist.apache.org
> [2], which are signed with the key with fingerprint BAF7F56F454D3FE7
> [3],
> * all artifacts to be deployed to the Maven Central Repository [4][5],
> * source code tag v3.3.0-rc1 [6],
> * website pull request listing the new release [7].
> * CI build of the tag [8].
>
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
>
> Thanks,
> Hang
>
> [1]
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354577
> [2]
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-jdbc-3.3.0-rc1
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4]
> https://repository.apache.org/content/repositories/staging/org/apache/flink/flink-connector-jdbc/3.3.0-1.19/
> [5]
> https://repository.apache.org/content/repositories/staging/org/apache/flink/flink-connector-jdbc/3.3.0-1.20/
> [6] https://github.com/apache/flink-connector-jdbc/releases/tag/v3.3.0-rc1
> [7] https://github.com/apache/flink-web/pull/788
> [8]
> https://github.com/apache/flink-connector-jdbc/actions/runs/14370662678
>


Re: [VOTE] Release flink-connector-jdbc v3.3.0, release candidate #1

2025-04-10 Thread Xiqian YU
+1 (non-binding)

Checklist:

- Verified tarball checksum and signatures are valid
- Confirmed jars were built with JDK 1.8.0_301 and Apache Maven 3.8.6, Bytecode 
version is 52.0
- Compiled codes successfully with JDK 8, 11, and 17
- Tested MySQL JDBC read / write SQL jobs manually with Flink 1.20
- Reviewed release note and flink-web PR

Best Regards,
Xiqian

> On Apr 10, 2025, at 11:57, Hang Ruan  wrote:
> 
> Hi everyone,
> Please review and vote on release candidate #1 for
> flink-connector-jdbc v3.3.0, as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
> 
> 
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release to be deployed to dist.apache.org
> [2], which are signed with the key with fingerprint BAF7F56F454D3FE7
> [3],
> * all artifacts to be deployed to the Maven Central Repository [4][5],
> * source code tag v3.3.0-rc1 [6],
> * website pull request listing the new release [7].
> * CI build of the tag [8].
> 
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
> 
> Thanks,
> Hang
> 
> [1] 
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354577
> [2] 
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-jdbc-3.3.0-rc1
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4] 
> https://repository.apache.org/content/repositories/staging/org/apache/flink/flink-connector-jdbc/3.3.0-1.19/
> [5] 
> https://repository.apache.org/content/repositories/staging/org/apache/flink/flink-connector-jdbc/3.3.0-1.20/
> [6] https://github.com/apache/flink-connector-jdbc/releases/tag/v3.3.0-rc1
> [7] https://github.com/apache/flink-web/pull/788
> [8] https://github.com/apache/flink-connector-jdbc/actions/runs/14370662678



[jira] [Created] (FLINK-37649) Datagen connector cannot set length for collection type

2025-04-10 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-37649:
--

 Summary: Datagen connector cannot set length for collection type
 Key: FLINK-37649
 URL: https://issues.apache.org/jira/browse/FLINK-37649
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Common
Affects Versions: 2.0.0
Reporter: Weijie Guo
Assignee: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-37650) Enable Stale PR Github Action for Kafka Connector

2025-04-10 Thread Thomas Cooper (Jira)
Thomas Cooper created FLINK-37650:
-

 Summary: Enable Stale PR Github Action for Kafka Connector
 Key: FLINK-37650
 URL: https://issues.apache.org/jira/browse/FLINK-37650
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Kafka
Reporter: Thomas Cooper


Given that we have enabled the Stale PR GitHub Action in the main Flink 
repository, it would be good to enable it in the connector repositories as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] FLIP-518: Introduce a community review process

2025-04-10 Thread Robert Metzger
Thanks for the FLIP.

In the benefits section, you are mentioning:


   - community reviewing then becomes a respected way to contribute to
   Flink on the road to becoming a committer. So, it is in the contributor’s
   interest to review.


Another benefit related to this is that it is easier for the PMC to screen
for committer candidates based on their review involvement and quality.

"*Update the wiki process"*

Which process do you mean? Can you share a link?

  > *community-reviewed-requires-deep-review*

I am not convinced that this is a good label to introduce, because it is an
"easy way out" for community reviewers. If the community review process is
for contributors to behave like committers, then the community should also
be involved in deep reviews. Otherwise, the PMC cannot assess whether a
community reviewer is ready for committership.

>  *community-reviewed-suggest-close*

How can community reviewers tell Flinkbot that they want to close a PR?
Will we introduce a new command?

> *community-health-initiative-reviewed*

I like that the other labels are all prefixed with `community-reviewed`, but this
one isn't.
What's the benefit of adding a CHI-specific label, instead of just having
one process for community reviews? Aren't CHI members part of this?

I know this is a bit of an evil "let's create performance metrics" thought,
but what do you think about measuring the accept rate of community
reviewers? E.g., for a given user, what % of their approved PRs have been merged
w/o further feedback by a committer?
I guess one problem of this idea is that folks can focus on just approving
typo fixes. E.g., there will be an incentive for people to open typo-fix
PRs, and there will be an incentive for folks to approve those. Just an
idea. Maybe for v2 of this.

Best,
Robert


On Mon, Mar 31, 2025 at 5:33 PM Doğuşcan Namal 
wrote:

> Yep, ship it.
>
> I don’t see any harm in starting to follow the suggested process, since it only
> adds a few labels to the existing PRs.
>
> We could learn more over time and improve the process along the way.
>
> QQ. Do the proposed flinkbot commands work already?
>
> On Fri, 21 Mar 2025 at 15:32, David Radley 
> wrote:
>
> > I would like to start a discussion around FLIP-518<
> > https://cwiki.apache.org/confluence/x/_wuWF> which proposes a new
> > community review process. Please let me know if you support this idea, or
> > would like changes.
> >
> > We hope this process improvement will
> >
> >   *   Encourage more people in the community to review PRs, by formally
> > recognising reviews as contributions.
> >   *   Reduce the workload on committers.
> >   *   Reduce our technical backlog.
> >
> >
> > Kind regards, David.
> >
> > Unless otherwise stated above:
> >
> > IBM United Kingdom Limited
> > Registered in England and Wales with number 741598
> > Registered office: Building C, IBM Hursley Office, Hursley Park Road,
> > Winchester, Hampshire SO21 2JN
> >
>


Wikipedia update

2025-04-10 Thread David Radley
Hi,
I have updated the Wikipedia entry for Flink [1] to mention version 2 and last 
year’s Flink Forward. I know this is not the definitive support specification; 
I left 1.19 in a still-maintained state and can amend it if we think this is no 
longer accurate.
Kind regards, David.
[1] https://en.wikipedia.org/wiki/Apache_Flink

Unless otherwise stated above:

IBM United Kingdom Limited
Registered in England and Wales with number 741598
Registered office: Building C, IBM Hursley Office, Hursley Park Road, 
Winchester, Hampshire SO21 2JN


[jira] [Created] (FLINK-37648) Flink Kinesis connector lost track of 1 shard.

2025-04-10 Thread roland (Jira)
roland created FLINK-37648:
--

 Summary: Flink Kinesis connector lost track of 1 shard.
 Key: FLINK-37648
 URL: https://issues.apache.org/jira/browse/FLINK-37648
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kinesis
Affects Versions: 1.20.1
 Environment: Flink version: 1.20

flink-connector-aws-kinesis-streams' version: 5.0.0-1.20

kinesis stream's shard number: 2

consumer type: EFO

parallelism: 1
Reporter: roland


Hi,

I have a Flink 1.20 job which uses flink-connector-aws-kinesis-streams to 
consume from a Kinesis stream with 2 shards using EFO.

The job ran for 40 days without any issue, but at some point it failed to 
subscribe to both of the shards and only subscribed to shard-0. 

 

1. Before 2025-04-08 07:20 (UTC), the Flink job subscribed to 2 shards:
 

 
{code:java}
2025-04-08 07:12:57,788 INFO  org.apache.flink.connector.kinesis.source.reader.fanout.FanOutKinesisShardSubscription [] - Subscription complete - shardId-0001
2025-04-08 07:12:57,788 INFO  org.apache.flink.connector.kinesis.source.reader.fanout.FanOutKinesisShardSubscription [] - Activating subscription to shard shardId-0001 with starting position StartingPosition{shardIteratorType=AFTER_SEQUENCE_NUMBER, startingMarker=49661185042803593150728679800204632415527945304203591698} for consumer arn:xxx
2025-04-08 07:12:57,841 INFO  org.apache.flink.connector.kinesis.source.reader.fanout.FanOutKinesisShardSubscription [] - Successfully subscribed to shard shardId-0001 at StartingPosition{shardIteratorType=AFTER_SEQUENCE_NUMBER, startingMarker=49661185042803593150728679800204632415527945304203591698} using consumer arn:xxx
2025-04-08 07:12:57,841 INFO  org.apache.flink.connector.kinesis.source.reader.fanout.FanOutKinesisShardSubscription [] - Successfully subscribed to shard shardId-0001 with starting position StartingPosition{shardIteratorType=AFTER_SEQUENCE_NUMBER, startingMarker=49661185042803593150728679800204632415527945304203591698} for consumer arn:xxx
2025-04-08 07:15:06,360 INFO  org.apache.flink.connector.kinesis.source.reader.fanout.FanOutKinesisShardSubscription [] - Activating subscription to shard shardId- with starting position StartingPosition{shardIteratorType=AFTER_SEQUENCE_NUMBER, startingMarker=49661066447418326619680247193290876689240616667852046338} for consumer arn:xxx{code}
As shown above, both shard-0 and shard-1 are subscribed and consumed.

2. After that timestamp, the Flink job only subscribed to 1 shard:

 
{code:java}
2025-04-08 07:20:06,432 INFO  org.apache.flink.connector.kinesis.source.reader.fanout.FanOutKinesisShardSubscription [] - Subscription complete - shardId- (arn:
2025-04-08 07:20:06,433 INFO  org.apache.flink.connector.kinesis.source.reader.fanout.FanOutKinesisShardSubscription [] - Activating subscription to shard shardId- with starting position StartingPosition{shardIteratorType=AFTER_SEQUENCE_NUMBER, startingMarker=49661066447418326619680247330352841488049219966007771138} for consumer arn:
2025-04-08 07:20:06,442 INFO  org.apache.flink.connector.kinesis.source.reader.fanout.FanOutKinesisShardSubscription [] - Successfully subscribed to shard shardId- at StartingPosition{shardIteratorType=AFTER_SEQUENCE_NUMBER, startingMarker=49661066447418326619680247330352841488049219966007771138} using consumer arn:
2025-04-08 07:20:06,442 INFO  org.apache.flink.connector.kinesis.source.reader.fanout.FanOutKinesisShardSubscription [] - Successfully subscribed to shard shardId- with starting position StartingPosition{shardIteratorType=AFTER_SEQUENCE_NUMBER, startingMarker=49661066447418326619680247330352841488049219966007771138} for consumer arn:. {code}
 

 

There are no exception logs in between; however, I found this warning log appearing 
every 40 minutes:

 
{code:java}
2025-04-09 09:35:29,477 WARN  io.netty.channel.DefaultChannelPipeline [] - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.io.IOException: An error occurred on the connection: java.nio.channels.ClosedChannelException, [channel: 086eb48c]. All streams will be closed
at software.amazon.awssdk.http.nio.netty.internal.http2.MultiplexedChannelRecord.decorateConnectionException(MultiplexedChannelRecord.java:213) ~[blob_p-30497883442822f5c82caef062655823d22e6214-d096e666a38e1d8371c3989c721c6564:?]
at software.amazon.awssdk.http.nio.netty.internal.http2.MultiplexedChannelRecord.lambda$closeChildChannels$10(Multi

Re: [VOTE] Release flink-connector-elasticsearch v3.1.0, release candidate #1

2025-04-10 Thread weijie guo
Hi Yanquan:

> I noticed that we have supported ElasticSearch8 in this release and
provided the flink-connector-elasticsearch8 jar[1], but we did not
provide flink-sql-connector-elasticsearch8 jar like 6 & 7.
Is this as expected? and do we have a plan for the release of this sql
connector.

Yes, we do not yet support the ES8 SQL connector (:. But I want to give users a
release that supports 1.18, 1.19, and 1.20 as soon as
possible (unfortunately, the last release, v3.0, only supported Flink up
to 1.17...).

Anyway, what you mentioned is really a meaningful feature, but I would
prefer to include it in the next release. WDYT?


Best regards,

Weijie


Yanquan Lv  wrote on Wed, Apr 9, 2025 at 11:30:

> I checked:
> - Review JIRA release notes
> - Verify hashes and verify signatures
> - Build success from source with JDK11 & maven3.8.6
> - Source code artifacts matching the current release
> - Read the announcement blog and LGTM
>
> I noticed that we have supported ElasticSearch8 in this release and
> provided the flink-connector-elasticsearch8 jar[1], but we did not
> provide flink-sql-connector-elasticsearch8 jar like 6 & 7.
> Is this as expected? and do we have a plan for the release of this sql
> connector.
>
> [1]
>
> https://repository.apache.org/content/repositories/orgapacheflink-1795/org/apache/flink
>
>
> Zakelly Lan  wrote on Mon, Apr 7, 2025 at 19:44:
>
> > +1 (binding)
> >
> > I have verified:
> >
> >  - Checksum and signature
> >  - There are no binaries in the source archive
> >  - Release tag and staging jars
> >  - Built from source
> >  - Release notes and web PR
> >
> >
> > Best,
> > Zakelly
> >
> > On Thu, Apr 3, 2025 at 12:18 PM weijie guo 
> > wrote:
> >
> > > Hi everyone,
> > >
> > >
> > > Please review and vote on the release candidate #1 for v3.1.0, as
> > follows:
> > >
> > > [ ] +1, Approve the release
> > >
> > > [ ] -1, Do not approve the release (please provide specific comments)
> > >
> > >
> > > This release supports Flink 1.18, 1.19 and 1.20.
> > >
> > >
> > > The complete staging area is available for your review, which includes:
> > >
> > > * JIRA release notes [1],
> > >
> > > * the official Apache source release to be deployed to dist.apache.org
> > > [2],
> > >
> > > which are signed with the key with fingerprint
> > > 8D56AE6E7082699A4870750EA4E8C4C05EE6861F [3],
> > >
> > > * all artifacts to be deployed to the Maven Central Repository [4],
> > >
> > > * source code tag v3.1.0-rc1 [5],
> > >
> > > * website pull request listing the new release [6].
> > >
> > > * CI build of tag [7].
> > >
> > >
> > > The vote will be open for at least 72 hours. It is adopted by majority
> > >
> > > approval, with at least 3 PMC affirmative votes.
> > >
> > >
> > > Thanks,
> > >
> > > Weijie
> > >
> > >
> > > [1]
> > >
> > >
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352520
> > >
> > > [2]
> > >
> > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-elasticsearch-3.1.0-rc1/
> > >
> > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > >
> > > [4]
> > >
> https://repository.apache.org/content/repositories/orgapacheflink-1795/
> > >
> > > [5]
> > >
> > >
> > >
> >
> https://github.com/apache/flink-connector-elasticsearch/releases/tag/v3.1.0-rc1
> > >
> > > [6] https://github.com/apache/flink-web/pull/785
> > >
> > > [7]
> > >
> > >
> > >
> >
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/14210840013
> > >
> >
>


[RESULT][VOTE] FLIP-515: Dynamic Kafka Sink

2025-04-10 Thread Őrhidi Mátyás
Hi devs,

The vote[1] on FLIP-515[2] concluded with the following results:

Approving(+1) votes:

Gyula Fora  (binding)
Gabor Somogyi (binding)
Maximilian Michels  (binding)
Alexander Fedulov  (binding)
Thomas Weise (binding)

There were no disproving(-1) votes.

I'm happy to announce that FLIP-515 has been approved.

Thanks,
Matyas

[1] https://lists.apache.org/thread/w3ry1v5jvk2rgkpfyx4271ld47dq54h7
[2]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-515:+Dynamic+Kafka+Sink


[jira] [Created] (FLINK-37652) [Connectors/Opensearch] Move to Flink 2.0.0

2025-04-10 Thread Andriy Redko (Jira)
Andriy Redko created FLINK-37652:


 Summary: [Connectors/Opensearch] Move to Flink 2.0.0
 Key: FLINK-37652
 URL: https://issues.apache.org/jira/browse/FLINK-37652
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Opensearch
Reporter: Andriy Redko
 Fix For: 3.0.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-37653) Incremental snapshot framework supports assigning multiple StreamSplit

2025-04-10 Thread yux (Jira)
yux created FLINK-37653:
---

 Summary: Incremental snapshot framework supports assigning 
multiple StreamSplit
 Key: FLINK-37653
 URL: https://issues.apache.org/jira/browse/FLINK-37653
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Reporter: yux


Currently, HybridSplitAssigners and StreamSplitAssigners in the incremental 
snapshot framework implicitly assume that there will be at most one unbounded 
stream split, which isn't true for some data sources that support multiple change 
streams, like MongoDB and PolarDB.

This change should extend the assigners' API without changing existing data 
sources' behavior.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] FLIP-519: Introduce async lookup key ordered mode

2025-04-10 Thread shuai xu
Hi Zakelly,

Thank you for your response and for taking responsibility for generalizing the 
functionality of the 'Asynchronous Execution Model' (AEC). As we discussed 
earlier, the subsequent work on this FLIP will be based on operators that support 
AEC. If you need any further discussion, please feel free to reach out to me 
directly.
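
For readers following along, here is a minimal, framework-agnostic sketch of the key-ordered idea (requests that share a key run strictly one after another, while different keys can be in flight concurrently). It is purely illustrative and is not the FLIP's actual KeyedAsyncWaitOperator or the AEC implementation; the class name and asyncLookup function below are assumptions made only for the example:

import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative only: per-key ordering for asynchronous calls.
public final class KeyOrderedAsyncExecutor<K, T, R> {

    // Tail of the chain of pending requests for each key.
    private final Map<K, CompletableFuture<?>> lastPerKey = new ConcurrentHashMap<>();
    private final Function<T, CompletableFuture<R>> asyncLookup;

    public KeyOrderedAsyncExecutor(Function<T, CompletableFuture<R>> asyncLookup) {
        this.asyncLookup = asyncLookup;
    }

    // Expected to be called from a single thread (e.g. an operator's mailbox thread).
    public CompletableFuture<R> submit(K key, T record) {
        CompletableFuture<?> previous =
                lastPerKey.getOrDefault(key, CompletableFuture.completedFuture(null));
        // Chain the new request behind the previous one for the same key, so requests
        // sharing a key never overlap; requests for other keys are not blocked.
        // Note: in this simplified sketch, a failure also propagates to later
        // requests queued on the same key.
        CompletableFuture<R> result = previous.thenCompose(ignored -> asyncLookup.apply(record));
        lastPerKey.put(key, result);
        // Drop the bookkeeping entry once this request is still the newest one for the key.
        result.whenComplete((r, t) -> lastPerKey.remove(key, result));
        return result;
    }
}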

> On Apr 10, 2025, at 12:31, Zakelly Lan  wrote:
> 
> Hi all,
> 
> I have also added a 'Follow up' section at the end of the FLIP-425[1]
> describing this.
> 
> [1] https://cwiki.apache.org/confluence/x/S4p3EQ
> 
> 
> Best,
> Zakelly
> 
> On Wed, Apr 9, 2025 at 12:42 PM Zakelly Lan  wrote:
> 
>> Thanks for driving this!
>> 
>> +1 for the FLIP given there is a solid user case behind.
>> 
>> Shuai and I had a discussion and we agree that `KeyedAsyncWaitOperator`
>> in current FLIP shares similar functionality with the `Asynchronous
>> Execution Model (AEC)` introduced in FLIP-425[1]. We think it is better to
>> generalize the AEC for all keyed ordered cases, not only for state access.
>> So I'd make this happen after the approval of this FLIP. Hope this helps
>> all the similar operators to implement.
>> 
>> [1] https://cwiki.apache.org/confluence/x/S4p3EQ
>> 
>> 
>> Best,
>> Zakelly
>> 
>> On Tue, Apr 8, 2025 at 10:00 AM shuai xu  wrote:
>> 
>>> Hi devs,
>>> 
>>> I'd like to start a discussion on FLIP-519: Introduce async lookup key
>>> ordered mode[1].
>>> 
>>> The Flink system currently supports both record-level ordered and
>>> unordered output modes for asynchronous lookup joins. However, it does
>>> not guarantee the processing order of records sharing the same key.
>>> 
>>> As highlighted in [2], there are two key requirements for enhancing
>>> async io operations:
>>> 1. Ensuring the processing order of records with the same key is a
>>> common requirement in DataStream.
>>> 2. Sequential processing of records sharing the same upsertKey when
>>> performing lookup join in Flink SQL is essential for maintaining
>>> correctness.
>>> 
>>> This optimization aims to balance correctness and performance for
>>> stateful streaming workloads. The FLIP then introduces a new operator,
>>> KeyedAsyncWaitOperator, to support the optimization. Besides, a new
>>> option is added to control the behaviour and avoid influencing existing
>>> jobs.
>>> 
>>> Please find more details in the FLIP wiki document [1]. Looking forward
>>> to your feedback.
>>> 
>>> [1]
>>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-519%3A++Introduce+async+lookup+key+ordered+mode
>>> [2] https://lists.apache.org/thread/wczzjhw8g0jcbs8lw2jhtrkw858cmx5n
>>> 
>>> Best,
>>> Xu Shuai
>>> 
>> 



Re: [DISCUSS] Planning Flink 2.1

2025-04-10 Thread Leonard Xu
+1 for the final release manager, looking forward to Flink 2.1.

Best,
Leonard

> On Apr 11, 2025, at 11:43, Ron Liu  wrote:
> 
> Hi everyone,
> 
> The discussion has been ongoing for a long time, and there is currently 1
> RM candidate, so the final release manager of 2.1 is Ron Liu. We will do
> the first release sync on April 23, 2025, at 9 am (UTC+2) and 3 pm (UTC+8).
> Welcome to the meeting!
> 
> Best regards,
> Ron
> 
> Ron Liu  wrote on Wed, Mar 26, 2025 at 11:44:
> 
>> Hi, David
>> 
>> Thanks for your kind reminder, we are planning the next minor release 2.1.
>> 
>> Thanks also to Xintong for additional background information.
>> 
>> Best,
>> Ron
>> 
>> Xintong Song  wrote on Wed, Mar 26, 2025 at 11:08:
>> 
>>> Thanks for the kick off and volunteering as release manager, Ron.
>>> 
>>> +1 for starting to prepare release 2.1 and for Ron as the release manager. The
>>> proposed feature freeze date sounds good to me.
>>> 
>>> @David,
>>> 
>>> Just to provide some background information.
>>> 
>>> In the Flink community, we use the terminologies Major, Minor and Bugfix
>>> releases, represented by the three digits in the version number
>>> respectively. E.g., 1.20.1 represents Major version 1, Minor version 20
>>> and
>>> Bugfix version 1.
>>> 
>>> The major differences are:
>>> 1. API compatibility: @Public APIs are guaranteed compatible across
>>> different Minor and Bugfix releases of the same Major
>>> version. @PublicEvolving APIs are guaranteed compatible across Bugfix
>>> releases of the same Minor version.
>>> 2. Bugfix releases typically only include bugfixes, but not feature /
>>> improvement code changes. Major and Minor releases include both feature /
>>> improvements and bugfixes.
>>> 
>>> The Flink community applies a time-based release planning [1] for Major /
>>> Minor releases. We try to deliver new features with a new release
>>> (typically Minor, unless we decide to break the API compatibility with a
>>> Major release) roughly every 4 months. As for bugfix releases, they are
>>> usually planned on-demand. E.g., to make a critical bugfix available to
>>> users, or observed significant bugfix issues have been merged for the
>>> release branch.
>>> 
>>> Best,
>>> 
>>> Xintong
>>> 
>>> 
>>> [1] https://cwiki.apache.org/confluence/display/FLINK/Time-based+releases
>>> 
>>> 
>>> 
>>> On Wed, Mar 26, 2025 at 12:14 AM David Radley 
>>> wrote:
>>> 
 Hi ,
 I am wondering what the thinking is about calling the next release 2.1
 rather than 2.0.1.  This numbering implies to me you are considering a
 minor release rather than a point release. For example
 https://commons.apache.org/releases/versioning.html,
 
Kind regards, David.
 
 
 From: Jark Wu 
 Date: Tuesday, 25 March 2025 at 06:52
 To: dev@flink.apache.org 
 Subject: [EXTERNAL] Re: [DISCUSS] Planning Flink 2.1
 Thanks, Ron Liu for kicking off the 2.1 release.
 
 +1 for Ron Liu to be the release manager and
 +1 for the feature freeze date
 
 Best,
 Jark
 
 On Mon, 24 Mar 2025 at 16:33, Ron Liu  wrote:
 
> Hi everyone,
> With the release announcement of Flink 2.0, it's a good time to kick
>>> off
> discussion of the next release 2.1.
> 
> - Release Managers
> 
> I'd like to volunteer as one of the release managers this time. It has
 been
> good practice to have a team of release managers from different
> backgrounds, so please raise your hand if you'd like to volunteer and
>>> get
> involved.
> 
> 
> - Timeline
> 
> Flink 2.0 has been released. With a target release cycle of 4 months,
> we propose a feature freeze date of *June 21, 2025*.
> 
> 
> - Collecting Features
> 
> As usual, we've created a wiki page[1] for collecting new features in
 2.1.
> 
> In the meantime, the release management team will be finalized in the
 next
> few days, and we'll continue to create Jira Boards and Sync meetings
> to make it easy for everyone to get an overview and track progress.
> 
> 
> Best,
> Ron
> 
> [1] https://cwiki.apache.org/confluence/display/FLINK/2.1+Release
> 
 
 Unless otherwise stated above:
 
 IBM United Kingdom Limited
 Registered in England and Wales with number 741598
 Registered office: Building C, IBM Hursley Office, Hursley Park Road,
 Winchester, Hampshire SO21 2JN
 
>>> 
>> 



Re: [DISCUSS] FLIP-519: Introduce async lookup key ordered mode

2025-04-10 Thread shuai xu
Hi all,

This FLIP will primarily focus on the implementation within the table module. 
As for support in the DataStream API, it will be addressed in a separate FLIP.

> On Apr 8, 2025, at 09:57, shuai xu  wrote:
> 
> Hi devs,
> 
> I'd like to start a discussion on FLIP-519: Introduce async lookup key
> ordered mode[1].
> 
> The Flink system currently supports both record-level ordered and
> unordered output modes for asynchronous lookup joins. However, it does
> not guarantee the processing order of records sharing the same key.
> 
> As highlighted in [2], there are two key requirements for enhancing
> async io operations:
> 1. Ensuring the processing order of records with the same key is a
> common requirement in DataStream.
> 2. Sequential processing of records sharing the same upsertKey when
> performing lookup join in Flink SQL is essential for maintaining
> correctness.
> 
> This optimization aims to balance correctness and performance for
> stateful streaming workloads. The FLIP then introduces a new operator,
> KeyedAsyncWaitOperator, to support the optimization. Besides, a new
> option is added to control the behaviour and avoid influencing existing
> jobs.
> 
> Please find more details in the FLIP wiki document [1]. Looking forward
> to your feedback.
> 
> [1] 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-519%3A++Introduce+async+lookup+key+ordered+mode
> [2] https://lists.apache.org/thread/wczzjhw8g0jcbs8lw2jhtrkw858cmx5n
> 
> Best,
> Xu Shuai



Re: [DISCUSS] Planning Flink 2.1

2025-04-10 Thread Ron Liu
Hi everyone,

The discussion has been ongoing for a long time, and there is currently 1
RM candidate, so the final release manager of 2.1 is Ron Liu. We will do
the first release sync on April 23, 2025, at 9 am (UTC+2) and 3 pm (UTC+8).
Welcome to the meeting!

Best regards,
Ron

Ron Liu  wrote on Wed, Mar 26, 2025 at 11:44:

> Hi, David
>
> Thanks for your kind reminder, we are planning the next minor release 2.1.
>
> Thanks also to Xintong for additional background information.
>
> Best,
> Ron
>
>> Xintong Song  wrote on Wed, Mar 26, 2025 at 11:08:
>
>> Thanks for the kick off and volunteering as release manager, Ron.
>>
>> +1 for starting to prepare release 2.1 and for Ron as the release manager. The
>> proposed feature freeze date sounds good to me.
>>
>> @David,
>>
>> Just to provide some background information.
>>
>> In the Flink community, we use the terminologies Major, Minor and Bugfix
>> releases, represented by the three digits in the version number
>> respectively. E.g., 1.20.1 represents Major version 1, Minor version 20
>> and
>> Bugfix version 1.
>>
>> The major differences are:
>> 1. API compatibility: @Public APIs are guaranteed compatible across
>> different Minor and Bugfix releases of the same Major
>> version. @PublicEvolving APIs are guaranteed compatible across Bugfix
>> releases of the same Minor version.
>> 2. Bugfix releases typically only include bugfixes, but not feature /
>> improvement code changes. Major and Minor releases include both feature /
>> improvements and bugfixes.
>>
>> The Flink community applies a time-based release planning [1] for Major /
>> Minor releases. We try to deliver new features with a new release
>> (typically Minor, unless we decide to break the API compatibility with a
>> Major release) roughly every 4 months. As for bugfix releases, they are
>> usually planned on-demand. E.g., to make a critical bugfix available to
>> users, or observed significant bugfix issues have been merged for the
>> release branch.
>>
>> Best,
>>
>> Xintong
>>
>>
>> [1] https://cwiki.apache.org/confluence/display/FLINK/Time-based+releases
>>
>>
>>
>> On Wed, Mar 26, 2025 at 12:14 AM David Radley 
>> wrote:
>>
>> > Hi ,
>> > I am wondering what the thinking is about calling the next release 2.1
>> > rather than 2.0.1.  This numbering implies to me you are considering a
>> > minor release rather than a point release. For example
>> > https://commons.apache.org/releases/versioning.html,
>> >
>> > Kind regards, David.
>> >
>> >
>> > From: Jark Wu 
>> > Date: Tuesday, 25 March 2025 at 06:52
>> > To: dev@flink.apache.org 
>> > Subject: [EXTERNAL] Re: [DISCUSS] Planning Flink 2.1
>> > Thanks, Ron Liu for kicking off the 2.1 release.
>> >
>> > +1 for Ron Liu to be the release manager and
>> > +1 for the feature freeze date
>> >
>> > Best,
>> > Jark
>> >
>> > On Mon, 24 Mar 2025 at 16:33, Ron Liu  wrote:
>> >
>> > > Hi everyone,
>> > > With the release announcement of Flink 2.0, it's a good time to kick
>> off
>> > > discussion of the next release 2.1.
>> > >
>> > > - Release Managers
>> > >
>> > > I'd like to volunteer as one of the release managers this time. It has
>> > been
>> > > good practice to have a team of release managers from different
>> > > backgrounds, so please raise your hand if you'd like to volunteer and
>> get
>> > > involved.
>> > >
>> > >
>> > > - Timeline
>> > >
>> > > Flink 2.0 has been released. With a target release cycle of 4 months,
>> > > we propose a feature freeze date of *June 21, 2025*.
>> > >
>> > >
>> > > - Collecting Features
>> > >
>> > > As usual, we've created a wiki page[1] for collecting new features in
>> > 2.1.
>> > >
>> > > In the meantime, the release management team will be finalized in the
>> > next
>> > > few days, and we'll continue to create Jira Boards and Sync meetings
>> > > to make it easy for everyone to get an overview and track progress.
>> > >
>> > >
>> > > Best,
>> > > Ron
>> > >
>> > > [1] https://cwiki.apache.org/confluence/display/FLINK/2.1+Release
>> > >
>> >
>> > Unless otherwise stated above:
>> >
>> > IBM United Kingdom Limited
>> > Registered in England and Wales with number 741598
>> > Registered office: Building C, IBM Hursley Office, Hursley Park Road,
>> > Winchester, Hampshire SO21 2JN
>> >
>>
>


[VOTE] Release flink-connector-elasticsearch v4.0.0, release candidate #2

2025-04-10 Thread weijie guo
Hi everyone,

Please review and vote on the release candidate #2 for v4.0.0, as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)

This release is mainly for Flink 2.0.

The complete staging area is available for your review, which includes:
* JIRA release notes [1],
* the official Apache source release to be deployed to dist.apache.org [2],
which are signed with the key with fingerprint
8D56AE6E7082699A4870750EA4E8C4C05EE6861F [3],
* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag v4.0.0-rc2 [5],
* website pull request listing the new release [6].
* CI build of tag [7].

The vote will be open for at least 72 hours. It is adopted by majority
approval, with at least 3 PMC affirmative votes.

Thanks,
Weijie

[1]
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12355810
[2]
https://dist.apache.org/repos/dist/dev/flink/flink-connector-elasticsearch-4.0.0-rc2/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] https://repository.apache.org/content/repositories/orgapacheflink-1801/
[5]
https://github.com/apache/flink-connector-elasticsearch/releases/tag/v4.0.0-rc2
[6] https://github.com/apache/flink-web/pull/786
[7]
https://github.com/apache/flink-connector-elasticsearch/actions/runs/14394613238


Re: [VOTE] Release flink-connector-kafka v4.0.0, release candidate #2

2025-04-10 Thread Yanquan Lv
Yes, I agree with your viewpoint. I created an issue [1] to track it, and I
think we can fix it as soon as possible and then start a new RC.
What's more, I believe that our current e2e testing does not cover such
scenarios, which we need to supplement in the future.

For the 3.4 release, I think this issue should exist there as well, but since no
one has reported it, I think we can wait until the related e2e
tests are added before considering a new release, depending on the feedback from the
community.

[1] https://issues.apache.org/jira/browse/FLINK-37644
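
(As a side note for anyone reproducing the report quoted below: a quick way to confirm whether the unshaded Guava class actually made it into the published sql jar is to look it up as a jar entry. The following is a minimal sketch; the jar path is illustrative and assumes a locally downloaded copy of the staged artifact.)

import java.io.IOException;
import java.util.jar.JarFile;

public class CheckGuavaClassPresence {
    public static void main(String[] args) throws IOException {
        // Illustrative path: a locally downloaded copy of the staged sql connector jar.
        String jarPath = "flink-sql-connector-kafka-4.0.0-2.0.jar";
        String entry = "com/google/common/reflect/TypeToken.class";
        try (JarFile jar = new JarFile(jarPath)) {
            boolean present = jar.getEntry(entry) != null;
            System.out.println(entry + (present ? " is present in " : " is MISSING from ") + jarPath);
        }
    }
}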

Arvid Heise  wrote on Thu, Apr 10, 2025 at 04:02:

> Hi Yanquan,
>
> I think you are completely right. This current state doesn't work. So I'm
> calling RC2 off until we fix this issue.
>
> We added one code path in 3.4 (!) that made this issue probably more
> obvious, but the unchecked use of Guava has been going on for 2 years. We
> have more than 10 instances where we use Guava classes, especially in the
> DynamicKafkaSource.
> Would you please create a ticket for that?
>
> In general, we can't use flink-shaded-guava if we plan to support multiple
> minor Flink versions with the same source. I expect this to happen again for
> later 2.X releases.
> We also can't simply include Guava as-is because it may clash with another
> connector's use of guava.
> So the most logical option to me is to shade it into the base module
> flink-kafka-connector and relocate it to a unique prefix
> (e.g. org.apache.flink.connector.kafka).
> Then the SQL module will simply use that relocation transitively.
> WDYT?
>
> Best,
>
> Arvid
>
> On Wed, Apr 9, 2025 at 6:38 PM Yanquan Lv  wrote:
>
> > I checked:
> > - Review JIRA release notes
> > - Verify hashes and verify signatures
> > - Build success from source with JDK17 & maven3.8.6
> > - Source code artifacts matching the current release
> > - Read the announcement blog
> > - Verify that the major version is 55
> >
> >
> > I am trying to use the jar flink-sql-connector-kafka-4.0.0-2.0.jar [1] for
> > a datagen-to-Kafka test. However, I got a `java.lang.ClassNotFoundException:
> > com.google.common.reflect.TypeToken` error message. I've checked that this
> > class was not included in this release jar and that we did use unshaded Guava
> > dependencies; I think this is problematic.
> > Maybe others can help confirm if this issue exists.
> >
> >
> > [1]
> >
> >
> https://repository.apache.org/content/repositories/staging/org/apache/flink/flink-sql-connector-kafka/4.0.0-2.0/
> >
> > Arvid Heise  wrote on Wed, Apr 9, 2025 at 16:17:
> >
> > > Hi everyone,
> > > Please review and vote on release candidate #2 for
> flink-connector-kafka
> > > v4.0.0, as follows:
> > > [ ] +1, Approve the release
> > > [ ] -1, Do not approve the release (please provide specific comments)
> > >
> > >
> > > The complete staging area is available for your review, which includes:
> > > * JIRA release notes [1],
> > > * the official Apache source release to be deployed to dist.apache.org
> > > [2],
> > > which are signed with the key with fingerprint 538B49E9BCF0B72F [3],
> > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > * source code tag v4.0.0-rc2 [5],
> > > * website pull request listing the new release [6].
> > > * CI build of the tag [7].
> > >
> > > The vote will be open for at least 72 hours. It is adopted by majority
> > > approval, with at least 3 PMC affirmative votes.
> > >
> > > Thanks,
> > > Release Manager
> > >
> > > [1]
> > >
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352917
> > > [2]
> > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-kafka-4.0.0-rc2
> > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > [4]
> > >
> > >
> >
> https://repository.apache.org/content/repositories/staging/org/apache/flink/flink-connector-kafka/4.0.0-2.0/
> > > [5]
> > >
> https://github.com/apache/flink-connector-kafka/releases/tag/v4.0.0-rc2
> > > [6] https://github.com/apache/flink-web/pull/787
> > > [7]
> > >
> > >
> >
> https://github.com/apache/flink-connector-kafka/actions/runs/14334379006/job/40177587780
> > >
> >
>


Re: [VOTE] Release flink-connector-kudu v2.0.0, release candidate #1

2025-04-10 Thread Mate Czagany
+1 (non-binding)

- Verified signature and checksum
- Checked source does not contain any binaries
- Verified build successful from source
- Checked NOTICE files
- Checked Jira tickets in release
- Checked website PR

Thank you,
Mate

Zakelly Lan  wrote on Mon, Apr 7, 2025 at 17:21:

> +1 (binding)
>
> I have verified:
>  - Signatures and checksum
>  - There are no binaries in the source archive
>  - Release tag and staging jars
>  - Built from source
>  - Web PR and release notes
>
>
> Best,
> Zakelly
>
>
>
>
> On Sat, Apr 5, 2025 at 11:22 PM Gyula Fóra  wrote:
>
> > +1 (binding)
> >
> > Verified:
> >  - Checksums, signatures
> >  - Checked notice files + no binaries in source release
> >  - Built from source
> >  - Verified release tag, release notes and website PR
> >
> > Cheers
> > Gyula
> >
> > On Fri, Mar 28, 2025 at 2:34 PM Ferenc Csaky  >
> > wrote:
> >
> > > Hi everyone,
> > >
> > > Please review and vote on release candidate #1 for
> > > flink-connector-kudu v2.0.0, as follows:
> > > [ ] +1, Approve the release
> > > [ ] -1, Do not approve the release (please provide specific comments)
> > >
> > >
> > > The complete staging area is available for your review, which
> > > includes:
> > > * JIRA release notes [1],
> > > * the official Apache source release to be deployed to
> > >   dist.apache.org [2], which are signed with the key with
> > >   fingerprint 16AE0DDBBB2F380B [3],
> > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > * source code tag v2.0.0-rc1 [5],
> > > * website pull request listing the new release [6],
> > > * CI build of the tag [7].
> > >
> > > The vote will be open for at least 72 hours. It is adopted by
> > > majority approval, with at least 3 PMC affirmative votes.
> > >
> > > Thanks,
> > > Ferenc
> > >
> > > [1]
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354673
> > > [2]
> > >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-kudu-2.0.0-rc1
> > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > [4]
> > >
> https://repository.apache.org/content/repositories/orgapacheflink-1793/
> > > [5]
> > https://github.com/apache/flink-connector-kudu/releases/tag/v2.0.0-rc1
> > > [6] https://github.com/apache/flink-web/pull/784
> > > [7]
> > >
> https://github.com/apache/flink-connector-kudu/actions/runs/14128565162
> > >
> >
>


Re: [VOTE] Release flink-connector-elasticsearch v4.0.0, release candidate #1

2025-04-10 Thread weijie guo
Thanks all

This release candidate has been officially cancelled. I will fix the
problems with the NOTICE file and the target JDK version in RC2.

Best regards,

Weijie


Yanquan Lv  wrote on Wed, Apr 9, 2025 at 18:04:

> Hi, Weijie.
>
> Sorry for replying again, but I found that the major version of the release
> jar is 52 using `javap -classpath
> flink-sql-connector-elasticsearch7-4.0.0-2.0.jar -verbose
> org.apache.flink.connector.elasticsearch.Elasticsearch7ApiCallBridge | grep
> version`. Is this not as expected? Do we need to set the target version to
> Java 11 too?
>
> weijie guo  wrote on Thu, Apr 3, 2025 at 14:37:
>
> > Hi everyone,
> >
> >
> > Please review and vote on the release candidate #1 for v4.0.0, as
> follows:
> >
> > [ ] +1, Approve the release
> >
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> >
> > This release is mainly for flink 2.0.
> >
> >
> > The complete staging area is available for your review, which includes:
> >
> > * JIRA release notes [1],
> >
> > * the official Apache source release to be deployed to dist.apache.org
> > [2],
> >
> > which are signed with the key with fingerprint
> > 8D56AE6E7082699A4870750EA4E8C4C05EE6861F [3],
> >
> > * all artifacts to be deployed to the Maven Central Repository [4],
> >
> > * source code tag v4.0.0-rc1 [5],
> >
> > * website pull request listing the new release [6].
> >
> > * CI build of tag [7].
> >
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> >
> > approval, with at least 3 PMC affirmative votes.
> >
> >
> > Thanks,
> >
> > Weijie
> >
> > [1]
> >
> >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12355810
> >
> > [2]
> >
> >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-elasticsearch-4.0.0-rc1/
> >
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> >
> > [4]
> > https://repository.apache.org/content/repositories/orgapacheflink-1796/
> >
> > [5]
> >
> >
> >
> https://github.com/apache/flink-connector-elasticsearch/releases/tag/v4.0.0-rc1
> >
> > [6] https://github.com/apache/flink-web/pull/786
> >
> > [7]
> >
> >
> >
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/14215074324
> >
>


Re: [VOTE] Release flink-connector-kafka v4.0.0, release candidate #1

2025-04-10 Thread Tom Cooper


Given that Flink 2.0.0 was compiled with Java 17 and used a source version of Java 
11 [1], should we do the same with the Kafka Connector?

Tom Cooper
@tomcooper.dev | https://tomcooper.dev

[1] 
https://github.com/apache/flink/blob/14e85eced10e98bc75870ac0360bad67d0722697/pom.xml#L127
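
(For reference, the class-file major version a jar actually targets (52 = Java 8, 55 = Java 11, 61 = Java 17) can also be read directly from the bytecode header rather than via javap. The following is a minimal sketch; the jar path is illustrative and assumes a locally downloaded artifact.)

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class PrintClassFileVersion {
    public static void main(String[] args) throws IOException {
        // Illustrative value; point this at the staged artifact you want to verify.
        String jarPath = "flink-connector-kafka-4.0.0-2.0.jar";
        try (JarFile jar = new JarFile(jarPath)) {
            // Pick the first regular class file in the jar.
            JarEntry entry = jar.stream()
                    .filter(e -> e.getName().endsWith(".class") && !e.getName().contains("module-info"))
                    .findFirst()
                    .orElseThrow(() -> new IllegalStateException("no class files in " + jarPath));
            try (InputStream in = jar.getInputStream(entry);
                 DataInputStream data = new DataInputStream(in)) {
                int magic = data.readInt();           // 0xCAFEBABE
                int minor = data.readUnsignedShort();
                int major = data.readUnsignedShort(); // 52 = Java 8, 55 = Java 11, 61 = Java 17
                System.out.printf("%s -> magic=0x%X, major=%d, minor=%d%n",
                        entry.getName(), magic, major, minor);
            }
        }
    }
}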

On Wednesday, 9 April 2025 at 10:35, Xiqian YU  wrote:

> Hi Arvid,
> 
> I think what Kunni is mentioning is the compiled JDK version (JDK 17 here), 
> not the targeted class file version (JDK 8 here). The target version could be 
> lower than the JDK version used.
> 
> The actual target version can be determined by executing `javap -verbose 
> X.class | grep version`, while the compiling JDK version can be found in the 
> META-INF/MANIFEST.MF file.
> 
> Best Regards,
> Xiqian
> 
> > On Apr 9, 2025, at 16:19, Arvid Heise ahe...@confluent.io.INVALID wrote:
> > 
> > I have started another thread. Sources are unchanged and I just updated the
> > binaries.
> > 
> > However, I found something odd. The release scripts currently hard code
> > Java 8 as a target. So I'm not sure how the previous binary was JDK 17.
> > Unfortunately, I deleted the binaries before checking. @Yanquan Lv
> > decq12y...@gmail.com , could you double check? Maybe it was major 52?
> > 
> > Anyhow, the new binaries are major 55. I will contribute my updated release
> > scripts later today.
> > 
> > Best,
> > 
> > Arvid
> > 
> > On Wed, Apr 9, 2025 at 6:41 AM Arvid Heise ar...@apache.org wrote:
> > 
> > > Hi Yanquan,
> > > 
> > > Thank you for noticing that. I do think we should support 11 and thus I
> > > will recompile it.
> > > 
> > > I'm retracting RC1 and will build RC2 soonish.
> > > 
> > > Best,
> > > 
> > > Arvid
> > > 
> > > On Wed, Apr 9, 2025, 06:22 Yanquan Lv decq12y...@gmail.com wrote:
> > > 
> > > > Hi, Arvid. Thanks for your efforts in this release.
> > > > 
> > > > I noticed that this released version was compiled using JDK 17, and there
> > > > is also flink-connector-elasticsearch [1] support for the Flink 2.0 release,
> > > > which is compiled using JDK 11.
> > > > I checked the compiled version of the Flink 2.0 release, which is also JDK 11,
> > > > so it would be better for us to use this version.
> > > > Our CI has already covered version 11, but I think clarifying the compiled
> > > > version from the beginning will help establish build standards. What do you think?
> > > > 
> > > > [1] https://lists.apache.org/thread/y9wthpvtnl18hbb3rq5fjbx8g8k6wv2m
> > > > 
> > > > Arvid Heise ar...@apache.org wrote on Tue, Apr 8, 2025 at 22:09:
> > > > 
> > > > > Hi everyone,
> > > > > Please review and vote on release candidate #1 for 
> > > > > flink-connector-kafka
> > > > > v4.0.0, as follows:
> > > > > [ ] +1, Approve the release
> > > > > [ ] -1, Do not approve the release (please provide specific comments)
> > > > > 
> > > > > The complete staging area is available for your review, which 
> > > > > includes:
> > > > > * JIRA release notes [1],
> > > > > * the official Apache source release to be deployed to dist.apache.org
> > > > > [2],
> > > > > which are signed with the key with fingerprint 538B49E9BCF0B72F [3],
> > > > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > > > * source code tag v4.0.0-rc1 [5],
> > > > > * website pull request listing the new release [6].
> > > > > * CI build of the tag [7].
> > > > > 
> > > > > The vote will be open for at least 72 hours. It is adopted by majority
> > > > > approval, with at least 3 PMC affirmative votes.
> > > > > 
> > > > > Thanks,
> > > > > Arvid
> > > > > 
> > > > > [1]
> > > > 
> > > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352917
> > > > 
> > > > > [2]
> > > > 
> > > > https://dist.apache.org/repos/dist/dev/flink/flink-connector-kafka-4.0.0-rc1
> > > > 
> > > > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > > > [4]
> > > > 
> > > > https://repository.apache.org/content/repositories/staging/org/apache/flink/flink-connector-kafka/4.0.0-2.0/
> > > > 
> > > > > [5]
> > > > > https://github.com/apache/flink-connector-kafka/releases/tag/v4.0.0-rc1
> > > > > [6] https://github.com/apache/flink-web/pull/787
> > > > > [7]
> > > > 
> > > > https://github.com/apache/flink-connector-kafka/actions/runs/14334379006/job/40177587780


Re: [VOTE] FLIP-515: Dynamic Kafka Sink

2025-04-10 Thread Maximilian Michels
+1 (binding)

Cheers,
Max

On Thu, Apr 3, 2025 at 2:04 PM Gabor Somogyi  wrote:
>
> +1 (binding)
>
> BR,
> G
>
>
> On Thu, Apr 3, 2025 at 11:49 AM  wrote:
>
> > +1 (binding)
> >
> > Gyula
> > Sent from my iPhone
> >
> > > On 3 Apr 2025, at 00:32, Őrhidi Mátyás  wrote:
> > >
> > > Hi devs,
> > >
> > > I would like to start the vote for FLIP-515: Dynamic Kafka Sink [1]
> > >
> > > This FLIP was discussed in this thread [2].
> > >
> > > The vote will be open for at least 72 hours unless there is an objection
> > or
> > > insufficient votes.
> > >
> > > [1]
> > >
> > https://cwiki.apache.org/confluence/display/FLINK/FLIP-515:+Dynamic+Kafka+Sink
> > >
> > > [2] https://lists.apache.org/thread/n03bc8o53yj5llnr5xhcnqdxr0goxm5v
> > >
> > > Thanks,
> > > Matyas Orhidi
> >


[jira] [Created] (FLINK-37628) Wrong reference counting in ForSt file cache

2025-04-10 Thread Zakelly Lan (Jira)
Zakelly Lan created FLINK-37628:
---

 Summary: Wrong reference counting in ForSt file cache
 Key: FLINK-37628
 URL: https://issues.apache.org/jira/browse/FLINK-37628
 Project: Flink
  Issue Type: Bug
  Components: Runtime / State Backends
Reporter: Zakelly Lan
Assignee: Zakelly Lan


There is a concurrency issue for reference counting in ForSt file cache, which 
could lead to a read error in some special scenarios (e.g. extremely frequent 
cache thrashing)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Release flink-connector-kudu v2.0.0, release candidate #1

2025-04-10 Thread Xiqian YU
+1 (non-binding)

Checklist: 

* Verified tarball signature and checksum
* Compiled code from source with JDK 17
* Ran test cases locally, all passed
* Reviewed license and third-party notice

Best Regards,
Xiqian


> On Apr 8, 2025, at 17:35, Mate Czagany  wrote:
> 
> +1 (non-binding)
> 
> - Verified signature and checksum
> - Checked source does not contain any binaries
> - Verified build successful from source
> - Checked NOTICE files
> - Checked Jira tickets in release
> - Checked website PR
> 
> Thank you,
> Mate
> 
> Zakelly Lan  wrote on Mon, Apr 7, 2025 at 17:21:
> 
>> +1 (binding)
>> 
>> I have verified:
>> - Signatures and checksum
>> - There are no binaries in the source archive
>> - Release tag and staging jars
>> - Built from source
>> - Web PR and release notes
>> 
>> 
>> Best,
>> Zakelly
>> 
>> 
>> 
>> 
>> On Sat, Apr 5, 2025 at 11:22 PM Gyula Fóra  wrote:
>> 
>>> +1 (binding)
>>> 
>>> Verified:
>>> - Checksums, signatures
>>> - Checked notice files + no binaries in source release
>>> - Built from source
>>> - Verified release tag, release notes and website PR
>>> 
>>> Cheers
>>> Gyula
>>> 
>>> On Fri, Mar 28, 2025 at 2:34 PM Ferenc Csaky >> 
>>> wrote:
>>> 
 Hi everyone,
 
 Please review and vote on release candidate #1 for
 flink-connector-kudu v2.0.0, as follows:
 [ ] +1, Approve the release
 [ ] -1, Do not approve the release (please provide specific comments)
 
 
 The complete staging area is available for your review, which
 includes:
 * JIRA release notes [1],
 * the official Apache source release to be deployed to
  dist.apache.org [2], which are signed with the key with
  fingerprint 16AE0DDBBB2F380B [3],
 * all artifacts to be deployed to the Maven Central Repository [4],
 * source code tag v2.0.0-rc1 [5],
 * website pull request listing the new release [6],
 * CI build of the tag [7].
 
 The vote will be open for at least 72 hours. It is adopted by
 majority approval, with at least 3 PMC affirmative votes.
 
 Thanks,
 Ferenc
 
 [1]
 
>>> 
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354673
 [2]
 
>>> 
>> https://dist.apache.org/repos/dist/dev/flink/flink-connector-kudu-2.0.0-rc1
 [3] https://dist.apache.org/repos/dist/release/flink/KEYS
 [4]
 
>> https://repository.apache.org/content/repositories/orgapacheflink-1793/
 [5]
>>> https://github.com/apache/flink-connector-kudu/releases/tag/v2.0.0-rc1
 [6] https://github.com/apache/flink-web/pull/784
 [7]
 
>> https://github.com/apache/flink-connector-kudu/actions/runs/14128565162
 
>>> 
>>