FLIP as small as possible. If there is a good reason
> to expose the WatermarkStatus, then we can probably do it.
>
> Cheers,
> Till
>
> On Fri, Jul 30, 2021 at 2:29 PM Arvid Heise
> wrote:
>
>
> Hi Martijn,
>
> 1. Good question. The watermarks and statuses o
Thank you!
On Tue, Aug 10, 2021 at 11:04 AM Jingsong Li wrote:
> Thanks Yun Tang and everyone!
>
> Best,
> Jingsong
>
> On Tue, Aug 10, 2021 at 9:37 AM Xintong Song
> wrote:
>
>> Thanks Yun and everyone~!
>>
>> Thank you~
>>
>> Xintong Song
>>
>>
>>
>> On Mon, Aug 9, 2021 at 10:14 PM Till Rohrm
Awesome! Thank you for driving this.
On Tue, Aug 10, 2021 at 11:45 AM Till Rohrmann wrote:
> This is great news. Thanks a lot for being our release manager Godfrey and
> also to everyone who made this release possible.
>
> Cheers,
> Till
>
> On Tue, Aug 10, 2021 at 11:09 AM godfrey he wrote:
>
Thank you.
On Tue, Aug 10, 2021 at 11:44 AM Till Rohrmann wrote:
> This is great news. Thanks a lot for being our release manager Jingsong
> and also to everyone who made this release possible.
>
> Cheers,
> Till
>
> On Tue, Aug 10, 2021 at 10:57 AM Jingsong Lee
> wrote:
>
>> The Apache Flink c
> > > On Mon, Aug 9, 2021 at 12:08 PM Till Rohrmann
> > > wrote:
> > >
> > > > +1 (binding)
> > > >
> > > > Cheers,
> > > > Till
> > > >
> > > > On Thu, Aug 5, 2021 at 9:09 PM Arvid Heise wrote:
> >
Hi Thomas,
since the change didn't modify any existing classes, I'm weakly in favor of
backporting. My reluctance mainly stems from possible disappointment among
1.13 users who are on an earlier bugfix level, so we need to make the
documentation clear.
In general, I'm seeing connectors as something ext
cation jar. It does not require
> any addition to the runtime and therefore would work on any 1.13.x dist.
>
> For reference, I use it internally on top of 1.12.4.
>
> Thanks,
> Thomas
>
>
> On Mon, Aug 16, 2021 at 10:13 AM Arvid Heise wrote:
>
> > Hi Thoma
Dear devs,
we would like to merge these PRs after features freeze:
FLINK-23838: Add FLIP-33 metrics to new KafkaSink [1]
FLINK-23801: Add FLIP-33 metrics to KafkaSource [2]
FLINK-23640: Create a KafkaRecordSerializationSchemas builder [3]
All three PRs are smaller quality-of-life improvements
+1 (binding)
- Built from downloaded sources with Java 8 (mvn install -Prun-e2e-tests)
- Verified signatures and hashes
Best,
Arvid
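Hash verification as mentioned in the vote above is usually done with command-line tools, but it can also be scripted; here is a minimal, stdlib-only Java sketch (class and method names are illustrative, nothing Flink-specific) that computes the hex SHA-512 digest one would compare against a release artifact's `.sha512` file:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class Sha512Check {
    // Hex-encode the SHA-512 digest of some bytes, matching the format of
    // the *.sha512 files published alongside release artifacts.
    static String sha512Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-512");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // For a real check, read the downloaded archive's bytes instead.
        String hex = sha512Hex("flink".getBytes(StandardCharsets.UTF_8));
        System.out.println(hex.length()); // SHA-512 is 64 bytes = 128 hex chars
    }
}
```

For release verification one would read the downloaded source archive and compare the result character-by-character with the published checksum file.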
On Thu, Aug 26, 2021 at 8:32 AM Tzu-Li (Gordon) Tai
wrote:
> +1 (binding)
>
> - Built from source with Java 11 and Java 8 (mvn clean install
> -Prun-e2e-tests)
>
Congratulations! New features look awesome.
On Wed, Sep 1, 2021 at 9:10 AM Till Rohrmann wrote:
> Great news! Thanks a lot for all your work on the new release :-)
>
> Cheers,
> Till
>
> On Wed, Sep 1, 2021 at 9:07 AM Johannes Moser wrote:
>
>> Congratulations, great job. 🎉
>>
>> On 31.08.2021,
my opinion, this limitation is perfectly fine for the MVP.
> > > > Watermark
> > > > > > > alignment is a long-standing issue and this already moves the
> > ball
> > > so
> > > > > far
> > > > > > > forward.
> > >
Thanks for starting the discussion. I think both issues are valid concerns
that we need to tackle. I guess the biggest issue is that now it's just not
possible to write one connector that runs on both Flink 1.13 and 1.14, so we make
it much harder for devs in the ecosystem (and our goal is to make it
eas
ny idea
>> to
>> fix it. We should
>> fix it ASAP! Otherwise iceberg/hudi/cdc communities will get frustrated
>> again when upgrading
>> to 1.14. Maybe we still have time to release a minor version, because
>> there is no
>> connector upgraded to 1.14.0 yet.
> > If we change it back, then a specific connector would work for 1.14.1
> and
> > 1.13.X but not for 1.14.0 and this would be even more confusing.
> > I think this is fine. IMO, this is a blocker issue of 1.14.0 which breaks
> > Source connectors.
> > We should sugges
Awesome!
On Tue, Oct 12, 2021 at 3:11 AM Guowei Ma wrote:
> Thanks for your effort!
>
> Best,
> Guowei
>
>
> On Mon, Oct 11, 2021 at 9:26 PM Stephan Ewen wrote:
>
> > Great initiative, thanks for doing this!
> >
> > On Mon, Oct 11, 2021 at 10:52 AM Till Rohrmann
> > wrote:
> >
> > > Thanks a l
Dear community,
Today I would like to kickstart a series of discussions around creating an
external connector repository. The main idea is to decouple Flink's release
cycle from the release cycles of the connectors. This is a common
approach in other big data analytics projects and seems to s
You also must ensure that your SourceFunction is serializable; it's not
enough to just refer to some classloader, you must ensure that you still
have access to it after deserialization on the task managers.
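The pitfall above can be shown in plain Java without any Flink APIs (all names here are illustrative): a function object shipped to workers must mark a non-serializable resource such as a ClassLoader as transient and reinitialize it after deserialization, rather than holding a plain reference to it:

```java
import java.io.*;

public class SerializationDemo {
    // Mimics a user function shipped to task managers: it must be Serializable,
    // and a non-serializable resource (e.g. a ClassLoader) cannot be a plain field.
    static class MyFunction implements Serializable {
        private static final long serialVersionUID = 1L;
        // transient: not shipped over the wire; re-created after deserialization
        private transient ClassLoader loader;

        // Standard Java serialization hook: runs on the receiving side.
        private void readObject(ObjectInputStream in)
                throws IOException, ClassNotFoundException {
            in.defaultReadObject();
            loader = Thread.currentThread().getContextClassLoader();
        }

        ClassLoader loader() {
            if (loader == null) {
                loader = Thread.currentThread().getContextClassLoader();
            }
            return loader;
        }
    }

    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    static Object deserialize(byte[] b) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(b))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // Round-trip through serialization, as happens when shipping to a worker.
        MyFunction copy = (MyFunction) deserialize(serialize(new MyFunction()));
        System.out.println(copy.loader() != null); // true: resource available after deserialization
    }
}
```

If `loader` were a non-transient field, serialization would fail outright, because `ClassLoader` does not implement `Serializable`.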
On Mon, Oct 18, 2021 at 4:24 AM Caizhi Weng wrote:
> Hi!
>
> There is only one classloa
e new established
>> flink-extended org might be another choice, but considering the amount of
>> connectors, I prefer to use an individual org for connectors to avoid
>> flushing other repos under flink-extended.
>>
>> In the meantime, we need to provide a well-esta
+1 (binding)
- build from source on scala 2_12 profile
- ran standalone cluster with examples
Best,
Arvid
On Tue, Oct 19, 2021 at 4:48 AM Dian Fu wrote:
> +1 (binding)
>
> - verified the checksum and signature
> - checked the dependency changes since 1.13.2. There is only one dependency
> cha
back to a previous version of the snapshot. Which
> > also means that checking out older commits can be problematic because
> > you'd still work against the latest snapshots, and they may not be
> > compatible with each other.
> >
> >
> > On 18/10/2021 15:22, Arvid Heise wrote:
> > > I was actually betting on snapshot versions. What are the limits?
> > > Obviously, we can only do a release of a 1.15 connector after 1.15 is
> > > released.
> >
> >
>
o
> > > >> a
> > > >> > >> net improvement.
> > > >> > >>
> > > >> > >> It would be great if we can find a setup that allows for
> > connectors
> > > >> to
> > > >> > >> be r
Awesome. Thank you very much for all the hard work!
On Tue, Oct 26, 2021 at 1:06 AM Chesnay Schepler wrote:
> This time with proper formatting...
>
> flink-batch-sql-test
> flink-cep
> flink-cli-test
> flink-clients
> flink-connector-elasticsearch-base
> flink-connector-elasticsearch5
> flink-co
Hi folks,
thanks for the lively discussion. Let me present my point of view on a
couple of items:
*Impact on checkpointing times*
Currently, we send the committables of the writer downstream before the
barrier is sent. That allows us to include all committables in the state of
the committers, su
e no longer works after FLIP-147 when executing the
> final checkpoint. The problem is that the final checkpoint happens after
> the EOI and we would like to keep the property that you can terminate the
> whole topology with a single checkpoint, if possible.
>
> Cheers,
> Till
g range that fits "more frequent than Flink"
> (per-commit,
> > > > daily,
> > > > > > >> > weekly, bi-weekly, monthly, even bi-monthly).
> > > > > > >> >
> > > > > > >> > On 19/10/2021 14:15, Martij
y
> started with JUnit 4, then we chose to use Hamcrest because of its better
> expressiveness. Most recently, an effort was started that aimed at
> switching over to JUnit 5 [1, 2]. @Arvid Heise knows
> more about the current status.
>
> Personally, I don't have a strong p
Hi everyone,
On behalf of the PMC, I'm very happy to announce Fabian Paul as a new Flink
committer.
Fabian Paul has been actively improving the connector ecosystem by
migrating Kafka and ElasticSearch to the Sink interface and is currently
driving FLIP-191 [1] to tackle the sink compaction issue.
ink community.
[1] https://github.com/apache/flink-connectors
[2] https://github.com/ververica/flink-cdc-connectors/
On Fri, Nov 12, 2021 at 3:39 PM Arvid Heise wrote:
> Hi everyone,
>
> I created the flink-connectors repo [1] to advance the topic. We would
> create a proof-of-concept in the ne
> >> > > Yang
> >> > >
> >> > > Chesnay Schepler 于2020年4月29日周三 上午12:30写道:
> >> > >
> >> > > > Currently, processes started in the foreground (like in the case
> of
> >> > > > Docker) output all logging/stdout directly to t
>
> >>>>> Please check out the release blog post for an overview of the
> improvements for this bugfix release:
> >>>>> https://flink.apache.org/news/2020/05/12/release-1.10.1.html
> >>>>>
> >>>>> The full release notes are avai
uickly include this fix into
> the rc or do you think it is necessary to open a complete new one?
>
>
> [1] https://issues.apache.org/jira/browse/FLINK-18411 <
> https://issues.apache.org/jira/browse/FLINK-18411>
>
> Best,
> Fabian
--
Arvid Heise | Senior Ja
nting.unaligned
Thanks for helping us to improve this feature,
Arvid
--
Arvid Heise | Senior Java Developer
<https://www.ververica.com/>
Follow us @VervericaData
--
Join Flink Forward <https://flink-forward.org/> - The Apache Flink
Conference
Stream Processing | Event Driven | Re
iling list, or the release
> manager work.
>
> Congrats, Piotr!
>
> Best,
> Stephan
>
--
Arvid Heise | Senior Java Developer
<https://www.ververica.com/>
Follow us @VervericaData
--
Join Flink Forward <https://flink-forward.org/> - The Apache Flink
Confe
k.
>
> What happens when we have more slots than cores in the following scenarios?
> 1) The transformation is just changing the JSON format.
>
> 2) When the transformation is done by hitting another server (HTTP
> request)
>
> Thanks,
> Prasanna.
>
--
Arvid Heise | Senior Java De
release?
>
> Thanks,
> Thomas
>
>
> [1]
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-76%3A+Unaligned+Checkpoints
>
> On Wed, Oct 2, 2019 at 12:11 PM Arvid Heise wrote:
>
>> Sorry, incorrect link, please follow [1].
>>
>
will have to
>> be follow-up FLIPs that describe the necessary changes for the APIs that
>> we maintain.
>> --
>>
>> Please let us know if you have any concerns or comments. Also, please
>> keep discussion to this ML thread instead of commenting in the Wiki
David has been giving numerous talks and trainings on
> > Flink.
> > > >On StackOverflow, he's among the most active helping Flink users to
> > solve
> > > >their problems (2nd in the all-time ranking, 1st in the last 30
> days). A
> > > >similar le
> > > > > >>>>
>> > > > > >>>>> Hi all,
>> > > > > >>>>>
>> > > > > >>>>> On behalf of the Flink PMC, I'm happy to announce that
>> Dian Fu
>> > is
>> > > > now
>
im and his
> > > > colleague
> > > > >Sruthi this Friday at Beam Summit [4]!
> > > > >
> > > > > They will be working to improve the Table API/SQL documentation
> over
> > a
> > > > > 3-month period, with the support
> good idea in general. The issue reminded us that Kafka didn't
> >>>>> have an idempotent/fault-tolerant Producer before Kafka 0.11.0.
> >>>>> By now we have had the "modern" Kafka connector that roughly
> >>>>>
Hi Piotr,
thank you very much for addressing this issue. I'm convinced that the
approach is the right solution, also in contrast to the alternatives.
Ultimately, only WatermarkGeneratorWithIdleness needs to be adjusted with
this change.
My only concerns are regarding the actual code.
1. `RelativeCl
Maybe the java doc should be clarified?
> > >
> > > > 4. I now realized that I changed semantics compared to the proposal:
> > this
> > > > idle clock would already calculate the time difference (now - last
> > > event).
> > > >
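The idle-clock semantics discussed above (the clock "would already calculate the time difference (now - last event)") can be sketched with a pluggable clock. This is a stdlib-only illustration; `IdlenessDetector` and its methods are invented names, not Flink's actual `RelativeClock` or `WatermarkGeneratorWithIdleness` API:

```java
import java.time.Duration;
import java.util.function.LongSupplier;

public class IdlenessDetector {
    private final long timeoutMs;
    private final LongSupplier clock; // pluggable clock, injectable for tests
    private long lastEventTimestamp;

    IdlenessDetector(Duration timeout, LongSupplier clock) {
        this.timeoutMs = timeout.toMillis();
        this.clock = clock;
        this.lastEventTimestamp = clock.getAsLong();
    }

    // Called whenever an event arrives: remember "last event" time.
    void onEvent() {
        lastEventTimestamp = clock.getAsLong();
    }

    // Idle iff (now - last event) exceeds the configured timeout.
    boolean isIdle() {
        return clock.getAsLong() - lastEventTimestamp > timeoutMs;
    }

    public static void main(String[] args) {
        long[] now = {0}; // manual clock controlled by the test
        IdlenessDetector d = new IdlenessDetector(Duration.ofMillis(100), () -> now[0]);
        d.onEvent();
        now[0] = 50;
        System.out.println(d.isIdle()); // false: only 50ms since last event
        now[0] = 200;
        System.out.println(d.isIdle()); // true: 200ms > 100ms timeout
    }
}
```

Injecting the clock is precisely what makes the idleness logic unit-testable without real wall-clock sleeps, which appears to be the motivation behind the relative-clock discussion.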
Sorry for being late to the party. I saw your call to vote and looked at
the FLIP.
First, most of the design is looking really good and it will be good to
have another connector integrated into the AWS ecosystem. A couple of
questions/remarks:
1) Since we only have 1 split, we should also limit th
che/flink/api/common/state/CheckpointListener.html
> *[2] *
> https://github.com/apache/flink/blob/master//flink-core/src/main/java/org/apache/flink/api/connector/source/DynamicParallelismInference.java#L29
> *[3]*
> https://github.com/aws/aws-sdk-java/blob/61d73631fac8535ad70666bbce9e70
we will support this functionality in
> the future. However, in the meantime, this can still be achieved by
> creating multiple sources with specific queues.*
>
>
> Please review and let us know your feedback on this.
>
> [1]
> https://docs.aws.amazon.com/AWSSimpleQueue
s and checksums are correct
> >> - There are no binaries in the source archive
> >> - Contents of mvn dist look good
> >> - Binary signatures and checksums are correct
> >> - CI build of tag successful [1]
> >> - NOTICE and LICENSE files look correct
>
tent/repositories/staging/org/apache/flink/flink-connector-kafka/3.3.0-1.19/
[2]
https://repository.apache.org/content/repositories/staging/org/apache/flink/flink-connector-kafka/3.3.0-1.20/
On Tue, Oct 15, 2024 at 8:21 AM Arvid Heise wrote:
> That's a good catch Leonard. I'll ch
>
>
> Best Regards
> Ahmed Hamdy
>
>
> On Sat, 12 Oct 2024 at 05:23, Yanquan Lv wrote:
>
> > +1 (non-binding)
> > I checked:
> > - Review JIRA release notes
> > - Verify hashes
> > - Verify signatures
> > - Build from source with JDK 8/11/17
new release [6].
* CI build of the tag [7].
The vote will be open for at least 72 hours. It is adopted by majority
approval, with at least 3 PMC affirmative votes.
Thanks,
Arvid Heise
[1]
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354606
[2]
h
n use
> > 3.3.0 for the new lineage feature in 1.20 and drop support for 1.18
> > and 1.19.
> >
> > And for the 4.0.0-preview version I'd like to help with it :-)
> >
> > Best,
> > Qingsheng
> >
> > On Fri, Sep 27, 2024 at 6:13 PM Arvid He
> * Diff of git tag checkout with downloaded sources
> > * Verified SHA512 checksums & GPG certification
> > * Checked that all POMs have the right expected version
> > * Generated diffs to compare pom file changes with NOTICE files
> >
> > Thanks Arvid. Looks good
I'm happy to announce that we have unanimously approved this release.
There are 3 approving votes, 3 of which are binding:
* Matthias Pohl (binding)
* Rui Fan (binding)
* Leonard Xu (binding)
We also had approving votes for the version with incorrect snapshot names:
* Yanquan Lv
* Ahmed Hamdy
* D
Hey,
could you please check if a bucket assigner is already enough? If not,
what's missing?
FileSink orcSink = FileSink
    .forBulkFormat(new Path("s3a://mybucket/flink_file_sink_orc_test"), factory)
    .withBucketAssigner(new DateTimeBucketAssigner<>("'dt='MMdd/'hour='HH",
Hi Robert, thanks for looping me in.
I have looked at the branch and the FLIP. Apicurio looks like a promising
alternative to Confluent SR and I'm certain that it's a good addition to
Flink.
However, in its current form it looks heavily overengineered. I suspect
that comes from the attempt t
.
> Updated Google Doc Link -
> https://docs.google.com/document/d/1lreo27jNh0LkRs1Mj9B3wj3itrzMa38D4_XGryOIFks/edit?usp=sharing
>
> Thanks
> Saurabh & Abhi
>
>
> -----
> *Slack Conversation Details **s
+1 (binding),
Best,
Arvid
On Tue, Sep 3, 2024 at 12:35 PM Saurabh Singh
wrote:
> Hi Flink Devs,
>
> Gentle Reminder for voting on FLIP-477: Amazon SQS Source Connector [1].
> [1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-477
> +Amazon+SQS+Source+Connector
>
> Regards
> Saurabh & A
Dear Flink devs,
I'd like to initiate three(!) Kafka connector releases. The main reason for
having three releases is that we have been slacking a bit in keeping up
with the latest changes.
Here is the summary:
1. Release kafka-3.3.0 targeting 1.19 and 1.20 (asap)
- Incorporates lots of deprecati
ould it be worth have a config switch to enable the lineage in the
> connector so that we could use it with 1.19? We could maybe do a 3.3 if
> this was the case.
>
> WDYT?
> Kind regards, David.
>
>
>
> From: Arvid Heise
> Date: Friday, 27 September 2024 at 09:24
not dropping support for 1.18 in 3.2.1,
> and we release one more version 3.2.1-1.20? Then we can use 3.3.0 for the
> new lineage feature in 1.20 and drop support for 1.18 and 1.19.
>
> This is a possibility, but still results in 3 releases, so only worth it if
> it is simpl
The Apache Flink community is very happy to announce the release of Apache
flink-connector-kafka 3.3.0.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
The release is available for downlo
to know if there is already work in the community to bump to
> 2.0-preview1. If not, I can help complete this task (but some suggestions
> may be needed for testing the adaptation in the code).
>
>
>
>
>
> > 2024年9月27日 16:23,Arvid Heise 写道:
> >
> > Dear Flink de
Hi everyone,
Please review and vote on release candidate #1 for flink-connector-kafka
v3.4.0, as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)
The complete staging area is available for your review, which includes:
* JIRA release
e of flink-2.0-preview of Flink Kafka
> connector, and I’d like assist it too.
>
> Best,
> Leonard
>
>
> > 2024年11月6日 下午6:58,Arvid Heise 写道:
> >
> > Hi Yanquan,
> >
> > the current state of the 3.4.0 release is that it's still pending on the
ards,
Arvid Heise
> > > >
> > > > > On Nov 15, 2024, at 10:00 AM, Yanquan Lv
> > wrote:
> > > > >
> > > > > Thanks Arvid for driving it.
> > > > >
> > > > > +1 (non-binding)
> > > > > I checked:
>
I'm happy to announce that we have unanimously approved this release.
There are 5 approving votes, 3 of which are binding:
* Yanquan Lv
* Leonard Xu (binding)
* Martijn Visser (binding)
* Ahmed Hamdy
* Danny Cranmer (binding)
There are no disapproving votes.
Thanks everyone!
Arvid
+1 (binding)
- Built from sources with JDK 11 and executed all tests successfully, also
against 1.19.1 distribution
- Verified checksum and signature on sources
- No binaries in sources
- Double-checked flink-web PR
- Verified artifacts
- Checked JIRA release notes
On Wed, Nov 20, 2024 at 3:22 PM G
es.apache.org/jira/browse/FLINK-36568
>
>
>
> > 2024年11月15日 04:09,Arvid Heise 写道:
> >
> > Hi all,
> >
> > I just sent out the first release candidate for 3.4.0. The only difference
> > to 3.3.0 is that we cease to support 1.19 in favor of supporting the o
pen transactions.”. When/why do we need to re-commit transactions and
> when do we abort?
> * I wonder why “we expect 3 transactional ids to be in use per subtask”.
> * I wonder why “There is at least one finalized transaction waiting to be
> committed” Do you mean the transaction is
> >
> > > > > >
> > > > > > At 2025-03-27 19:16:22, "Roman Khachatryan"
> > > wrote:
> > > > > > >+1 (binding)
> > > > > > >
> > > > > > >Regards,
> > > > > > >Roma
sing
>producerIdFilters (if supported) to retrieve only those transactions
>relevant to this writer? Doing so could reduce unnecessary overhead,
>especially in environments with many transactions.
>
>
> I will be grateful if you can answer me and add these details to
Congratz!
On Mon, Apr 7, 2025 at 6:44 AM Ferenc Csaky
wrote:
> Congrats, Sergey!
>
> Best,
> Ferenc
>
>
> On Monday, April 7th, 2025 at 03:30, weijie guo
> wrote:
>
> >
> >
> > Congratulations, Sergey~
> >
> > Best regards,
> >
> > Weijie
> >
> >
> > Lincoln Lee lincoln.8...@gmail.com 于2025年4月3
Hi folks,
I would like to volunteer to drive the Kafka connector 4.0.0 release to
make the connector finally available to Flink 2.0. If there are no
objections, I'd start tomorrow with the branch cut.
LMK if there are any issues that should be merged until then.
Best,
Arvid
Dear devs,
I'd like to start the voting on FLIP-511 [1].
Don't hesitate to ask for more details and clarifications on the
respective discussion thread [2].
The vote will be open for at least 72 hours unless there is an objection or
not enough votes are cast.
Best,
Arvid
[1]
https://cwiki.ap
MDS. This state is
> checkpointed for fault tolerance and becomes the "source of truth" when
> recovering its internal state.
>
> I hope this clarifies. Will fix the copy paste errors soon.
>
> Cheers,
> Matyas
>
>
> On Wed, Mar 19, 2025 at 8:13 AM Arvid Heise w
hope this flip can show how to solve this problem.
>
> Best
> Hongshun Wang
>
>
>
> On Fri, Feb 28, 2025 at 10:01 PM Arvid Heise wrote:
>
> > Dear Flink devs,
> >
> > I'd like to start discussion around an improvement of the exactly-once
> > Kafka sink
Hi everyone,
Please review and vote on release candidate #1 for flink-connector-kafka
v4.0.0, as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)
The complete staging area is available for your review, which includes:
* JIRA release notes
TypeToken` error message, I've checked that this
> class was not included in this release jar and we did use unshaded guava
> dependencies, I think this is problematic.
> Maybe others can help confirm if this issue exists.
>
>
> [1]
>
> https://repository.apache.org/cont
Dear devs,
The voting[1] of FLIP-511[2] concluded with the following result.
Approving votes:
Efrat Levitan (non-binding)
Roman Khachatryan (binding)
Leonard Xu (binding)
Yuepeng Pan (non-binding)
Martijn Visser (binding)
Gyula Fóra (binding)
Piotr Nowojski (binding)
Aleksandr Savonin (non-bindin
Hi Tom,
thanks for driving this.
Updating the Kafka client library has been done only occasionally and
there was no formal process. It makes sense to give stronger
guidelines. I guess a rigid process would only make sense if we
enforce it, for example through bots, and that was highly debated a
cou
this version.
> Our CI has already covered version 11, but I think clarifying the compiled
> version from the beginning will help build standards. What do you think?
>
>
> [1] https://lists.apache.org/thread/y9wthpvtnl18hbb3rq5fjbx8g8k6wv2m
>
> Arvid Heise 于2025年4月8日周二 22:09写道:
&
Hi everyone,
Please review and vote on release candidate #2 for flink-connector-kafka
v4.0.0, as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)
The complete staging area is available for your review, which includes:
* JIRA release notes
n is 52, as you and Xiqian mentioned.
> But fortunately, we can confirm and discover the actual problem.
>
> Arvid Heise 于2025年4月8日周二 22:09写道:
>
> > Hi everyone,
> > Please review and vote on release candidate #1 for flink-connector-kafka
> > v4.0.0, as follows:
> >
an Lv
, could you double check? Maybe it was major 52?
Anyhow, the new binaries are major 55. I will contribute my updated release
scripts later today.
Best,
Arvid
On Wed, Apr 9, 2025 at 6:41 AM Arvid Heise wrote:
> Hi Yanquan,
>
> Thank you for noticing that. I do think we should
Hi everyone,
Please review and vote on release candidate #3 for flink-connector-kafka
v4.0.0, as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)
The complete staging area is available for your review, which includes:
* JIRA release notes
Hi Timo,
+1 for the proposal. A couple of remarks:
- Are we going to enforce that the name is a valid class name? What
happens if it's not a correct name?
- What are the implications of using a class that is not in the classpath
in Table API? It looks to me that the name is metadata-only unti
; > > > - Build the source with Maven 3.9.9 and Java 11 + 17
> > > > > - I ran tests using the staged connector jar as part of a custom
> > build of
> > > > > our [1] Flink SQL Runner (using minikube + Flink K8s Operator
> > 1.11.0) a
I'm happy to announce that we have unanimously approved this release.
There are 7 approving votes, 4 of which are binding:
* Yanquan Lv (non-binding)
* Xiqian YU (non-binding)
* Leonard Xu (binding)
* Tom Cooper (non-binding)
* Weijie guo (binding)
* Sergey Nuyanzin (binding)
* Robert Metzger (bin
");
> >
> > // Tries to resolve `com.example.User` in the classpath, if not present
> > returns `Row`
> > for (Row row : t.execute().collect()) {
> > User user = row.getFieldAs(0, User.class);
> > }
> > ```
> >
> > For Arvid's question: &
e code build also fails on the 3.4 but not on the 3.3 or 3.2
> releases (I didn't go further back). It also does not fail on the last two
> main Flink releases. So it seems that something changed in the process for
> last two Kafka connector releases?
>
> It looks like Arvid Heis
Hi Timo,
thanks for addressing my points. I'm not set on using STRUCT et al. but
wanted to point out the alternatives.
Regarding the attached class name, I have similar confusion to Hao. I
wonder if structured types shouldn't be anonymous by default in the sense
that initially we don't attach a c
The Apache Flink community is very happy to announce the release of Apache
flink-connector-kafka 4.0.0.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
The release is available for download a
Hi Gustavo,
the idea and approach LGTM. +1 to proceed.
Best,
Arvid
On Thu, Apr 24, 2025 at 4:58 PM Gustavo de Morais
wrote:
> Hi everyone,
>
> I'd like to propose FLIP-516: Multi-Way Join Operator [1] for discussion.
>
> Chained non-temporal joins in Flink SQL often cause a "big state issue"
Dear Flink devs,
I'd like to start discussion around an improvement of the exactly-once
Kafka sink. Because of its current design, the sink currently puts a
large strain on the main memory of the Kafka broker [1], which hinders
adoption of the KafkaSink. Since the KafkaSink will be the only way to
Writer to Use BufferWrapper Instead of Deque *
> >
> >- *bufferedRequestEntries is now of type BufferWrapper,
> >making the choice of buffer implementation flexible. *
> >- *A createBuffer() method initializes DequeBufferWrapper by default. *
> >
> > *Updat
Hi Poorvank,
thanks for putting this together. It's obvious to me that this is a
good addition. I wish Ahmed could check if his proposals are
compatible with yours, so we don't end up with two different ways to
express the same thing. Ideally Ahmed's proposal could be retrofitted
to extend on your
LGTM. Thank you.
+1 (binding)
Arvid
On Wed, Mar 19, 2025 at 11:03 AM Hong Liang wrote:
>
> Thanks for driving this!
>
> +1 (binding)
>
> Hong
>
> On Fri, Mar 14, 2025 at 6:01 PM Danny Cranmer
> wrote:
>
> > Thanks for driving this Poorvank.
> >
> > +1 (binding)
> >
> > Thanks,
> > Danny
> >
>
I'd consider this an incomplete release that could just be fixed
without any vote.
On Tue, Mar 18, 2025 at 7:03 AM Leonard Xu wrote:
>
> Thanks Jiabao for letting us know this issue, I think a new correct release
> would help users a lot. +1 from my side.
>
> From a process perspective, we are f
+1, this will save time for everyone.
On Fri, Mar 21, 2025 at 5:58 PM Tom Cooper wrote:
>
> Hey All,
>
> I wanted to start a discussion on enabling linting checks, via a GitHub
> Action, on all PRs in the main Flink repository.
>
> Often when a user submits a PR they will wait for the CI to run
Hi Matyas,
could you please provide more details on the KafkaMetadataService?
Where does it live? How does it work? How does "Metadata state will be
stored in Flink state as the source of truth for the Flink job" work?
Also nit: the test plan contains copy&paste errors.
Best,
Arvid
On Thu, Mar