+1 (binding)
Thanks for driving this.
On Thu, Sep 11, 2025 at 8:32 AM Ramin Gharib wrote:
> Hi everyone,
>
> I want to start the vote to approve FLIP-546: Introduce CREATE OR ALTER for
> Materialized Tables [1].
>
> The discussion thread [2] has been active, and the feedback has been
> incorpor
+1 (binding)
Checked the sources
* Matches the git tag
* Hash matches
Checked the website and approved
Checked the binary and ran a small smoke test on datastream against Kafka
CP 7.9
Checked CI
On Thu, Jul 31, 2025 at 10:53 AM Fabian Paul wrote:
> Hi everyone,
> Please review and vote on relea
Hi Zexian,
the general idea and approach LGTM. A couple of questions:
* If we don't want to provide RateLimiter on operator level (seems to be
out of scope for this FLIP), can we still make the RateLimiter a property
of the Source similar to Watermark strategy and pass it through the
implementatio
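The idea of attaching a rate limiter to the source, analogously to a watermark strategy, could be sketched as follows. Everything here (`RateLimiter`, `fixedPermits`, `readWithLimit`) is a hypothetical illustration for the discussion, not an existing Flink API:

```java
import java.util.ArrayList;
import java.util.List;

public class RateLimitedSourceSketch {
    /** Hypothetical rate limiter interface, owned by the source like a WatermarkStrategy. */
    interface RateLimiter {
        boolean tryAcquire();
    }

    /** A toy limiter granting a fixed number of permits, e.g. per checkpoint. */
    static RateLimiter fixedPermits(int permits) {
        int[] remaining = {permits};
        return () -> remaining[0]-- > 0;
    }

    /** Stand-in for a source read loop that consults the limiter before emitting. */
    static List<Integer> readWithLimit(List<Integer> input, RateLimiter limiter) {
        List<Integer> emitted = new ArrayList<>();
        for (Integer element : input) {
            if (!limiter.tryAcquire()) {
                break; // a real source would back-pressure here, not stop
            }
            emitted.add(element);
        }
        return emitted;
    }

    public static void main(String[] args) {
        System.out.println(readWithLimit(List.of(1, 2, 3, 4, 5), fixedPermits(3)));
        // prints [1, 2, 3]
    }
}
```

The point of the sketch is only that the limiter travels with the source definition rather than being wired into each operator implementation.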
Arvid Heise created FLINK-38041:
---
Summary: Schema.Builder#fromResolvedSchema drops comments
Key: FLINK-38041
URL: https://issues.apache.org/jira/browse/FLINK-38041
Project: Flink
Issue Type
e only showing
> nulls in the join output. We think that in this case it could be reasonable
> to use metadata columns in the short term and show nulls in the join output?
> * Can I confirm your thinking on how we would set the schema of the
> side output [1] and for the proposed system
Hi David,
Flink currently does not have a compelling story when it comes to error
handling, and I'd like to change that.
I'd advocate for an approach that naturally translates into dead letter
queues as known from other stream processors such as KStreams. [1] With
your idea of metadata columns, you are
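The dead-letter-queue pattern mentioned above can be sketched in plain Java; in Flink the routing would use a side output (and, per the idea above, metadata columns describing the error), and all names below are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

public class DeadLetterSketch {
    record Parsed(int value) {}
    record DeadLetter(String raw, String error) {}

    static final List<Parsed> MAIN_OUTPUT = new ArrayList<>();
    static final List<DeadLetter> DEAD_LETTERS = new ArrayList<>();

    /** Route a raw record to the main output, or to the dead letter queue on failure. */
    static void process(String raw) {
        try {
            MAIN_OUTPUT.add(new Parsed(Integer.parseInt(raw)));
        } catch (NumberFormatException e) {
            // In Flink this would go to a side output / DLQ sink, possibly
            // carrying the error as metadata columns.
            DEAD_LETTERS.add(new DeadLetter(raw, e.getMessage()));
        }
    }

    public static void main(String[] args) {
        for (String raw : List.of("1", "oops", "3")) {
            process(raw);
        }
        System.out.println(MAIN_OUTPUT.size() + " ok, " + DEAD_LETTERS.size() + " dead-lettered");
        // prints: 2 ok, 1 dead-lettered
    }
}
```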
I'm also in favor of Option A of the presented options.
I had one additional idea (Option F) that mixes two approaches B+E
ROW_WISE_TABLE
SET_PARTITIONED_TABLE
You'd have ROW and SET, which come from the SQL standard, but you also have a
stronger connection between SET and partitioning.
Best,
Ar
Arvid Heise created FLINK-37969:
---
Summary: Allow Transformation subgraphs to be merged during
translation
Key: FLINK-37969
URL: https://issues.apache.org/jira/browse/FLINK-37969
Project: Flink
Arvid Heise created FLINK-37933:
---
Summary: Move TransformationScan/SinkProvider out of planner
Key: FLINK-37933
URL: https://issues.apache.org/jira/browse/FLINK-37933
Project: Flink
Issue Type
Arvid Heise created FLINK-37856:
---
Summary: Sink option hints are not present in compiled plan
Key: FLINK-37856
URL: https://issues.apache.org/jira/browse/FLINK-37856
Project: Flink
Issue Type
Hi Fred,
ah yes, I think I understand the issue. The KafkaSink always creates a
KafkaCommitter even if you are not using EXACTLY_ONCE. It's an unfortunate
limitation of our Sink design.
When I implemented the change, I was assuming that you are running EOS if
there is a committer (because else its
and use that in
> our application. If you can add a fix we can pick this up (and test this
> already 😊 ).
>
> Kind regards,
> Fred
>
> *From: *Arvid Heise
> *Date: *Tuesday, 20 May 2025 at 09:12
> *To: *Teunissen, F.G.J. (Fred)
> *Cc: *dev@flink.apache.org
> *Subject: *R
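The limitation described above (a KafkaCommitter is created even when not running EXACTLY_ONCE) can be illustrated with a small sketch. `DeliveryGuarantee` mirrors the Flink enum; the rest is illustrative only and not the actual sink code:

```java
public class CommitterCreationSketch {
    enum DeliveryGuarantee { NONE, AT_LEAST_ONCE, EXACTLY_ONCE }

    /** Current behavior as described: a committer exists for every guarantee. */
    static boolean committerCreatedToday(DeliveryGuarantee g) {
        return true;
    }

    /** Sketch of the expectation: only EXACTLY_ONCE needs a transactional committer. */
    static boolean committerCreatedConditionally(DeliveryGuarantee g) {
        return g == DeliveryGuarantee.EXACTLY_ONCE;
    }

    public static void main(String[] args) {
        System.out.println(committerCreatedToday(DeliveryGuarantee.AT_LEAST_ONCE));         // true
        System.out.println(committerCreatedConditionally(DeliveryGuarantee.AT_LEAST_ONCE)); // false
    }
}
```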
Arvid Heise created FLINK-37818:
---
Summary: Kafka connector requires unique TransactionalIdPrefix
even for non-EOS settings.
Key: FLINK-37818
URL: https://issues.apache.org/jira/browse/FLINK-37818
+1 (binding)
Cheers
On Wed, May 7, 2025 at 6:37 PM Gustavo de Morais
wrote:
> Hi everyone,
>
> I'd like to start voting on FLIP-516: Multi-Way Join Operator [1]. The
> discussion can be found in this thread [2].
> The vote will be open for at least 72 hours, unless there is an objection
> or no
n that
> under
> > > the
> > > > > > rejected alternatives.
> > > > > >
> > > > > > Kind regards,
> > > > > > Gustavo
> > > > > >
> > > > > > On Mon, Apr 28, 2025 at 14:18, wr
Hi Gustavo,
the idea and approach LGTM. +1 to proceed.
Best,
Arvid
On Thu, Apr 24, 2025 at 4:58 PM Gustavo de Morais
wrote:
> Hi everyone,
>
> I'd like to propose FLIP-516: Multi-Way Join Operator [1] for discussion.
>
> Chained non-temporal joins in Flink SQL often cause a "big state issue"
ot;);
> >
> > // Tries to resolve `com.example.User` in the classpath, if not present
> > returns `Row`
> > for (Row row : t.execute().collect()) {
> > User user = row.getFieldAs(0, User.class);
> > }
> > ```
> >
> > For Arvid's question: &
The Apache Flink community is very happy to announce the release of Apache
flink-connector-kafka 4.0.0.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
The release is available for download a
Hi Timo,
thanks for addressing my points. I'm not set on using STRUCT et al. but
wanted to point out the alternatives.
Regarding the attached class name, I share Hao's confusion. I
wonder if structured types shouldn't be anonymous by default, in the sense
that initially we don't attach a c
I'm happy to announce that we have unanimously approved this release.
There are 7 approving votes, 4 of which are binding:
* Yanquan Lv (non-binding)
* Xiqian YU (non-binding)
* Leonard Xu (binding)
* Tom Cooper (non-binding)
* Weijie guo (binding)
* Sergey Nuyanzin (binding)
* Robert Metzger (bin
; > > > - Build the source with Maven 3.9.9 and Java 11 + 17
> > > > > - I ran tests using the staged connector jar as part of a custom
> > build of
> > > > > our [1] Flink SQL Runner (using minikube + Flink K8s Operator
> > 1.11.0) a
Hi Timo,
+1 for the proposal. A couple of remarks:
- Are we going to enforce that the name is a valid class name? What
happens if it is not a valid name?
- What are the implications of using a class that is not on the classpath
in Table API? It looks to me like the name is metadata-only unti
e code build also fails on the 3.4 but not on the 3.3 or 3.2
> releases (I didn't go further back). It also does not fail on the last two
> main Flink releases. So it seems that something changed in the process for
> last two Kafka connector releases?
>
> It looks like Arvid Heis
Hi everyone,
Please review and vote on release candidate #3 for flink-connector-kafka
v4.0.0, as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)
The complete staging area is available for your review, which includes:
* JIRA release notes
TypeToken` error message, I've checked that this
> class was not included in this release jar and we did use unshaded guava
> dependencies, I think this is problematic.
> Maybe others can help confirm if this issue exists.
>
>
> [1]
>
> https://repository.apache.org/cont
an Lv
, could you double check? Maybe it was major 52?
Anyhow, the new binaries are major 55. I will contribute my updated release
scripts later today.
Best,
Arvid
On Wed, Apr 9, 2025 at 6:41 AM Arvid Heise wrote:
> Hi Yanquan,
>
> Thank you for noticing that. I do think we should
n is 52, as you and Xiqian mentioned.
> But fortunately, we can confirm and discover the actual problem.
>
> On Tue, Apr 8, 2025 at 22:09, Arvid Heise wrote:
>
> > Hi everyone,
> > Please review and vote on release candidate #1 for flink-connector-kafka
> > v4.0.0, as follows:
> >
Hi everyone,
Please review and vote on release candidate #2 for flink-connector-kafka
v4.0.0, as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)
The complete staging area is available for your review, which includes:
* JIRA release notes
this version.
> Our CI has already covered version 11, but I think clarifying the compiled
> version from the beginning will help build standards, What do you think?
>
>
> [1] https://lists.apache.org/thread/y9wthpvtnl18hbb3rq5fjbx8g8k6wv2m
>
> On Tue, Apr 8, 2025 at 22:09, Arvid Heise wrote:
>
Hi everyone,
Please review and vote on release candidate #1 for flink-connector-kafka
v4.0.0, as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)
The complete staging area is available for your review, which includes:
* JIRA release notes
Arvid Heise created FLINK-37622:
---
Summary: Exactly once Kafka sink does not produce any records in
batch mode
Key: FLINK-37622
URL: https://issues.apache.org/jira/browse/FLINK-37622
Project: Flink
Congratz!
On Mon, Apr 7, 2025 at 6:44 AM Ferenc Csaky
wrote:
> Congrats, Sergey!
>
> Best,
> Ferenc
>
>
> On Monday, April 7th, 2025 at 03:30, weijie guo
> wrote:
>
> >
> >
> > Congratulations, Sergey~
> >
> > Best regards,
> >
> > Weijie
> >
> >
> > Lincoln Lee lincoln.8...@gmail.com 于2025年4月3
Hi folks,
I would like to volunteer to drive the Kafka connector 4.0.0 release to
make the connector finally available to Flink 2.0. If there are no
objections, I'd start tomorrow with the branch cut.
LMK if there are any issues that should be merged until then.
Best,
Arvid
sing
>producerIdFilters (if supported) to retrieve only those transactions
>relevant to this writer? Doing so could reduce unnecessary overhead,
>especially in environments with many transactions.
>
>
> I will be grateful if you can answer me and add these details to
; >
> > > > > >
> > > > > > At 2025-03-27 19:16:22, "Roman Khachatryan"
> > > wrote:
> > > > > > >+1 (binding)
> > > > > > >
> > > > > > >Regards,
> > > > > > >Roma
pen transactions.”. When/why do we need to re-commit transactions and
> when do we abort?
> * I wonder why “we expect 3 transactional ids to be in use per subtask”.
> * I wonder why “There is at least one finalized transaction waiting to be
> committed” Do you mean the transaction is
Arvid Heise created FLINK-37611:
---
Summary: Deflake
ExactlyOnceKafkaWriterITCase#shouldAbortLingeringTransactions
Key: FLINK-37611
URL: https://issues.apache.org/jira/browse/FLINK-37611
Project: Flink
Arvid Heise created FLINK-37613:
---
Summary: Fix resource leak during transaction abortion
Key: FLINK-37613
URL: https://issues.apache.org/jira/browse/FLINK-37613
Project: Flink
Issue Type: Bug
Arvid Heise created FLINK-37612:
---
Summary: Exception during initialization may leave operator
unclosed
Key: FLINK-37612
URL: https://issues.apache.org/jira/browse/FLINK-37612
Project: Flink
Arvid Heise created FLINK-37605:
---
Summary: SinkWriter may incorrectly infer end of input during
rescale
Key: FLINK-37605
URL: https://issues.apache.org/jira/browse/FLINK-37605
Project: Flink
Hi Tom,
thanks for driving this.
Updating the Kafka client library has been done only occasionally and
there was no formal process. It makes sense to give stronger
guidelines. I guess a rigid process would only make sense if we
enforce it for example through bots and that was highly debated a
cou
Dear devs,
The voting[1] of FLIP-511[2] concluded with the following result.
Approving votes:
Efrat Levitan (non-binding)
Roman Khachatryan (binding)
Leonard Xu (binding)
Yuepeng Pan (non-binding)
Martijn Visser (binding)
Gyula Fóra (binding)
Piotr Nowojski (binding)
Aleksandr Savonin (non-bindin
LGTM. Thank you.
+1 (binding)
Arvid
On Wed, Mar 19, 2025 at 11:03 AM Hong Liang wrote:
>
> Thanks for driving this!
>
> +1 (binding)
>
> Hong
>
> On Fri, Mar 14, 2025 at 6:01 PM Danny Cranmer
> wrote:
>
> > Thanks for driving this Poorvank.
> >
> > +1 (binding)
> >
> > Thanks,
> > Danny
> >
>
I'd consider this an incomplete release that could just be fixed
without any vote.
On Tue, Mar 18, 2025 at 7:03 AM Leonard Xu wrote:
>
> Thanks Jiabao for letting us know this issue, I think a new correct release
> would help users a lot. +1 from my side.
>
> From a process perspective, we are f
+1, this will save time for everyone.
On Fri, Mar 21, 2025 at 5:58 PM Tom Cooper wrote:
>
> Hey All,
>
> I wanted to start a discussion on enabling linting checks, via a GitHub
> Action, on all PRs in the main Flink repository.
>
> Often when a user submits a PR they will wait for the CI to run
Arvid Heise created FLINK-37573:
---
Summary: Deprecate Java 8 in Kafka
Key: FLINK-37573
URL: https://issues.apache.org/jira/browse/FLINK-37573
Project: Flink
Issue Type: Sub-task
Dear devs,
I'd like to start the voting on FLIP-511 [1].
Don't hesitate to ask for more details and clarifications on the
respective discussion thread [2].
The vote will be open for at least 72 hours unless there is an objection or
not enough votes are cast.
Best,
Arvid
[1]
https://cwiki.ap
hope this flip can show how to solve this problem.
>
> Best
> Hongshun Wang
>
>
>
> On Fri, Feb 28, 2025 at 10:01 PM Arvid Heise wrote:
>
> > Dear Flink devs,
> >
> > I'd like to start discussion around an improvement of the exactly-once
> > Kafka sink
MDS. This state is
> checkpointed for fault tolerance and becomes the "source of truth." when
> recovering its internal state.
>
> I hope this clarifies. Will fix the copy paste errors soon.
>
> Cheers,
> Matyas
>
>
> On Wed, Mar 19, 2025 at 8:13 AM Arvid Heise w
Hi Matyas,
could you please provide more details on the KafkaMetadataService?
Where does it live? How does it work? How does "Metadata state will be
stored in Flink state as the source of truth for the Flink job" work?
Also nit: the test plan contains copy&paste errors.
Best,
Arvid
On Thu, Mar
Dear Flink devs,
I'd like to start discussion around an improvement of the exactly-once
Kafka sink. Because of its current design, the sink currently puts a
large strain on the main memory of the Kafka broker [1], which hinders
adoption of the KafkaSink. Since the KafkaSink will be the only way to
Writer to Use BufferWrapper Instead of Deque *
> >
> >- *bufferedRequestEntries is now of type BufferWrapper,
> >making the choice of buffer implementation flexible. *
> >- *A createBuffer() method initializes DequeBufferWrapper by default. *
> >
> > *Updat
Hi Poorvank,
thanks for putting this together. It's obvious to me that this is a
good addition. I wish Ahmed could check if his proposals are
compatible with yours, so we don't end up with two different ways to
express the same thing. Ideally Ahmed's proposal could be retrofitted
to extend on your
Arvid Heise created FLINK-37351:
---
Summary: Ensure writer and committer are colocated
Key: FLINK-37351
URL: https://issues.apache.org/jira/browse/FLINK-37351
Project: Flink
Issue Type: Bug
Arvid Heise created FLINK-37330:
---
Summary: Add state migration test for Flink 2.0
Key: FLINK-37330
URL: https://issues.apache.org/jira/browse/FLINK-37330
Project: Flink
Issue Type: Improvement
es.apache.org/jira/browse/FLINK-36568
>
>
>
> > On Nov 15, 2024 at 04:09, Arvid Heise wrote:
> >
> > Hi all,
> >
> > I just sent out the first release candidate for 3.4.0. The only difference
> > to 3.3.0 is that we cease to support 1.19 in favor of supporting the o
Arvid Heise created FLINK-37282:
---
Summary: Add backchannel for producer recycling
Key: FLINK-37282
URL: https://issues.apache.org/jira/browse/FLINK-37282
Project: Flink
Issue Type: Improvement
Arvid Heise created FLINK-37281:
---
Summary: Refactor KafkaSinkITCase
Key: FLINK-37281
URL: https://issues.apache.org/jira/browse/FLINK-37281
Project: Flink
Issue Type: Improvement
Arvid Heise created FLINK-37108:
---
Summary: Source/Sink test suites test invalid recovery
Key: FLINK-37108
URL: https://issues.apache.org/jira/browse/FLINK-37108
Project: Flink
Issue Type
Arvid Heise created FLINK-36807:
---
Summary: Add coverage for pre-writer and pre-commit topologies for
SinkV2
Key: FLINK-36807
URL: https://issues.apache.org/jira/browse/FLINK-36807
Project: Flink
ards,
Arvid Heise
I'm happy to announce that we have unanimously approved this release.
There are 5 approving votes, 3 of which are binding:
* Yanquan Lv
* Leonard Xu (binding)
* Martijn Visser (binding)
* Ahmed Hamdy
* Danny Cranmer (binding)
There are no disapproving votes.
Thanks everyone!
Arvid
> > > >
> > > > > On Nov 15, 2024, at 10:00 AM, Yanquan Lv
> > wrote:
> > > > >
> > > > > Thanks Arvid for driving it.
> > > > >
> > > > > +1 (non-binding)
> > > > > I checked:
>
Arvid Heise created FLINK-36788:
---
Summary: Add coverage for GlobalCommitter for SinkV2
Key: FLINK-36788
URL: https://issues.apache.org/jira/browse/FLINK-36788
Project: Flink
Issue Type: Bug
+1 (binding)
- Built from sources with JDK 11 and executed all tests successfully, also
against 1.19.1 distribution
- Verified checksum and signature on sources
- No binaries in sources
- Double-check flink-web PR
- Verified artifacts
- Checked JIRA release notes
On Wed, Nov 20, 2024 at 3:22 PM G
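For reference, the checksum part of such a verification can be reproduced with standard tooling. The file below is only a stand-in for the real staged source archive:

```shell
# Stand-in artifact; for a real release candidate, download the staged
# source archive from repository.apache.org instead.
echo "release artifact" > flink-connector-kafka-src.tgz

# Produce and verify a SHA-512 checksum, as done when validating an RC.
sha512sum flink-connector-kafka-src.tgz > flink-connector-kafka-src.tgz.sha512
sha512sum -c flink-connector-kafka-src.tgz.sha512

# Signature verification would additionally use the project's KEYS file:
#   gpg --import KEYS
#   gpg --verify flink-connector-kafka-src.tgz.asc flink-connector-kafka-src.tgz
```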
e of flink-2.0-preview of Flink Kafka
> connector, and I’d like assist it too.
>
> Best,
> Leonard
>
>
> > On Nov 6, 2024 at 6:58 PM, Arvid Heise wrote:
> >
> > Hi Yanquan,
> >
> > the current state of the 3.4.0 release is that it's still pending on the
Hi everyone,
Please review and vote on release candidate #1 for flink-connector-kafka
v3.4.0, as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)
The complete staging area is available for your review, which includes:
* JIRA release
to know if there is already work in the community to bump to
> 2.0-preview1. If not, I can help complete this task (but some suggestions
> may be needed for testing the adaptation in the code).
>
>
>
>
>
> > On Sep 27, 2024 at 16:23, Arvid Heise wrote:
> >
> > Dear Flink de
Hey,
could you please check if a bucket assigner is already enough? If not,
what's missing?
FileSink orcSink = FileSink
        .forBulkFormat(new Path("s3a://mybucket/flink_file_sink_orc_test"), factory)
        .withBucketAssigner(new DateTimeBucketAssigner<>("'dt='MMdd/'hour='HH",
The Apache Flink community is very happy to announce the release of Apache
flink-connector-kafka 3.3.0.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
The release is available for downlo
I'm happy to announce that we have unanimously approved this release.
There are 3 approving votes, 3 of which are binding:
* Matthias Pohl (binding)
* Rui Fan (binding)
* Leonard Xu (binding)
We also had approving votes for the version with incorrect snapshot names:
* Yanquan Lv
* Ahmed Hamdy
* D
> * Diff of git tag checkout with downloaded sources
> > * Verified SHA512 checksums & GPG certification
> > * Checked that all POMs have the right expected version
> > * Generated diffs to compare pom file changes with NOTICE files
> >
> > Thanks Arvid. Looks good
tent/repositories/staging/org/apache/flink/flink-connector-kafka/3.3.0-1.19/
[2]
https://repository.apache.org/content/repositories/staging/org/apache/flink/flink-connector-kafka/3.3.0-1.20/
On Tue, Oct 15, 2024 at 8:21 AM Arvid Heise wrote:
> That's a good catch Leonard. I'll ch
s and checksums are correct
> >> - There are no binaries in the source archive
> >> - Contents of mvn dist look good
> >> - Binary signatures and checksums are correct
> >> - CI build of tag successful [1]
> >> - NOTICE and LICENSE files look correct
>
>
> Best Regards
> Ahmed Hamdy
>
>
> On Sat, 12 Oct 2024 at 05:23, Yanquan Lv wrote:
>
> > +1 (non-binding)
> > I checked:
> > - Review JIRA release notes
> > - Verify hashes
> > - Verify signatures
> > - Build from source with JDK 8/11/17
new release [6].
* CI build of the tag [7].
The vote will be open for at least 72 hours. It is adopted by majority
approval, with at least 3 PMC affirmative votes.
Thanks,
Arvid Heise
[1]
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354606
[2]
h
Arvid Heise created FLINK-36455:
---
Summary: Sink should commit everything on notifyCheckpointCompleted
Key: FLINK-36455
URL: https://issues.apache.org/jira/browse/FLINK-36455
Project: Flink
Arvid Heise created FLINK-36454:
---
Summary: FlinkContainer always overwrites default config
Key: FLINK-36454
URL: https://issues.apache.org/jira/browse/FLINK-36454
Project: Flink
Issue Type
Arvid Heise created FLINK-36441:
---
Summary: Resource leaks in Kafka Tests
Key: FLINK-36441
URL: https://issues.apache.org/jira/browse/FLINK-36441
Project: Flink
Issue Type: Bug
Affects
Arvid Heise created FLINK-36434:
---
Summary: Revise threading model of (KafkaPartition)SplitReader
Key: FLINK-36434
URL: https://issues.apache.org/jira/browse/FLINK-36434
Project: Flink
Issue
not dropping support for 1.18 in 3.2.1,
> and we release one more version 3.2.1-1.20? Then we can use 3.3.0 for the
> new lineage feature in 1.20 and drop support for 1.18 and 1.19.
>
> This is a possibility, but still results in 3 releases, so only worth it if
> it is simpl
n use
> > 3.3.0 for the new lineage feature in 1.20 and drop support for 1.18
> > and 1.19.
> >
> > And for the 4.0.0-preview version I'd like to help with it :-)
> >
> > Best,
> > Qingsheng
> >
> > On Fri, Sep 27, 2024 at 6:13 PM Arvid He
ould it be worth have a config switch to enable the lineage in the
> connector so that we could use it with 1.19? We could maybe do a 3.3 if
> this was the case.
>
> WDYT?
>Kind regards, David.
>
>
>
> From: Arvid Heise
> Date: Friday, 27 September 2024 at 09:24
Dear Flink devs,
I'd like to initiate three(!) Kafka connector releases. The main reason for
having three releases is that we have been slacking a bit in keeping up
with the latest changes.
Here is the summary:
1. Release kafka-3.3.0 targeting 1.19 and 1.20 (asap)
- Incorporates lots of deprecati
+1 (binding),
Best,
Arvid
On Tue, Sep 3, 2024 at 12:35 PM Saurabh Singh
wrote:
> Hi Flink Devs,
>
> Gentle Reminder for voting on FLIP-477: Amazon SQS Source Connector [1].
> [1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-477
> +Amazon+SQS+Source+Connector
>
> Regards
> Saurabh & A
.
> Updated Google Doc Link -
> https://docs.google.com/document/d/1lreo27jNh0LkRs1Mj9B3wj3itrzMa38D4_XGryOIFks/edit?usp=sharing
>
> Thanks
> Saurabh & Abhi
>
>
> -----
> *Slack Conversation Details **s
Arvid Heise created FLINK-36379:
---
Summary: Improve (Global)Committer with UC disabled
Key: FLINK-36379
URL: https://issues.apache.org/jira/browse/FLINK-36379
Project: Flink
Issue Type: Bug
Arvid Heise created FLINK-36368:
---
Summary: Fix subtask management in CommittableCollector
Key: FLINK-36368
URL: https://issues.apache.org/jira/browse/FLINK-36368
Project: Flink
Issue Type
Hi Robert, thanks for looping me in.
I have looked at the branch and the FLIP. Apicurio looks like a promising
alternative to Confluent SR and I'm certain that it's a good addition to
Flink.
However, in its current form it looks heavily overengineered. I
suspect that comes from the attempt t
we will support this functionality in
> the future. However, in the meantime, this can still be achieved by
> creating multiple sources with specific queues.*
>
>
> Please review and let us know your feedback on this.
>
> [1]
> https://docs.aws.amazon.com/AWSSimpleQueue
Arvid Heise created FLINK-36287:
---
Summary: Sink with topologies should not participate in UC
Key: FLINK-36287
URL: https://issues.apache.org/jira/browse/FLINK-36287
Project: Flink
Issue Type
Arvid Heise created FLINK-36278:
---
Summary: Fix Kafka connector logs getting too big
Key: FLINK-36278
URL: https://issues.apache.org/jira/browse/FLINK-36278
Project: Flink
Issue Type: Technical
che/flink/api/common/state/CheckpointListener.html
> *[2] *
> https://github.com/apache/flink/blob/master//flink-core/src/main/java/org/apache/flink/api/connector/source/DynamicParallelismInference.java#L29
> *[3]*
> https://github.com/aws/aws-sdk-java/blob/61d73631fac8535ad70666bbce9e70
Sorry for being late to the party. I saw your call to vote and looked at
the FLIP.
First, most of the design is looking really good and it will be good to
have another connector integrated into the AWS ecosystem. A couple of
questions/remarks:
1) Since we only have 1 split, we should also limit th
Arvid Heise created FLINK-36177:
---
Summary: Deprecating KafkaShuffle
Key: FLINK-36177
URL: https://issues.apache.org/jira/browse/FLINK-36177
Project: Flink
Issue Type: Technical Debt
Arvid Heise created FLINK-36176:
---
Summary: Remove support for ancient Kafka versions
Key: FLINK-36176
URL: https://issues.apache.org/jira/browse/FLINK-36176
Project: Flink
Issue Type
Maybe the java doc should be clarified?
> > >
> > > > 4. I now realized that I changed semantics compared to the proposal:
> > this
> > > > idle clock would already calculate the time difference (now - last
> > > event).
> > > >
Hi Piotr,
thank you very much for addressing this issue. I'm convinced that the
approach is the right solution, especially in contrast to the alternatives.
Ultimately, only WatermarkGeneratorWithIdleness needs to be adjusted for
this change.
My only concerns are regarding the actual code.
1. `RelativeCl
Arvid Heise created FLINK-35796:
---
Summary: Ensure that MailboxExecutor.submit is used correctly
Key: FLINK-35796
URL: https://issues.apache.org/jira/browse/FLINK-35796
Project: Flink
Issue
Hi Qingsheng,
Thanks for driving this; the inconsistency has been bothering me as well.
I second Alexander's idea, though I could also live with an easier
solution as the first step: Instead of making caching an implementation
detail of TableFunction X, rather devise a caching layer around X. So the
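The "caching layer around X" idea can be sketched generically. Names here are hypothetical, and this is not the actual Table API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class CachingLayerSketch {
    /** Wrap any lookup function in a cache, instead of baking caching into it. */
    static <K, V> Function<K, V> cached(Function<K, V> lookup) {
        Map<K, V> cache = new HashMap<>();
        return key -> cache.computeIfAbsent(key, lookup);
    }

    public static void main(String[] args) {
        int[] calls = {0};
        Function<String, Integer> expensive = key -> { calls[0]++; return key.length(); };
        Function<String, Integer> cachedLookup = cached(expensive);

        cachedLookup.apply("flink");
        cachedLookup.apply("flink"); // second call is served from the cache
        System.out.println(calls[0]); // prints 1
    }
}
```

The appeal of this shape is that the wrapped function stays oblivious to caching, which is exactly the separation argued for above.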