Hi Chesnay,
Thanks for joining this discussion and sharing your thoughts!
> Connectors shouldn't depend on flink-shaded.
>
Perfect! We are on the same page. If you read through the discussion,
you will see that, currently, many connectors depend on
flink-shaded.
> Connect
Thanks Peter for starting the FLIP.
Overall, this seems pretty straightforward and overdue, +1.
Two quick questions / comments:
1. Can you rename the FLIP to something less generic? Perhaps "Provide
initialization context for Committer creation in TwoPhaseCommittingSink"?
2. Can you desc
Jing Ge created FLINK-33194:
---
Summary: AWS Connector should directly depend on 3rd-party libs
instead of flink-shaded repo
Key: FLINK-33194
URL: https://issues.apache.org/jira/browse/FLINK-33194
Project: Fl
Jing Ge created FLINK-33193:
---
Summary: JDBC Connector should directly depend on 3rd-party libs
instead of flink-shaded repo
Key: FLINK-33193
URL: https://issues.apache.org/jira/browse/FLINK-33193
Project: F
Hey Jing,
If you went through the discussion, you would see it has never
shifted towards "ignore". The only concern in the discussion was that we'd
have too many options and that lookup joins require them. It was never
questioned that we should not throw an exception, which was suggested in the
firs
Jing Ge created FLINK-33191:
---
Summary: Kafka Connector should directly depend on 3rd-party libs
instead of flink-shaded repo
Key: FLINK-33191
URL: https://issues.apache.org/jira/browse/FLINK-33191
Project:
Jing Ge created FLINK-33190:
---
Summary: Externalized Connectors should directly depend on
3rd-party libs instead of shaded repo
Key: FLINK-33190
URL: https://issues.apache.org/jira/browse/FLINK-33190
Projec
Hi David,
Glad to hear from you again!
> Agreed; in my mind, this boils down to the ability to quickly allocate new
slots (TMs). This might differ between environments though.
Yes, for interactive queries, cold start is a very tricky issue to deal
with;
we should consider not only allocating n
Hi Dawid,
Thanks for the clarification. If you go through the discussion, you
will see that the focus has moved from "disable" to "ignore".
There was alignment only on "ignore hints". Your suggestion bypassed that
alignment and mixed everything together, which confused me a bit.
Vlado Vojdanovski created FLINK-33189:
---
Summary:
FsCompletedCheckpointStorageLocation#disposeStorageLocation non-recursively
deletes a directory
Key: FLINK-33189
URL: https://issues.apache.org/jira/browse/FLI
Hi Dawid,
Thanks for bringing this.
I would agree with the enum approach; the IGNORED option would allow us to
follow Oracle's behavior as well.
> table.optimizer.query-options = ENABLED/DISABLED/IGNORED
nit: Can we have "hint" in the config option name,
e.g. table.optimizer.query-options-hints?
On Tue, Oct 3,
Hi Flinkers,
I'm trying to use MapState, where the value will be a list of Row-type
elements.
I wanted to check if anyone else has faced the same issue while trying to use
MapState in PyFlink with complex types.
Here is the code:
from pyflink.common import Time
from pyflink.common.typeinfo import Types
Elkhan Dadashov created FLINK-33188:
---
Summary: PyFlink MapState with Types.ROW() throws exception
Key: FLINK-33188
URL: https://issues.apache.org/jira/browse/FLINK-33188
Project: Flink
Issu
Hi all,
Peter, Marton, Gordon and I had an offline sync on SinkV2 and I'm
happy with this first FLIP on the topic. +1
Best regards,
Martijn
On Wed, Oct 4, 2023 at 5:48 PM Márton Balassi wrote:
>
> Thanks, Peter. I agree that this is needed for Iceberg and beneficial for
> other connectors too.
Clara Xiong created FLINK-33187:
---
Summary: Don't send duplicate event to Kafka if no change
Key: FLINK-33187
URL: https://issues.apache.org/jira/browse/FLINK-33187
Project: Flink
Issue Type: Im
Sergey Nuyanzin created FLINK-33186:
---
Summary:
CheckpointAfterAllTasksFinishedITCase.testRestoreAfterSomeTasksFinished fails
on AZP
Key: FLINK-33186
URL: https://issues.apache.org/jira/browse/FLINK-33186
Sergey Nuyanzin created FLINK-33185:
---
Summary: HybridShuffleITCase fails with TimeoutException: Pending
slot request timed out in slot pool.
Key: FLINK-33185
URL: https://issues.apache.org/jira/browse/FLINK-3318
+1 for the convenience of users.
On Wed, Oct 4, 2023 at 8:05 AM Matthias Pohl
wrote:
> +1 Sounds like a good idea.
>
> On Wed, Oct 4, 2023 at 5:04 PM Gyula Fóra wrote:
>
> > I will share my initial implementation soon, it seems to be pretty
> > straightforward.
> >
> > Biggest challenge so far
Sergey Nuyanzin created FLINK-33184:
---
Summary: HybridShuffleITCase fails with exception in resource
cleanup of task Map
Key: FLINK-33184
URL: https://issues.apache.org/jira/browse/FLINK-33184
Projec
Hi David,
First of all, we should have enough time to wait for those issues to
be resolved. Secondly, it makes little sense to block an upstream release on
downstream build issues. In case those issues need more time, we
should move forward with the Flink release without waiting for them. WDYT?
Hi,
As release 1.18 removes the kafka connector from the core Flink repository, I
assume we will wait until the kafka connector nightly build issues
https://issues.apache.org/jira/browse/FLINK-33104 and
https://issues.apache.org/jira/browse/FLINK-33017 are resolved before releasing
1.18?
Thanks, Peter. I agree that this is needed for Iceberg and beneficial for
other connectors too.
+1
On Wed, Oct 4, 2023 at 3:56 PM Péter Váry
wrote:
> Hi Team,
>
> In my previous email[1] I have described our challenges migrating the
> existing Iceberg SinkFunction based implementation, to the n
Hi,
I was looking at the pr backlog in the Flink repository and realise that there
are 51 hits on the search
https://github.com/apache/flink/pulls?q=is%3Apr+is%3Aopen+kafka-connector.
And 25 hits on
https://github.com/apache/flink/pulls?q=is%3Apr+is%3Aopen+kafka-connector+label%3Acomponent%3DCo
Hi Ryan,
I agree that good communication is key to determining what can be worked on.
In terms of metrics, we can use the gh CLI to list PRs, and we can export
issues from Jira. For a view across them, you could join on the Flink issue (at the
start of the PR comment) and the Flink issue itself – yo
+1 Sounds like a good idea.
On Wed, Oct 4, 2023 at 5:04 PM Gyula Fóra wrote:
> I will share my initial implementation soon, it seems to be pretty
> straightforward.
>
> Biggest challenge so far is setting tests so we can still compile against
older versions but have tests for records. But I h
I will share my initial implementation soon, it seems to be pretty
straightforward.
Biggest challenge so far is setting up tests so we can still compile against
older versions but still have tests for records. But I have a working proposal
for that as well.
Gyula
On Wed, 4 Oct 2023 at 16:45, Chesnay Schep
> If not, what is the difference between the spare resources and redundant
taskmanagers?
I wasn't aware of this one; good catch! The main difference is that you
don't express the spare resources in terms of slots but in terms of task
managers. Also, those options serve slightly different purposes,
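To illustrate the distinction: redundant task managers are expressed in whole TaskManager units rather than slots. A minimal configuration fragment as a sketch (the option name is from Flink's slot manager configuration; verify it against the docs for your Flink version):

```yaml
# flink-conf.yaml fragment: keep 2 spare TaskManagers running beyond
# current demand, so recovery does not wait for new TM allocation.
slotmanager.redundant-taskmanager-num: 2
```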
Kryo isn't required for this; newer versions do support records but we
want something like a PojoSerializer for records to be performant.
The core challenges are
a) detecting records during type extraction
b) ensuring parameters are passed to the constructor in the right order.
From what I reme
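Both points can be sketched with plain JDK reflection (Java 16+); the `Point` record below is a hypothetical stand-in for a user type, not Flink code, and this is only an illustration of the mechanism a record serializer could build on:

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.RecordComponent;

public class RecordSketch {
    // Hypothetical record used for illustration.
    record Point(int x, int y) {}

    public static void main(String[] args) throws Exception {
        // a) detecting records during type extraction
        System.out.println(Point.class.isRecord()); // prints "true"

        // b) record components are reported in declaration order, which
        //    matches the canonical constructor's parameter order
        RecordComponent[] comps = Point.class.getRecordComponents();
        Class<?>[] paramTypes = new Class<?>[comps.length];
        for (int i = 0; i < comps.length; i++) {
            paramTypes[i] = comps[i].getType();
        }
        Constructor<?> canonical = Point.class.getDeclaredConstructor(paramTypes);
        Object p = canonical.newInstance(7, 9);
        System.out.println(p); // prints "Point[x=7, y=9]"
    }
}
```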
There is no "monolithic" flink-shaded dependency.
Connectors shouldn't depend on anything that Flink provides, but be
self-contained as Martijn pointed out.
Connectors shouldn't depend on flink-shaded.
The overhead and/or risks of doing/supporting that right now far
outweigh the benefits.
( Be
Timo Walther created FLINK-33183:
Summary: Enable metadata columns in NduAnalyzer with retract if
non-virtual
Key: FLINK-33183
URL: https://issues.apache.org/jira/browse/FLINK-33183
Project: Flink
+1 This would be great
On Wed, Oct 4, 2023 at 7:04 AM Gyula Fóra wrote:
> Hi All!
>
> Flink 1.18 contains experimental Java 17 support but it misses out on Java
> records which can be one of the nice benefits of actually using newer java
> versions.
>
> There is already a Jira to track this feat
Timo Walther created FLINK-33182:
Summary: Allow metadata columns in NduAnalyzer with
ChangelogNormalize
Key: FLINK-33182
URL: https://issues.apache.org/jira/browse/FLINK-33182
Project: Flink
Hey, this has been an interesting discussion -- this is something that
has been on my mind as an open source contributor and committer (I'm
not a Flink committer).
A large number of open PRs doesn't _necessarily_ mean a project is
unhealthy or has technical debt. If it's fun and easy to get your
c
Hi All!
Flink 1.18 contains experimental Java 17 support but it misses out on Java
records which can be one of the nice benefits of actually using newer java
versions.
There is already a Jira to track this feature [1] but I am not aware of any
previous efforts so far.
Since records have pretty s
Hi Team,
In my previous email[1] I have described our challenges migrating the
existing Iceberg SinkFunction based implementation, to the new SinkV2 based
implementation.
As a result of the discussion around that topic, I have created the first
[2] of the FLIP-s addressing the missing features th
Hi,
To add, I agree with Martijn’s insights; I think we are saying similar things:
progress agreed-upon work, and do not blanket-close all stale PRs.
Kind regards, David.
From: David Radley
Date: Wednesday, 4 October 2023 at 10:59
To: dev@flink.apache.org
Subject: [EXTERNAL] RE: Close orph
Hi,
I agree with Venkata that this issue is bigger than closing out stale PRs.
We can see that issues are being raised at a rate well above the resolution
rate.
https://issues.apache.org/jira/secure/ConfigureReport.jspa?projectOrFilterId=project-12315522&periodName=daily&daysprevious=90&cumulative=true&ve
Khanh Vu created FLINK-33181:
Summary: Table using `kinesis` connector can not be used for both
read & write operations if it's defined with unsupported sink property
Key: FLINK-33181
URL: https://issues.apache.org/ji