+1 (non-binding)
- Started a local Flink 1.18 cluster and successfully read from and wrote to a
Kafka 2.2 cluster with the Kafka and Upsert Kafka connectors
One minor question: should we update the dependency sections of these two
documentation pages [1][2]?
[1]
https://nightlies.apache.org/flink/flink-docs-master/d
Thanks Venkatakrishnan for the feedback.
Taking MySQL as an example, if the pushed-down filter does not hit an index, it
will result in a full table scan.
For a table with a large amount of data, a full table scan can consume a
significant amount of CPU resources,
increase response time, hold c
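For illustration, a minimal sketch of such a query (table, column, and
connection details are hypothetical): the filter is eligible for pushdown into
the JDBC source, where MySQL turns it into a WHERE clause and, unless `age` is
indexed, answers it with a full table scan.

from pyflink.table import TableEnvironment, EnvironmentSettings

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())
t_env.execute_sql("""
    CREATE TABLE users (
        id INT,
        age INT
    ) WITH (
        'connector' = 'jdbc',
        'url' = 'jdbc:mysql://localhost:3306/mydb',
        'table-name' = 'users'
    )
""")
# The planner can hand `age > 30` to the source; MySQL then evaluates it,
# via an index scan if one exists on `age`, otherwise via a full table scan.
result = t_env.sql_query("SELECT id FROM users WHERE age > 30")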
Thanks for the proposal, Jiabao.
I agree with Becket that if a *Source* implements the *SupportsXXXPushDown*
interface (in this case *SupportsFilterPushDown*), then the *Source* (in
your FLIP example, a database) is designed to support filter
pushdown. The corresponding Source can have mec
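As a quick way to see whether a pushdown actually happened, one can inspect
the optimized plan; a minimal runnable sketch with the bundled datagen
connector (which does not implement SupportsFilterPushDown):

from pyflink.table import TableEnvironment, EnvironmentSettings

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())
t_env.execute_sql("""
    CREATE TABLE src (
        id INT,
        age INT
    ) WITH ('connector' = 'datagen')
""")
# datagen keeps an explicit Calc/Filter node in the plan; a pushdown-capable
# source would instead absorb the predicate into its TableSourceScan.
print(t_env.sql_query("SELECT id FROM src WHERE age > 30").explain())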
Hi Dong and Xuannan,
Thanks for your proposal! Processing-time temporal join is a very important
feature, and users have been waiting for a proper implementation of it for a
long time.
However, I am wondering whether it is worth enhancing Watermarks and
related classes in order to support this feat
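For reference, this is the kind of statement involved: a processing-time
temporal join uses FOR SYSTEM_TIME AS OF over a PROCTIME() attribute. A
sketch, assuming hypothetical `orders` and `rates` tables are already
registered (today this shape works with lookup sources such as JDBC):

from pyflink.table import TableEnvironment, EnvironmentSettings

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())
# `orders` carries a PROCTIME() attribute `proc_time`; `rates` is the
# looked-up side. Both table definitions are assumed, not shown.
t_env.execute_sql("""
    SELECT o.order_id, o.price * r.rate AS converted_price
    FROM orders AS o
    JOIN rates FOR SYSTEM_TIME AS OF o.proc_time AS r
    ON o.currency = r.currency
""")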
Hi Tawfek,
Thanks for sharing. I am trying to understand what exact real-life problem
you are tackling with this approach. My understanding from skimming through
the paper is that you are concerned about some outlier event producers from
which the events can be delayed beyond what is expected in t
Dear Apache Flink Development Team,
I hope this email finds you well. I propose an exciting new feature for Apache
Flink that has the potential to significantly enhance its capabilities in
handling unbounded streams of events, particularly in the context of event-time
windowing.
As you may be
Thanks Max!
On 26/10/2023 at 15:44, Maximilian Michels wrote:
Have a great time off, Etienne!
On Thu, Oct 26, 2023 at 3:38 PM Etienne Chauchot wrote:
Hi,
FYI, I'll be off and unresponsive for a week starting tomorrow evening.
For ongoing work, please ping me before tomorrow evening or wit
Hi Martijn,
Thanks for the link. I suspect I cannot be the release manager, as I do not
have the required access, but I am happy to help this progress.
Kind regards, David.
From: Martijn Visser
Date: Friday, 27 October 2023 at 12:16
To: dev@flink.apache.org
Subject: [EXTERNAL] Re: flink-
Thanks to everyone who participated in this release!
Best
Yun Tang
From: Matthias Pohl
Sent: Friday, October 27, 2023 17:23
To: dev@flink.apache.org
Subject: Re: [ANNOUNCE] Apache Flink 1.18.0 released
Thanks to everyone who was involved and especially to the 1.
Dear developers,
FLIP-373 [1] has been accepted and voted through this thread [2].
The proposal received nine approving votes, five of which are binding, and
there is no disapproval.
Benchao Li (binding)
Lincoln Lee (binding)
Liu Ron (binding)
Jark Wu (binding)
Sergey Nuyanzin (binding)
Jiabao S
david radley created FLINK-33384:
Summary: MySQL JDBC driver is deprecated
Key: FLINK-33384
URL: https://issues.apache.org/jira/browse/FLINK-33384
Project: Flink
Issue Type: Improvement
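For anyone hitting the deprecation warning, a sketch of pointing the JDBC
connector at the current Connector/J class (com.mysql.cj.jdbc.Driver replaces
the deprecated com.mysql.jdbc.Driver; table and connection details here are
hypothetical):

from pyflink.table import TableEnvironment, EnvironmentSettings

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())
t_env.execute_sql("""
    CREATE TABLE users (
        id INT,
        name STRING
    ) WITH (
        'connector' = 'jdbc',
        'url' = 'jdbc:mysql://localhost:3306/mydb',
        'driver' = 'com.mysql.cj.jdbc.Driver',
        'table-name' = 'users'
    )
""")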
> if you strip the magic byte, and the schema has
> evolved when you're consuming it from Flink,
> you can end up with deserialization errors given
> that a field might have been deleted/added/
> changed etc.
Aren’t we already fairly dependent on the schema remaining consistent, because
otherwise
Hi Dale,
I'm struggling to understand in what cases you want to read data
serialized in connection with Confluent Schema Registry but can't get
access to the Schema Registry service. It seems like a rather exotic
situation, and it defeats the purpose of using a Schema Registry in the
first place? I
TLDR:
We currently require a connection to a Confluent Schema Registry to be able to
work with Confluent Avro data. With a small modification to the Avro formatter,
I think we could also offer the ability to process this type of data without
requiring access to the schema registry.
What would p
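For context, a minimal sketch of the framing involved (the helper name is
hypothetical): Confluent-serialized Avro prefixes each record with a magic
byte and a schema id, which is what a registry-less mode of the Avro
formatter would need to skip or interpret:

import struct

def split_confluent_frame(record: bytes) -> tuple[int, bytes]:
    # Confluent wire format: 1 magic byte (0x00), a 4-byte big-endian
    # schema id, then the plain Avro binary payload.
    magic, schema_id = struct.unpack(">BI", record[:5])
    if magic != 0:
        raise ValueError("not Confluent-framed Avro")
    return schema_id, record[5:]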
Matthias Pohl created FLINK-33383:
Summary: flink-quickstart-scala is not supported anymore since 1.17
Key: FLINK-33383
URL: https://issues.apache.org/jira/browse/FLINK-33383
Project: Flink
# problem_3.py
# call to .where() after .map() with a pandas-type function
# also resets column names
# and doesn't really filter values
import pandas as pd
from pyflink.table import TableEnvironment, EnvironmentSettings

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())
table = t_env.from_elements(
    elements=[
        (1, 'China'),
I'll copy the problems here if you prefer that.
# problem_1.py
# add_columns() resets column names to default names f0, f1, ..., fN
from pyflink.table import TableEnvironment, EnvironmentSettings

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())
table = t_env.from_elements(
    elements=[
        (1, '{"name": "Flink"}'),
        (2,
# problem_2.py
# .alias() does not work either
import json
from pyflink.table import TableEnvironment, EnvironmentSettings

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())
table = t_env.from_elements(
    elements=[
        (1, '{"name": "Flink"}'),
        (2, '{"name": "hello"}'),
        (3, '{"name": "world"}'),
        (4, '{"na
Hi David,
The release process for connector is documented at
https://cwiki.apache.org/confluence/display/FLINK/Creating+a+flink-connector+release
Best regards,
Martijn
On Fri, Oct 27, 2023 at 12:00 PM David Radley wrote:
>
> Hi Jing,
> I just spotted on the mailing list that it is a regression –
Hi everyone,
Python Table API seems to be a little bit buggy.
Some minimal examples of strange behaviors here:
https://gist.github.com/nrdhm/88322a68fc3e9a14a5f4ab6ec13403cf
Was testing in pyflink-shell in our small cluster with Flink 1.17.
Docker image: flink:1.17.1-scala_2.12-java11
The
DISTRIBUTE BY in DML is also supported by Hive, and it would be useful for
Flink as well.
Users can use this ability to increase the cache hit rate in lookup joins,
and they can use "distribute by key, rand(1, 10)" to avoid data skew problems
(see the sketch below).
I also think it is another way to solve FLIP-204 [1].
There is alrea
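To make the salting idea concrete, a sketch of the Hive-style DML
(illustrative only: DISTRIBUTE BY is not Flink SQL today, and the names are
hypothetical):

# Repartitioning by the key plus a small bounded random value spreads a
# skewed key over several parallel instances, at the cost of losing
# per-key locality:
#
#   SELECT * FROM orders
#   DISTRIBUTE BY order_key, CAST(FLOOR(RAND() * 10) AS INT)
#
# The "distribute by key, rand(1, 10)" quoted above expresses the same
# bounded random salt.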
Hi Jing,
I just spotted on the mailing list that it is a regression; I agree it is a
blocker.
Kind regards, David.
From: David Radley
Date: Friday, 27 October 2023 at 10:33
To: dev@flink.apache.org
Subject: [EXTERNAL] RE: flink-sql-connector-jdbc new release
Hi Jing,
thanks, are there any p
Hi Becket,
I checked the history of
"*table.optimizer.source.predicate-pushdown-enabled*"; it seems it was
introduced with the legacy FilterableTableSource interface and might have
been an experimental feature at that time. I don't see the necessity of
this option at the moment. Maybe we can deprecat
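For completeness, a sketch of how the option is toggled today, assuming
nothing beyond a fresh session:

from pyflink.table import TableEnvironment, EnvironmentSettings

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())
# Disabling the option keeps predicates out of every source for this
# session, regardless of whether a source implements filter pushdown.
t_env.get_config().set(
    "table.optimizer.source.predicate-pushdown-enabled", "false")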
Hi Jing,
thanks. Are there any processes documented around getting a release out? Out
of interest, what is your thinking around this being a blocker? I suspect it
is not a regression, but a really nice-to-have. WDYT?
Either way it looks interesting; I am going to have a look into this issue to
tr
Christos Hadjinikolis created FLINK-33382:
Summary: Flink Python Environment Manager Fails with Pip
--install-option in Recent Pip Versions
Key: FLINK-33382
URL: https://issues.apache.org/jira/browse/FLIN
Thanks to everyone who was involved and especially to the 1.18 release
managers. :)
On Fri, Oct 27, 2023 at 9:13 AM Yuepeng Pan wrote:
> Thanks for the great work! Congratulations to everyone involved!
>
>
> Best,
> Yuepeng Pan
>
> At 2023-10-27 15:06:40, "ConradJam" wrote:
> >Congratulations!
+1 from my side for Lincoln, Yun Tang, Jing and Martijn as release managers.
Thanks everyone for volunteering.
I tried to collect the different tasks that are part of release management
in [1]. It might help to identify responsibilities. Feel free to have a
look and/or update it. Ideally, it will
yunfan created FLINK-33381:
Summary: Support split big parquet file to multi InputSplits
Key: FLINK-33381
URL: https://issues.apache.org/jira/browse/FLINK-33381
Project: Flink
Issue Type: Improveme
Jiabao Sun created FLINK-33380:
Summary: Bump flink version on flink-connectors-mongodb
Key: FLINK-33380
URL: https://issues.apache.org/jira/browse/FLINK-33380
Project: Flink
Issue Type: Improv
Hi Timo,
Thanks for starting this discussion. I really like it!
The FLIP is already in good shape, I only have some minor comments.
1. Could we also support the HASH and RANGE distribution kinds in the DDL
syntax?
I noticed that HASH and UNKNOWN are introduced in the Java API, but not in
the syntax.
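For readers following along, a sketch of the DDL shape under discussion
(proposal-stage syntax from this FLIP, so it only parses on a version
implementing it; table and column names are hypothetical). HASH names the
distribution kind explicitly, which is what the comment above asks to expose
in the syntax as well:

from pyflink.table import TableEnvironment, EnvironmentSettings

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())
t_env.execute_sql("""
    CREATE TABLE sink_t (
        id BIGINT,
        name STRING
    ) DISTRIBUTED BY HASH(id) INTO 4 BUCKETS
    WITH ('connector' = 'blackhole')
""")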
Yubin Li created FLINK-33379:
Summary: Bump flink version on flink-connectors-elasticsearch
Key: FLINK-33379
URL: https://issues.apache.org/jira/browse/FLINK-33379
Project: Flink
Issue Type: Impr
João Boto created FLINK-33378:
Summary: Bump flink version on flink-connectors-jdbc
Key: FLINK-33378
URL: https://issues.apache.org/jira/browse/FLINK-33378
Project: Flink
Issue Type: Improvement
Thanks for the great work! Congratulations to everyone involved!
Best,
Yuepeng Pan
At 2023-10-27 15:06:40, "ConradJam" wrote:
>Congratulations!
>
>Jingsong Li 于2023年10月27日周五 13:55写道:
>
>> Congratulations!
>>
>> Thanks Jing and other release managers and all contributors.
>>
>> Best,
>> Jingson
Congratulations!
Jingsong Li 于2023年10月27日周五 13:55写道:
> Congratulations!
>
> Thanks Jing and other release managers and all contributors.
>
> Best,
> Jingsong
>
> On Fri, Oct 27, 2023 at 1:52 PM Zakelly Lan wrote:
> >
> > Congratulations and thank you all!
> >
> >
> > Best,
> > Zakelly
> >
> > O