Chirag Dewan created FLINK-33367:
Summary: Invalid Check in DefaultFileFilter
Key: FLINK-33367
URL: https://issues.apache.org/jira/browse/FLINK-33367
Project: Flink
Issue Type: Bug
macdoor615 created FLINK-33366:
--
Summary: cannot accept statement "EXECUTE STATEMENT SET BEGIN"
Key: FLINK-33366
URL: https://issues.apache.org/jira/browse/FLINK-33366
Project: Flink
Issue Typ
macdoor615 created FLINK-33365:
--
Summary: Missing filter condition in execution plan containing
lookup join with mysql jdbc connector
Key: FLINK-33365
URL: https://issues.apache.org/jira/browse/FLINK-33365
+1 (binding)
Thanks,
Zhu
Yuepeng Pan wrote on Wed, Oct 25, 2023 at 11:32:
> +1 (non-binding)
>
> Regards,
> Yuepeng Pan
>
> On 2023/10/23 08:25:30 xiangyu feng wrote:
> > Thanks for driving that.
> > +1 (non-binding)
> >
> > Regards,
> > Xiangyu
> >
> > Yu Chen wrote on Mon, Oct 23, 2023 at 15:19:
> >
> > > +1 (non-binding)
Junrui Li created FLINK-33364:
-
Summary: Introduce standard YAML for flink configuration
Key: FLINK-33364
URL: https://issues.apache.org/jira/browse/FLINK-33364
Project: Flink
Issue Type: Sub-task
+1 (binding)
- Verified signature and checksum
- Verified that no binary exists in the source archive
- Built from source with Java 8 using -Dflink.version=1.18
- Started a local Flink 1.18 cluster, submitted jobs with the SQL client
reading from and writing (with exactly-once) to a Kafka 3.2.3 cluster
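(For reference, a minimal Table API sketch of the kind of exactly-once Kafka
round trip described above; the broker address, topic names, schema, and
transactional-id prefix are made-up placeholders, not the exact statements used
in the verification.)

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class KafkaExactlyOnceSmokeTest {
        public static void main(String[] args) {
            TableEnvironment tEnv =
                    TableEnvironment.create(EnvironmentSettings.inStreamingMode());

            // Exactly-once Kafka writes only commit on checkpoints, so enable them.
            tEnv.getConfig()
                    .getConfiguration()
                    .setString("execution.checkpointing.interval", "10s");

            // Source table reading from an existing Kafka topic.
            tEnv.executeSql(
                    "CREATE TABLE orders_in (id BIGINT, amount DOUBLE) WITH ("
                            + " 'connector' = 'kafka',"
                            + " 'topic' = 'orders-in',"
                            + " 'properties.bootstrap.servers' = 'localhost:9092',"
                            + " 'properties.group.id' = 'rc-verification',"
                            + " 'scan.startup.mode' = 'earliest-offset',"
                            + " 'format' = 'json')");

            // Sink table writing back to Kafka with exactly-once delivery.
            tEnv.executeSql(
                    "CREATE TABLE orders_out (id BIGINT, amount DOUBLE) WITH ("
                            + " 'connector' = 'kafka',"
                            + " 'topic' = 'orders-out',"
                            + " 'properties.bootstrap.servers' = 'localhost:9092',"
                            + " 'sink.delivery-guarantee' = 'exactly-once',"
                            + " 'sink.transactional-id-prefix' = 'rc-verification',"
                            + " 'format' = 'json')");

            tEnv.executeSql("INSERT INTO orders_out SELECT id, amount FROM orders_in");
        }
    }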
Thanks for the proposal, Jiabao. My two cents below:
1. If I understand correctly, the motivation of the FLIP is mainly to make
predicate pushdown optional on SOME of the Sources. If so, intuitively the
configuration should be Source specific instead of general. Otherwise, we
will end up with gene
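(To make the distinction concrete: a sketch under the assumption that
'table.optimizer.source.predicate-pushdown-enabled' remains the general planner
switch while the per-source 'scan.filter-push-down.enabled' option proposed in
the FLIP is set on one table only; the table definition below is purely
illustrative and the proposed option does not exist in released connectors yet.)

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class FilterPushDownScopeSketch {
        public static void main(String[] args) {
            TableEnvironment tEnv =
                    TableEnvironment.create(EnvironmentSettings.inStreamingMode());

            // General switch: controls predicate pushdown for every source in the job.
            tEnv.getConfig()
                    .getConfiguration()
                    .setString("table.optimizer.source.predicate-pushdown-enabled", "true");

            // Source-specific switch: turns the feature off only for this one table,
            // e.g. a JDBC source where evaluating pushed-down filters is expensive.
            tEnv.executeSql(
                    "CREATE TABLE users (id BIGINT, name STRING) WITH ("
                            + " 'connector' = 'jdbc',"
                            + " 'url' = 'jdbc:mysql://localhost:3306/db',"
                            + " 'table-name' = 'users',"
                            + " 'scan.filter-push-down.enabled' = 'false')");
        }
    }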
Henning Schmiedehausen created FLINK-33363:
--
Summary: docker base images can't run the java compiler
Key: FLINK-33363
URL: https://issues.apache.org/jira/browse/FLINK-33363
Project: Flink
Hi everyone,
Please review and vote on release candidate #1 for version 3.0.1 of the
Apache Flink Kafka Connector, as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)
This release contains important changes for the following:
- Supports Fl
Hi,
I noticed another PR,
https://github.com/apache/flink/pull/23594/files#diff-9c5fb3d1b7e3b0f54bc5c4182965c4fe1f9023d449017cece3005d3f90e8e4d8
which is going to cause a conflict again unless this is merged.
Are we OK to merge https://github.com/apache/flink/pull/23469 – so I do not
need to resolve
Hi Matthias,
That sounds reasonable,
Kind regards, David
From: Matthias Pohl
Date: Monday, 23 October 2023 at 16:41
To: dev@flink.apache.org
Subject: [EXTERNAL] Re: Maven and java version variables
Hi David,
The change that caused the conflict in your PR comes from FLINK-33291
[1]. I was t
Looks good to me, +1
From: Ryan Skraba
Date: Wednesday, 25 October 2023 at 17:19
To: dev@flink.apache.org
Subject: [EXTERNAL] [VOTE] Add JSON encoding to Avro serialization
Hello!
I'm reviewing a new feature of another contributor (Dale Lane) on
FLINK-33058 that adds JSON-encoding in addition to
Hi,
I'd like to propose adding a PyFlink channel to the Apache Flink slack
workspace.
Creating a channel focussed on this will help people find previous
discussions, as well as target new discussions and questions to the correct
place. PyFlink is a sufficiently distinct component to make a d
Hello!
I'm reviewing a new feature of another contributor (Dale Lane) on
FLINK-33058 that adds JSON-encoding in addition to the binary Avro
serialization format. He addressed my original objections that JSON
encoding isn't _generally_ a best practice for Avro messages.
The discussion is pretty w
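(For readers unfamiliar with the trade-off, a standalone sketch of binary vs.
JSON encoding using the plain Avro library; this is not the serializer code
from FLINK-33058, and the schema and record are made up.)

    import java.io.ByteArrayOutputStream;

    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.io.BinaryEncoder;
    import org.apache.avro.io.Encoder;
    import org.apache.avro.io.EncoderFactory;

    public class AvroEncodingComparison {
        public static void main(String[] args) throws Exception {
            Schema schema = new Schema.Parser().parse(
                    "{\"type\":\"record\",\"name\":\"Event\",\"fields\":["
                            + "{\"name\":\"id\",\"type\":\"long\"},"
                            + "{\"name\":\"message\",\"type\":\"string\"}]}");

            GenericRecord record = new GenericData.Record(schema);
            record.put("id", 42L);
            record.put("message", "hello");

            GenericDatumWriter<GenericRecord> writer = new GenericDatumWriter<>(schema);

            // Compact binary encoding: no field names on the wire, not human readable.
            ByteArrayOutputStream binaryOut = new ByteArrayOutputStream();
            BinaryEncoder binaryEncoder = EncoderFactory.get().binaryEncoder(binaryOut, null);
            writer.write(record, binaryEncoder);
            binaryEncoder.flush();

            // JSON encoding of the same record: human readable, larger payload.
            ByteArrayOutputStream jsonOut = new ByteArrayOutputStream();
            Encoder jsonEncoder = EncoderFactory.get().jsonEncoder(schema, jsonOut);
            writer.write(record, jsonEncoder);
            jsonEncoder.flush();

            System.out.println("binary bytes: " + binaryOut.size());
            System.out.println("json: " + jsonOut.toString("UTF-8"));
        }
    }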
Thanks Jane for the detailed explanation.
I think that for users, we should respect conventions over configurations.
Conventions can be default values explicitly specified in configurations, or
they can be behaviors that follow previous versions.
If the same code has different behaviors in diffe
Hi Jiabao,
Thanks for the in-depth clarification. Here are my two cents:
However, "table.optimizer.source.predicate-pushdown-enabled" and
> "scan.filter-push-down.enabled" are configurations for different
> components(optimizer and source operator).
>
We cannot assume that every user would be interes
ConradJam created FLINK-33362:
-
Summary: Document Externalized Declarative Resource Management
With Chinese
Key: FLINK-33362
URL: https://issues.apache.org/jira/browse/FLINK-33362
Project: Flink
Hi Gordon,
Thanks for the review, here are my thoughts:
> In terms of the abstraction layering, I was wondering if you've also
considered this approach which I've quickly sketched in my local fork:
https://github.com/tzulitai/flink/commit/e84e3ac57ce023c35037a8470fefdfcad877bcae
I think we have
Martijn Visser created FLINK-33361:
--
Summary: Add Java 17 compatibility to Flink Kafka consumer
Key: FLINK-33361
URL: https://issues.apache.org/jira/browse/FLINK-33361
Project: Flink
Issue T
Feng Jiajie created FLINK-33360:
---
Summary: HybridSource fails to clear the previous round's state
when switching sources, leading to data loss
Key: FLINK-33360
URL: https://issues.apache.org/jira/browse/FLINK-33360
Thanks Hang and Lincoln for the good point.
'source.predicate-pushdown.enabled' works for me. I have updated the proposal
document.
Do we need to maintain consistency in hyphen-separated naming style between
'source.predicate-pushdown-enabled' and
'table.optimizer.source.predicate-pushdown-e
+1 (binding)
On Wed, Oct 25, 2023 at 2:03 PM liu ron wrote:
> +1(binding)
>
> Best,
> Ron
>
> Jark Wu wrote on Wed, Oct 25, 2023 at 19:52:
>
> > +1 (binding)
> >
> > Best,
> > Jark
> >
> > On Wed, 25 Oct 2023 at 16:27, Jiabao Sun
> > wrote:
> >
> > > Thanks Jane for driving this.
> > >
> > > +1 (non-binding)
Hi all,
Thanks for the lively discussion.
I agree with Jiabao. I think enabling "scan.filter-push-down.enabled"
relies on enabling "table.optimizer.source.predicate-pushdown-enabled".
It is a little strange that the planner still needs to push down the
filters when we set "scan.filter-push-down.
+1(binding)
Best,
Ron
Jark Wu wrote on Wed, Oct 25, 2023 at 19:52:
> +1 (binding)
>
> Best,
> Jark
>
> On Wed, 25 Oct 2023 at 16:27, Jiabao Sun
> wrote:
>
> > Thanks Jane for driving this.
> >
> > +1 (non-binding)
> >
> > Best,
> > Jiabao
> >
> >
> > > On Oct 25, 2023, at 16:22, Lincoln Lee wrote:
> > >
> > > +1 (binding)
+1 (binding)
Best,
Jark
On Wed, 25 Oct 2023 at 16:27, Jiabao Sun
wrote:
> Thanks Jane for driving this.
>
> +1 (non-binding)
>
> Best,
> Jiabao
>
>
> > On Oct 25, 2023, at 16:22, Lincoln Lee wrote:
> >
> > +1 (binding)
> >
> > Best,
> > Lincoln Lee
> >
> >
> > Zakelly Lan wrote on Mon, Oct 23, 2023 at 14:15:
> >
> >>
Thanks Benchao for the feedback.
I understand that global parallelism and task-level parallelism are configured
at different granularities but through the same configuration.
However, "table.optimizer.source.predicate-pushdown-enabled" and
"scan.filter-push-down.enabled" are configurations for
Thank you all for the lively discussion!
Agree with Benchao that from a user's (rather than a developer's) point of
view, it's easier to understand that fine-grained options override global
options.
In addition, for the new option 'scan.filter-push-down.enabled', would it be
better to keep the na
Thank you David!
I currently only see the 1.5.0 version as the latest, but I will check back
again later.
Cheers,
Gyula
On Wed, Oct 25, 2023 at 11:17 AM David Radley
wrote:
> Hi,
> Fyi with some expert direction from James Busche, I have published the 1.6
> OLM and operatorhub.io versions of
Hi,
FYI, with some expert direction from James Busche, I have published the 1.6 OLM
and operatorhub.io versions of the Flink operator. When 1.6.1 is out I will do
the same again.
Kind regards, David.
From: Gyula Fóra
Date: Tuesday, 10 October 2023 at 13:27
To: dev@flink.apache.org
S
Thanks Jiabao for the detailed explanations, that helps a lot; I
understand your rationale now.
Correct me if I'm wrong: your perspective is that of a "developer", which
means there is an optimizer component and a connector component, and if we
want to enable this feature (pushing filters down into connectors), yo
Hi Thomas,
Thanks for your verification and feedback!
I tried to build the flink-kubernetes-operator project with Java 17;
it's really not supported right now.
After an offline discussion with Gyula, we hope to support compiling the
Kubernetes operator with Java 17 as a critical ticket in 1.7.0. I created the
Rui Fan created FLINK-33359:
---
Summary: Kubernetes operator supports Java 17
Key: FLINK-33359
URL: https://issues.apache.org/jira/browse/FLINK-33359
Project: Flink
Issue Type: Improvement
Nice!
Thank you and everyone involved for the hard work.
Etienne
On 19/10/2023 at 10:24, Zakelly Lan wrote:
Hi everyone,
Flink benchmarks [1] generate daily performance reports in the Apache
Flink slack channel (#flink-dev-benchmarks) to detect performance
regression [2]. Those benchmarks
Thanks Jane for driving this.
+1 (non-binding)
Best,
Jiabao
> On Oct 25, 2023, at 16:22, Lincoln Lee wrote:
>
> +1 (binding)
>
> Best,
> Lincoln Lee
>
>
> Zakelly Lan wrote on Mon, Oct 23, 2023 at 14:15:
>
>> +1(non-binding)
>>
>> Best,
>> Zakelly
>>
>> On Mon, Oct 23, 2023 at 1:15 PM Benchao Li wrote:
>>>
>
+1 (binding)
Best,
Lincoln Lee
Zakelly Lan wrote on Mon, Oct 23, 2023 at 14:15:
> +1(non-binding)
>
> Best,
> Zakelly
>
> On Mon, Oct 23, 2023 at 1:15 PM Benchao Li wrote:
> >
> > +1 (binding)
> >
> > > Feng Jin wrote on Mon, Oct 23, 2023 at 13:07:
> > >
> > > +1(non-binding)
> > >
> > >
> > > Best,
> > > Feng
> > >
>
Thanks Jane for the further explanation.
These two configurations correspond to different levels.
"scan.filter-push-down.enabled" does not make
"table.optimizer.source.predicate-pushdown-enabled" invalid.
The planner will still push down predicates to all sources.
Whether filter pushdown is allowed or not is deter
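(A minimal sketch of that division of labour, assuming a connector that
implements the existing SupportsFilterPushDown ability and reads the proposed
per-source switch from its options; the class and field names are illustrative,
not code from the FLIP.)

    import java.util.Collections;
    import java.util.List;

    import org.apache.flink.table.connector.source.abilities.SupportsFilterPushDown;
    import org.apache.flink.table.expressions.ResolvedExpression;

    /** Sketch: the source itself decides whether to accept pushed-down filters. */
    public abstract class FilterAwareSource implements SupportsFilterPushDown {

        // Would be populated from the proposed 'scan.filter-push-down.enabled' option.
        private final boolean filterPushDownEnabled;

        protected FilterAwareSource(boolean filterPushDownEnabled) {
            this.filterPushDownEnabled = filterPushDownEnabled;
        }

        @Override
        public Result applyFilters(List<ResolvedExpression> filters) {
            if (!filterPushDownEnabled) {
                // The planner still offers the predicates, but the source declines:
                // nothing is accepted, everything remains for the planner to evaluate.
                return Result.of(Collections.emptyList(), filters);
            }
            // Accept the filters while also keeping them as remaining, the
            // conservative pattern when the source cannot guarantee exact evaluation.
            return Result.of(filters, filters);
        }
    }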