Apologies for the delayed response on my side.
> I think the authentication module is not part of our plan for 1.17 because
> of the heavy workload. I think we'll start the design at the end of the
> release-1.17.
Is there a possibility for us to get engaged and at least introduce initial
changes to support authentication/authorization?
+1 (non-binding)
Thanks for driving this, Danny!
Hong
On 26/10/2022, 08:14, "Martijn Visser" wrote:
+1 binding
Hangxiang Yu created FLINK-29775:
Summary: [JUnit5 Migration] Module: flink-statebackend-rocksdb
Key: FLINK-29775
URL: https://issues.apache.org/jira/browse/FLINK-29775
Project: Flink
Issue T
Hangxiang Yu created FLINK-29776:
Summary: [JUnit5 Migration] Module: flink-statebackend-changelog
Key: FLINK-29776
URL: https://issues.apache.org/jira/browse/FLINK-29776
Project: Flink
Issue
Hangxiang Yu created FLINK-29777:
Summary: [JUnit5 Migration] Module: flink-dstl
Key: FLINK-29777
URL: https://issues.apache.org/jira/browse/FLINK-29777
Project: Flink
Issue Type: Sub-task
Hi All,
Our use case is that we need to process elements for the same key
sequentially, and this processing involves async operations.
If any part of the processing fails, we store the offending message and all
subsequent incoming messages for that key in state and do not process any
further messages for
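To make the pattern concrete, here is a heavily simplified sketch of the buffering idea, assuming a KeyedProcessFunction and a placeholder callService() standing in for the async step (this is not our actual job, just an illustration):

import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Sketch only: once an element for a key fails, buffer that element and all
// later elements for the same key in state instead of processing them.
public class BufferOnFailureFunction extends KeyedProcessFunction<String, String, String> {

    private transient ValueState<Boolean> failed;   // has this key seen a failure?
    private transient ListState<String> buffered;   // elements held back for this key

    @Override
    public void open(Configuration parameters) {
        failed = getRuntimeContext().getState(
                new ValueStateDescriptor<>("failed", Types.BOOLEAN));
        buffered = getRuntimeContext().getListState(
                new ListStateDescriptor<>("buffered", Types.STRING));
    }

    @Override
    public void processElement(String value, Context ctx, Collector<String> out) throws Exception {
        if (Boolean.TRUE.equals(failed.value())) {
            buffered.add(value);          // key is "stuck": keep buffering
            return;
        }
        try {
            out.collect(callService(value));
        } catch (Exception e) {
            failed.update(true);          // mark the key as failed
            buffered.add(value);          // keep the offending element for a retry
        }
    }

    // Placeholder for the real (asynchronous) processing step.
    private String callService(String value) {
        return value;
    }
}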
Hi Martijn,
I agree with your opinion; one must consider carefully whether it's a good
tradeoff.
In my view, adding an extra load directory would be worth it because it's a
relatively small change/risk (and the structural solution is going in this
direction), but enforcing the HDFS lib is not such a good band-aid.
No matter
Hi all,
Recently I checked our repo and found there are some useless branches[1][2];
maybe they were pushed accidentally or for experimental purposes.
I would like to suggest removing them to keep the branches clean. What do you
think?
The following branches could be safely deleted from my side
hcj created FLINK-29778:
---
Summary: fix error in flink-1.13.md
Key: FLINK-29778
URL: https://issues.apache.org/jira/browse/FLINK-29778
Project: Flink
Issue Type: Bug
Components: Documentation
Thanks Jacky Lau for starting this discussion.
I understand that you are trying to find a convenient way to specify
dependency jars along with the user jar. However,
let's try to narrow this down by differentiating between deployment modes.
# Standalone mode
No matter whether you are using the standalone mode on virtual
Thanks for bringing that up. FLINK-29638-1.15 and FLINK-29638-1.16 were
definitely pushed accidentally. Sorry for missing that. I will delete
those.
Not sure whether there was some intention behind the *gha* branches, because
the effort on GitHub Actions is kind of paused right now. But I guess,
Hi Leonard,
Thanks for driving this, +1 for removing useless branches, this would make
the git tree cleaner.
And first we need to identify which branches are really useless.
On Thu, Oct 27, 2022 at 16:12, Leonard Xu wrote:
> Hi, all
>
> Recently I checked our repo and found there are some useless
> branch
Robert Metzger created FLINK-29779:
--
Summary: Allow using MiniCluster with a PluginManager to use
metrics reporters
Key: FLINK-29779
URL: https://issues.apache.org/jira/browse/FLINK-29779
Project: Fl
Hi Dev,
I'd like to start a discussion about removing FlinkKafkaConsumer and
FlinkKafkaProducer in 1.17.
Back in the past, it was originally announced that they would be removed with
Flink 1.15, after Flink 1.14 had been released[1]. This was then postponed to
the next release, which meant removing them with Flin
I would like to bring this topic up one more time. I put some more thought
into it and created FLIP-270 [1] as a follow-up of FLIP-194 [2] with an
updated version of what I summarized in my previous email. It would be
interesting to get some additional perspectives on this; more specifically,
the t
Hi Jing,
Thanks for opening the discussion. I see no issue with removing the
FlinkKafkaConsumer, since it has been marked as deprecated and the Source
API (which is used by the KafkaSource) is marked as @Public (at least the
Base implementation).
The successor of the FlinkKafkaProducer is the KafkaSink.
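For anyone migrating along with this, a rough sketch of what the replacement could look like with the new KafkaSource / KafkaSink builders (bootstrap servers, topics, group id, and the delivery guarantee below are placeholders, not recommendations):

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaMigrationSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Replacement for FlinkKafkaConsumer: the Source-API based KafkaSource.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("broker:9092")            // placeholder
                .setTopics("input-topic")                      // placeholder
                .setGroupId("example-group")                   // placeholder
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        // Replacement for FlinkKafkaProducer: the Sink-API based KafkaSink.
        KafkaSink<String> sink = KafkaSink.<String>builder()
                .setBootstrapServers("broker:9092")            // placeholder
                .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                        .setTopic("output-topic")              // placeholder
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
                .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source")
                .sinkTo(sink);

        env.execute("Kafka migration sketch");
    }
}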
qingwei zhong created FLINK-29780:
-
Summary: how to persist flink table like spark persist dataset
Key: FLINK-29780
URL: https://issues.apache.org/jira/browse/FLINK-29780
Project: Flink
Issue
Hi,
I would like to propose a solution to this JIRA issue. I looked at the
comments, and there was some guidance on which parts of the code we should
update to allow for this behaviour. But I believe there are still two
questions that remain open:
1. Is this expected behaviour (i.e. users should
lincoln lee created FLINK-29781:
---
Summary: ChangelogNormalize uses wrong keys after transformation
by WatermarkAssignerChangelogNormalizeTransposeRule
Key: FLINK-29781
URL: https://issues.apache.org/jira/browse/FLI
K Bharath created FLINK-29782:
-
Summary: Bash-Java-utils.jar still uses Log4j 2.16.0
Key: FLINK-29782
URL: https://issues.apache.org/jira/browse/FLINK-29782
Project: Flink
Issue Type: Improvement
Do not remove benchmark_request, exp_github_actions and experiment_gha_docs.
The rest can be deleted AFAICT.
On 27/10/2022 10:12, Leonard Xu wrote:
Hi, all
Recently I checked our repo and found there are some useless branches[1][2],
maybe they were pushed accidentally or experimental purpos
Thanks Matthias and Chesnay for the quick ACK.
I’ve deleted the following branches.
> FLINK-29638-1.15
> FLINK-29638-1.16
> 28733
> revert-16606-materialization_on_runtime
> release0 (updated 2 years ago)
> docs_experimental__docs (updated 2 y
Gabor Somogyi created FLINK-29783:
-
Summary: Flaky test:
KafkaShuffleExactlyOnceITCase.testAssignedToPartitionFailureRecoveryEventTime
Key: FLINK-29783
URL: https://issues.apache.org/jira/browse/FLINK-29783
Sergey Nuyanzin created FLINK-29784:
---
Summary: Build fails with There is at least one incompatibility:
org.apache.flink.api.connector.source.SourceReader
Key: FLINK-29784
URL: https://issues.apache.org/jira/brow
Hey Gang,
What I'm looking for here is a complete picture of why the change is
necessary and what the next steps are. Ultimately, refactoring any code
serves a purpose. Here, we want to refactor the Coordinator code such that
we can add a SinkCoordinator, similar to the SourceCoordinator. The FLIP
Hi Max,
I understand your concern. Since shuffling support for the Flink Iceberg sink
is not the main body of the proposal, I have just added another appendix with
more details about how to use CoordinatorContextBase and how to define a
ShufflingCoordinator.
Let me know if that does not address your concern.
Th
Elkhan Dadashov created FLINK-29785:
---
Summary: Upgrade Flink Elasticsearch-7 connector
elasticsearch.version to 7.17.0
Key: FLINK-29785
URL: https://issues.apache.org/jira/browse/FLINK-29785
Project
Hello all,
I want to help upgrade the Flink Elasticsearch connector to an 8.x.x version.
The `elasticsearch-7` connector uses Elasticsearch version `7.10.2`:
https://github.com/apache/flink-connector-elasticsearch/blob/main/flink-connector-elasticsearch7/pom.xml#L39
When the Elasticsearch server side is
Hi Gang,
Looks much better! I've actually gone through the OperatorCoordinator code.
It turns out that any operator already has an OperatorCoordinator assigned.
Also, any operator can add custom coordinator code. So it looks like you
won't have to implement any additional runtime logic to add a
Shuffl
Hi everyone,
I'd like to start the vote for FLIP-263 [1].
Thanks for your feedback and the discussion in [2][3].
The vote will be open for at least 72 hours.
Best regards,
Hangxiang.
[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-263%3A+Improve+resolving+schema+compatibility
[2] ht
Jiang Xin created FLINK-29786:
-
Summary: VarianceThresholdSelector Uses FeaturesCol as Input Param
Key: FLINK-29786
URL: https://issues.apache.org/jira/browse/FLINK-29786
Project: Flink
Issue Typ
For an empty array, it seems different engines use different data types:
Hive: string
Spark: string ?
Trino: Unknown
BigQuery: Integer
I have tried with Hive and Spark, but haven't tried with Trino and BigQuery.
I'm a little in doubt about Spark's behavior. But from my side, it seems
Spark actuall
Hi Martijn,
Some platform users may not package all the jars into the fat jar; Spark
also has --jars for dependencies:
https://stackoverflow.com/questions/29099115/spark-submit-add-multiple-jars-in-classpath
On 2022/10/27 06:48:52 Martijn Visser wrote:
> Hi Jacky Lau,
>
> Since you've sent the email
Yubin Li created FLINK-29787:
Summary: fix ci METHOD_NEW_DEFAULT issue
Key: FLINK-29787
URL: https://issues.apache.org/jira/browse/FLINK-29787
Project: Flink
Issue Type: Bug
Components:
+1 (non-binding), and thanks to Hangxiang for driving this.
On Fri, Oct 28, 2022 at 09:24, Hangxiang Yu wrote:
> Hi everyone,
>
> I'd like to start the vote for FLIP-263 [1].
>
> Thanks for your feedback and the discussion in [2][3].
>
> The vote will be open for at least 72 hours.
>
> Best regards,
> Hangxiang.
>
+1 (binding)
Thanks for driving this.
Best
Yuan
On Fri, Oct 28, 2022 at 11:17 AM yanfei lei wrote:
> +1 (non-binding), and thanks to Hangxiang for driving this.
>
>
>
> On Fri, Oct 28, 2022 at 09:24, Hangxiang Yu wrote:
>
> > Hi everyone,
> >
> > I'd like to start the vote for FLIP-263 [1].
> >
> > Thanks for your
Huang Xingbo created FLINK-29788:
Summary: StatefulJobWBroadcastStateMigrationITCase failed in
native savepoints
Key: FLINK-29788
URL: https://issues.apache.org/jira/browse/FLINK-29788
Project: Flink
Hey Leonard,
Thanks for your efforts to clean up our repo!
Best
Yuan
On Thu, Oct 27, 2022 at 11:55 PM Leonard Xu wrote:
> Thanks Matthias and Chesnay for the quick ACK.
>
> I’ve deleted the following branches.
> > FLINK-29638-1.15
> > FLINK-29638-1.16
> > 28733
> > revert-16606-materialization_o
Max,
Thanks a lot for the comments. We should clarify that the shuffle
operator/coordinator is not really part of the Flink sink
function/operator. The shuffle operator is a custom operator that can be
inserted right before the Iceberg writer operator. The shuffle operator
calculates the traffic statistic
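Just to illustrate the statistics-gathering idea (this is NOT the FLIP's actual shuffle operator/coordinator design, only a conceptual pass-through sketch with an assumed key position):

import java.util.HashMap;
import java.util.Map;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.types.Row;

// Conceptual sketch: forward records unchanged while tallying per-key record
// counts ("traffic statistics"). The real proposal would aggregate such
// statistics via an operator coordinator; none of that is shown here.
public class KeyTrafficStatsFunction extends RichMapFunction<Row, Row> {

    // Local, per-subtask counts; in the proposal these would be reported to a
    // coordinator to build a global view.
    private final Map<String, Long> localCounts = new HashMap<>();

    @Override
    public Row map(Row record) {
        String key = String.valueOf(record.getField(0)); // assumed key at position 0
        localCounts.merge(key, 1L, Long::sum);
        return record; // records continue on to the (Iceberg) writer untouched
    }
}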
Hi Hangxiang,
The current plan looks good to me, +1 (non-binding). Thanks for driving this.
Best,
Zakelly
On Fri, Oct 28, 2022 at 11:18 AM Yuan Mei wrote:
>
> +1 (binding)
>
> Thanks for driving this.
>
> Best
> Yuan
>
> On Fri, Oct 28, 2022 at 11:17 AM yanfei lei wrote:
>
> > +1(non-binding)
Sopan Phaltankar created FLINK-29789:
Summary: Fix flaky tests in CheckpointCoordinatorTest
Key: FLINK-29789
URL: https://issues.apache.org/jira/browse/FLINK-29789
Project: Flink
Issue Ty
+1 (binding)
Thanks Hangxiang for driving the FLIP.
Best,
Yun Gao
--Original Mail --
Sender: Zakelly Lan
Send Date: Fri Oct 28 12:27:01 2022
Recipients: Flink Dev
Subject:Re: [VOTE] FLIP-263: Improve resolving schema compatibility
Hi Hangxiang,
The current pla
Hi.
> Is there a possibility for us to get engaged and at least introduce
initial changes to support authentication/authorization?
Yes. You can write a FLIP about the design and the changes. We can discuss it
on the dev mailing list. If the FLIP passes, we can develop it together.
> Another question about
+1 (binding)
Thanks for driving this!
Best,
Godfrey
On Fri, Oct 28, 2022 at 13:50, Yun Gao wrote:
>
> +1 (binding)
>
> Thanks Hangxiang for driving the FLIP.
>
> Best,
> Yun Gao
>
>
>
>
> --Original Mail --
> Sender: Zakelly Lan
> Send Date: Fri Oct 28 12:27:01 2022
> Recipien
Jane Chan created FLINK-29790:
-
Summary: Table Store Catalog implement getDatabase
Key: FLINK-29790
URL: https://issues.apache.org/jira/browse/FLINK-29790
Project: Flink
Issue Type: Improvement
The Apache Flink community is very happy to announce the release of Apache
Flink 1.16.0, which is the first release for the Apache Flink 1.16 series.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.