future created FLINK-24401:
--
Summary: TM cannot exit after Metaspace OOM
Key: FLINK-24401
URL: https://issues.apache.org/jira/browse/FLINK-24401
Project: Flink
Issue Type: Bug
Components:
jacky jia created FLINK-24400:
-
Summary: Does the Elasticsearch connector support UpdateByQueryRequest?
Key: FLINK-24400
URL: https://issues.apache.org/jira/browse/FLINK-24400
Project: Flink
Iss
> Apart from this being `@PublicEvolving`
From my perspective, annotating 'DynamicTableSink' as a 'PublicEvolving'
class is not reasonable, because it means devs can easily change the basic
API that all downstream connectors depend on when iterating Flink from
1.12 to 1.1
>
> I think we have compile-time checks for breaking changes in `@Public`
> marked classes/interfaces using japicmp [1].
Nice, thanks for pointing that out, I'll take a closer look at it ;)
Best,
D.
On Tue, Sep 28, 2021 at 4:14 PM Piotr Nowojski wrote:
> > - We don't have any safeguards for
> - We don't have any safeguards for stable API breaks. Big +1 for Ingo's
effort with architectural tests [3].
I think we have compile-time checks for breaking changes in `@Public`
marked classes/interfaces using japicmp [1].
Piotrek
[1] https://github.com/apache/flink/blob/master/pom.xml#L201
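For context, japicmp keys on Flink's stability annotations. A minimal sketch of how such an API surface is declared (the interface below is made up for illustration; only the annotations from flink-annotations are real):

```java
import org.apache.flink.annotation.Public;
import org.apache.flink.annotation.PublicEvolving;

/** Hypothetical connector-facing interface, used only to illustrate the annotations. */
@Public // japicmp flags binary-incompatible changes to types annotated like this
public interface ExampleSinkFactory {

    /** Stable method: changing its signature should fail the japicmp check. */
    ExampleSink createSink();

    /** Nested type that may still change between minor releases. */
    @PublicEvolving
    interface ExampleSink {
        void write(String record);
    }
}
```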
This is a super interesting topic and there is already a great discussion.
Here are a few thoughts:
- There is a delicate balance between fast delivery of new features and
API stability. Even though we should be careful with breaking evolving
interfaces, it shouldn't stop us from making fast pro
Thanks Piotr for the kind reply. What confused me is that
`SourceReaderContext` was marked @Public when it was introduced in Flink 1.11,
then corrected to @PublicEvolving in 1.11 -_-, and finally changed to
@Public again...
As Flink and the Flink ecosystem (Flink CDC connectors) develo
Thanks for starting the discussion. I think both issues are valid concerns
that we need to tackle. I guess the biggest issue is that it's currently just
not possible to write one connector that runs on both Flink 1.13 and 1.14, so
we make it much harder for devs in the ecosystem (and our goal is to make it
eas
Timo Walther created FLINK-24399:
Summary: Make handling of DataType less verbose
Key: FLINK-24399
URL: https://issues.apache.org/jira/browse/FLINK-24399
Project: Flink
Issue Type: Improvement
We already have such tooling via japicmp; it's just that it is only
enabled for Public APIs.
You can probably generate a report via japicmp for all
PublicEvolving/Experimental APIs as well.
On 28/09/2021 15:17, Ingo Bürk wrote:
Hi everyone,
I think it would be best to support this process w
Hi everyone,
I think it would be best to support this process with tooling as much as
possible, because humans are bound to make mistakes. FLINK-24138[1] should
be a first step in this direction, but it wouldn't catch the cases
discussed here.
Maybe we should consider "capturing" the public API in
Hi Leonard,
Sorry that this caused you trouble; however, that change in the return type
was made while this class was still marked as `@PublicEvolving` [1]. As of
1.13.x `SourceReaderContext` was `@PublicEvolving`, and it was marked as
`@Public` only starting from Flink 1.14.0 [2]. Probably wha
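To spell out why such a return-type change bites connector jars at runtime rather than at compile time: it is usually source-compatible but not binary-compatible, because the JVM resolves methods by their exact descriptor, return type included. A self-contained sketch with made-up names (not the actual Flink classes):

```java
/** Demonstrates source-compatible but binary-incompatible return-type narrowing. */
public class CompatDemo {
    interface Group {}
    interface SpecialGroup extends Group {}

    // "Old" API: metricGroup() returns the general type.
    interface ContextV1 { Group metricGroup(); }
    // "New" API narrows the return type to a subtype.
    interface ContextV2 { SpecialGroup metricGroup(); }

    public static void main(String[] args) {
        // Code written against V1 still compiles against V2 (a SpecialGroup is a Group),
        // so the change looks harmless in source form.
        ContextV2 ctx = () -> new SpecialGroup() {};
        Group group = ctx.metricGroup();
        System.out.println("resolved " + group);
        // But a class file compiled against V2 stores the descriptor
        // metricGroup()LSpecialGroup; running it against a V1 jar fails with
        // NoSuchMethodError, which is what breaks cross-version connector jars.
    }
}
```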
>>
>> Not sure if this will happen in 1.15 already. We will need automated
>> compatibility tests and a well-defined list of stable APIs.
> We are
> trying to provide forward compatibility: applications using `@Public` APIs
> compiled against Flink 1.12.x should work fine in Flink 1.13.x
Unfo
Dong Lin created FLINK-24398:
Summary: KafkaSourceFetcherManager should re-use an existing
SplitFetcher to commit offset if possible
Key: FLINK-24398
URL: https://issues.apache.org/jira/browse/FLINK-24398
Hi,
> we find the iceberg-flink-runtime.jar built
> by Flink 1.13 cannot work in Flink 1.12 clusters because the basic
> API compatibility was broken when iterating Flink 1.12 to Flink 1.13.2:
Apart from this being `@PublicEvolving`, one thing to note here is that we
do not guarantee such c
I'm happy to announce that we have unanimously approved this release.
There are 7 approving votes, 3 of which are binding:
* Xintong Song (binding)
* Zhu Zhu (binding)
* Piotr Nowojski (binding)
* Yangze Guo
* Jing Zhang
* Matthias Pohl
* Leonard Xu
There are no disapproving votes.
Fabian Paul created FLINK-24397:
---
Summary: Reduce TableSchema usage in Table API connectors
Key: FLINK-24397
URL: https://issues.apache.org/jira/browse/FLINK-24397
Project: Flink
Issue Type: Im
+1 (non-binding)
- verified signatures and checksums
- started a cluster, ran a WordCount job, the result is as expected, no
suspicious log output
- started the SQL Client, ran some SQL queries in it, the result is as
expected
- the web PR looks good
Best,
Leonard
> On Sep 28, 2021, at 17:26, Matthi
Thank you all for helping to verify the release. Really appreciated! I
will conclude the vote in a separate thread.
Best,
Dawid
On 28/09/2021 11:26, Matthias Pohl wrote:
> +1 (non-binding)
>
> * verified checksums
> * built binaries from sources
> * executed example job and played around with it
I opened https://issues.apache.org/jira/browse/FLINK-24396 to track this
effort.
Not sure if this will happen in 1.15 already. We will need automated
compatibility tests and a well-defined list of stable APIs.
We can also do this incrementally and start with the interfaces for
connectors.
Timo Walther created FLINK-24396:
Summary: Add @Public annotations to Table & SQL API
Key: FLINK-24396
URL: https://issues.apache.org/jira/browse/FLINK-24396
Project: Flink
Issue Type: New Feature
Robert Metzger created FLINK-24395:
--
Summary: Checkpoint trigger time difference between log statement
and web frontend
Key: FLINK-24395
URL: https://issues.apache.org/jira/browse/FLINK-24395
Project
Marios Trivyzas created FLINK-24394:
---
Summary: Refactor scalar function testing infrastructure to allow
testing multiple columns
Key: FLINK-24394
URL: https://issues.apache.org/jira/browse/FLINK-24394
Marios Trivyzas created FLINK-24393:
---
Summary: Add tests for all currently supported cast combinations
Key: FLINK-24393
URL: https://issues.apache.org/jira/browse/FLINK-24393
Project: Flink
Thanks @openinx for the feedback, this will definitely help the Flink community.
Recently, we also developed a series of connectors in the Flink CDC project [1].
They are based on Flink version 1.13.1, but many users still use Flink version
1.12.* in production. They have encountered similar problems,
+1 (non-binding)
* verified checksums
* built binaries from sources
* executed example job and played around with it removing task managers
* checked that both Scala 2.11 and 2.12 artifacts are available
* executed Ververica Platform e2e tests on 1.14.0 RC3
No issues observed.
Best,
Matthias
O
I believe I mentioned this before in the community: we (Zeppelin) use the
Flink API as well and would like to support multiple versions of Flink in one
Zeppelin version. For now we have to use reflection to achieve that.
https://github.com/apache/zeppelin/tree/master/flink
On Tue, Sep 28, 2021 at 5 PM, OpenInx wrote
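For reference, the reflection workaround mentioned above can look roughly like this; a minimal sketch with illustrative names, not Zeppelin's actual code:

```java
import java.lang.reflect.Method;

/** Calls an API whose signature differs between Flink versions without binding to it. */
public final class ReflectiveCompat {

    /**
     * Invokes readerContext.metricGroup() reflectively. Because the call site no longer
     * embeds the return type in a method descriptor, the same jar works whether the
     * method returns the 1.13 type or the narrowed 1.14 type.
     */
    public static Object metricGroupOf(Object readerContext) {
        try {
            Method method = readerContext.getClass().getMethod("metricGroup");
            method.setAccessible(true); // implementation classes may not be public
            return method.invoke(readerContext);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Incompatible Flink version on the classpath", e);
        }
    }
}
```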
Robert Metzger created FLINK-24392:
--
Summary: Upgrade presto s3 fs implementation to Trino >= 348
Key: FLINK-24392
URL: https://issues.apache.org/jira/browse/FLINK-24392
Project: Flink
Issu
Thanks for the information, Martijn & Timo!
> Since implementing a connector is not straightforward, we were expecting
that not many users implement custom connectors.
Currently, Apache Iceberg & Hudi depend heavily on the
PublicEvolving API for their Flink connectors. I think Apach
Hi Zheng,
I'm very sorry for the inconvenience that we have caused with our API
changes. We are trying our best to avoid API-breaking changes. Thanks
for giving us feedback.
There was a reason why the Table API was marked as @PublicEvolving
instead of @Public. Over the last two years, we ha
Shengnan YU created FLINK-24391:
---
Summary: flink-json formats should support an option to extract
the raw message into a column
Key: FLINK-24391
URL: https://issues.apache.org/jira/browse/FLINK-24391
Pr
+1 (binding)
I've checked:
- licenses of dependencies that changed between 1.13.0 and 1.14.0 and whether
the appropriate notices are included in the NOTICE file
- blog post
- whether examples are working
Best,
Piotrek
On Mon, Sep 27, 2021 at 10:43, JING ZHANG wrote:
> +1 (non-binding)
> - bu