Hi all,
This has been a wonderful discussion.
I think we can make it clear that @PublicEvolving does not guarantee
cross-version compatibility, which means that once a connector wants
to use new features, it cannot be cross-version compatible.
Instead, the connector must maintain multiple versions. Ma
Hi Arvid,
Yes, reflection can solve this problem on the connector side, but this
increases the burden on connectors.
That also means all the existing released source connectors which use
"metricGroup" will not work on 1.14,
and they have to release a new version. It's just a problem of who changes
Hi Jark,
I also don't see it as a blocker issue at all. If you want to access the
metric group across 1.13 and 1.14, you can use
(MetricGroup) context.getClass().getMethod("metricGroup").invoke(context)
But of course, you will not be able to access the new standardized metrics.
For that you will
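To make the reflection workaround above concrete, here is a minimal, self-contained sketch. The `FakeContext` class is a stand-in for `SourceReaderContext`, used only so the example compiles without Flink on the classpath; the point is the pattern: looking the method up by name sidesteps the bytecode-level return-type mismatch between 1.13 (`MetricGroup`) and 1.14 (`SourceReaderMetricGroup`).

```java
import java.lang.reflect.Method;

public class ReflectiveMetricAccess {

    // Stand-in for org.apache.flink.api.connector.source.SourceReaderContext,
    // only here to keep the example self-contained.
    public static class FakeContext {
        public Object metricGroup() {
            return "stand-in-metric-group"; // real code would return a MetricGroup
        }
    }

    // Invoke metricGroup() reflectively. The method's declared return type is
    // part of the JVM method descriptor, so a connector compiled against the
    // 1.13 signature fails with NoSuchMethodError on 1.14; a by-name lookup
    // does not encode the return type and therefore links against both.
    public static Object metricGroupOf(Object context) throws Exception {
        Method m = context.getClass().getMethod("metricGroup");
        return m.invoke(context);
    }

    public static void main(String[] args) throws Exception {
        Object group = metricGroupOf(new FakeContext());
        System.out.println(group); // prints "stand-in-metric-group"
    }
}
```

In a real connector the result would then be cast to `MetricGroup` at the call site, as in the one-liner quoted above.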
Hi,
I don't understand why we are talking about this being a blocker issue? New
sources were not marked as @Public for a good reason until 1.14. I agree,
we should try better at making APIs @Public sooner. I was even proposing to
create strict timeout rules (enforced via some automated checks) lik
Hi Arvid,
> Should we expect connector devs to release different connector binaries
for different Flink minors?
From the discussion in this thread, I think the answer is obviously "no";
otherwise OpenInx wouldn't have started
this discussion. As a maintainer of flink-cdc-connector, I have to say
that it'
The issue is that if we do not mark them as Public, we will always have
incompatibilities. The change of SourceReaderContext#metricGroup is
perfectly fine according to the annotation. The implications that we see
here just mean that the interfaces have been expected to be Public.
And now the quest
Hi,
> [...] but also the new Source/Sink APIs as public
I'm not really involved in the new Source/Sink APIs and will happily listen
to the developers working with them here, but since they are new, we should
also be careful not to mark them as stable too quickly. We've only begun
updating the exi
Hi all,
Nice thread and great discussion! The ecosystem is one of the most
important things for the Flink community, and we should pay more attention
to API compatibility.
Marking all connector APIs @Public is a good idea: not only the Table/SQL
connector APIs, but also the new Source/Sink API
> Apart from this being `@PublicEvolving`
From my perspective, annotating 'DynamicTableSink' as a
'PublicEvolving' class is not reasonable, because it means devs could
easily change a basic API that all downstream connectors depend on
when iterating Flink from 1.12 to 1.1
>
> I think we have compile-time checks for breaking changes in `@Public`
> marked classes/interfaces using japicmp [1].
Nice, thanks for pointing that out, I'll take a closer look at it ;)
Best,
D.
On Tue, Sep 28, 2021 at 4:14 PM Piotr Nowojski wrote:
> - We don't have any safeguards for stable API breaks. Big +1 for Ingo's
effort with architectural tests [3].
I think we have compile-time checks for breaking changes in `@Public`
marked classes/interfaces using japicmp [1].
Piotrek
[1] https://github.com/apache/flink/blob/master/pom.xml#L201
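For readers unfamiliar with japicmp: below is a minimal sketch of how such a check can be wired up with the japicmp-maven-plugin. The exact baseline version and module coordinates here are illustrative assumptions; the linked Flink pom [1] is the authoritative configuration.

```xml
<plugin>
  <groupId>com.github.siom79.japicmp</groupId>
  <artifactId>japicmp-maven-plugin</artifactId>
  <configuration>
    <oldVersion>
      <dependency>
        <!-- baseline release to compare the current build against
             (module and version are illustrative) -->
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-core</artifactId>
        <version>1.13.2</version>
      </dependency>
    </oldVersion>
    <parameter>
      <!-- only classes/interfaces annotated @Public are checked;
           extending this include to @PublicEvolving would surface
           changes such as SourceReaderContext#metricGroup -->
      <includes>
        <include>@org.apache.flink.annotation.Public</include>
      </includes>
      <breakBuildOnBinaryIncompatibleModifications>true</breakBuildOnBinaryIncompatibleModifications>
    </parameter>
  </configuration>
  <executions>
    <execution>
      <phase>verify</phase>
      <goals>
        <goal>cmp</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

With a configuration along these lines, a binary-incompatible change to an annotated interface fails the build instead of slipping into a release.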
This is a super interesting topic and there is already a great discussion.
Here are few thoughts:
- There is a delicate balance between fast delivery of the new features and
API stability. Even though we should be careful with breaking evolving
interfaces, it shouldn't stop us from making fast pro
Thanks Piotr for the kind reply. What confused me is that
`SourceReaderContext` was marked @Public when it was born in Flink 1.11,
then it was corrected to @PublicEvolving in 1.11 -_-, and finally it was
changed to @Public again...
As Flink and Flink ecosystem(Flink CDC connectors) develo
Thanks for starting the discussion. I think both issues are valid concerns
that we need to tackle. I guess the biggest issue is that now it's just not
possible to write 1 connector that runs for Flink 1.13 and 1.14, so we make
it much harder for devs in the ecosystem (and our goal is to make it
eas
We already have such tooling via japicmp; it's just that it is only
enabled for Public APIs.
You can probably generate a report via japicmp for all
PublicEvolving/Experimental APIs as well.
On 28/09/2021 15:17, Ingo Bürk wrote:
Hi everyone,
I think it would be best to support this process w
Hi everyone,
I think it would be best to support this process with tooling as much as
possible, because humans are bound to make mistakes. FLINK-24138[1] should
be a first step in this direction, but it wouldn't catch the cases
discussed here.
Maybe we should consider "capturing" the public API in
Hi Leonard,
Sorry that this caused you trouble; however, that change in the return type
was done while this class was still marked as `@PublicEvolving` [1]. As
of 1.13.x `SourceReaderContext` was `@PublicEvolving`, and it was marked as
`@Public` only starting from Flink 1.14.0 [2]. Probably wha
>>
>> Not sure if this will happen in 1.15 already. We will need automated
>> compatibility tests and a well-defined list of stable APIs.
> We are
> trying to provide forward compatibility: applications using `@Public` APIs
> compiled against Flink 1.12.x should work fine in Flink 1.13.x
Unfo
Hi,
> we find the iceberg-flink-runtime.jar built
> by Flink 1.13 cannot work in Flink 1.12 clusters because basic
> API compatibility was broken when iterating Flink 1.12 to Flink 1.13.2:
Apart from this being `@PublicEvolving` one thing to note here is that we
do not guarantee such c
I opened https://issues.apache.org/jira/browse/FLINK-24396 to track this
effort.
Not sure if this will happen in 1.15 already. We will need automated
compatibility tests and a well-defined list of stable APIs.
We can also do this incrementally and start with the interfaces for
connectors.
Thanks @peninx for the feedback; this will definitely help the Flink community.
Recently, we also developed a series of connectors in the Flink CDC project [1].
They are based on Flink 1.13.1, but many users still use Flink 1.12.*
in production, and they have encountered similar problems,
I believe I mentioned this before in the community: we (Zeppelin) use the
Flink API as well and would like to support multiple versions of Flink in one
Zeppelin version. For now we have to use reflection to achieve that.
https://github.com/apache/zeppelin/tree/master/flink
OpenInx wrote on Tue, Sep 28, 2021 at 5
Thanks for the information, Martijin & Timo !
> Since implementing a connector is not straightforward, we were expecting
that not many users implement custom connectors.
Currently, Apache Iceberg & Hudi depend heavily on the
PublicEvolving API for their Flink connectors. I think Apach
Hi Zheng,
I'm very sorry for the inconvenience that we have caused with our API
changes. We are trying our best to avoid API breaking changes. Thanks
for giving us feedback.
There was a reason why the Table API was marked as @PublicEvolving
instead of @Public. Over the last two years, we ha
Hi Zheng,
Thanks for reaching out and sharing your frustration. No feelings are hurt
and feedback is always welcome, because that's the only way we can improve
for the future. API compatibility is a really important thing for us while
also improving and building new capabilities. Let me investigat
Sorry about the unfriendly tone of my last e-mail; I got frustrated about
the experience of maintaining a project that is closely tied to Flink. My
intention was to remind everyone to be careful about API
compatibility, and I didn't really watch the tone I used.
Hope that doesn't hurt