Hi Zheng,

Thanks for reaching out and sharing your frustration. No feelings are hurt,
and feedback is always welcome, because that's the only way we can improve
for the future. API compatibility is really important to us, even as we
improve and build new capabilities. Let me investigate what happened on our
end, share the findings, and then try to draw some lessons for the future.
I'll get back to you in a couple of days.

Best regards,

Martijn Visser | Product Manager

mart...@ververica.com

<https://www.ververica.com/>


Follow us @VervericaData

--

Join Flink Forward <https://flink-forward.org/> - The Apache Flink
Conference

Stream Processing | Event Driven | Real Time


On Tue, 28 Sept 2021 at 07:39, OpenInx <open...@gmail.com> wrote:

> Sorry about the unfriendly tone of my last e-mail. I got frustrated by
> the experience of maintaining a project that is closely tied to Flink. My
> intention was to remind everyone to be careful about API compatibility,
> and I didn't watch the tone I used.
>
> Hope that doesn't hurt anyone's feelings.
>
> On Tue, Sep 28, 2021 at 12:33 PM OpenInx <open...@gmail.com> wrote:
>
> > Hi Dev
> >
> > We are trying to upgrade the Flink version from 1.12.0 to 1.13.2 in the
> > Apache Iceberg project (https://github.com/apache/iceberg/pull/3116),
> > but it has not been a great experience. We expected to support both
> > Flink 1.12 and Flink 1.13 in a single iceberg-flink module, without
> > using any new Flink 1.13 APIs, to save maintenance cost. However, we
> > found that the iceberg-flink-runtime.jar built against Flink 1.13 does
> > not work on Flink 1.12 clusters, because basic API compatibility was
> > broken between Flink 1.12 and Flink 1.13.2:
> >
> > (The following is copied from the Iceberg issue:
> > https://github.com/apache/iceberg/issues/3187#issuecomment-928755046)
> >
> > Thanks for the report, @Reo-LEI! I think this issue was introduced by
> > this Apache Flink PR (
> > https://github.com/apache/flink/pull/15316/files#diff-bd276ed951054125b39428ee61de103d9c7832246398f01514a574bb8e51757cR74
> > ) and FLINK-21913 (https://issues.apache.org/jira/browse/FLINK-21913),
> > which changed the return type from CatalogTable to ResolvedCatalogTable
> > without any compatibility guarantee. As a result, the
> > iceberg-flink-runtime jar compiled against Apache Flink 1.13 includes
> > the ResolvedCatalogTable class inside it. When we package this jar and
> > submit the Flink job to a Flink 1.12 cluster, the above compatibility
> > issue occurs.
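For anyone following along, the break described above is a binary (not source) incompatibility: the JVM links each call by the full method descriptor, which includes the return type. A minimal, self-contained sketch of the mechanism, using CharSequence/String as stand-ins for Flink's CatalogTable/ResolvedCatalogTable (the interface and method names below are illustrative, not Flink's real ones):

```java
import java.lang.reflect.Method;

interface ContextV12 {            // stand-in for the Flink 1.12 interface
    CharSequence getCatalogTable();
}

interface ContextV13 {            // stand-in for the Flink 1.13 interface
    String getCatalogTable();     // narrower return type, same source-level call site
}

public class DescriptorDemo {
    public static void main(String[] args) throws Exception {
        Method m12 = ContextV12.class.getMethod("getCatalogTable");
        Method m13 = ContextV13.class.getMethod("getCatalogTable");
        // Same method name and parameter list, but different JVM descriptors:
        System.out.println(m12.getReturnType().getName()); // java.lang.CharSequence
        System.out.println(m13.getReturnType().getName()); // java.lang.String
        // A connector compiled against the V13 interface embeds the String
        // return type in its invokeinterface descriptor; running that
        // bytecode against V12 throws NoSuchMethodError, even though
        // String is a CharSequence and the source code never changed.
    }
}
```

This is why the change is source-compatible (ResolvedCatalogTable implements CatalogTable, so code recompiles cleanly) yet still breaks jars compiled against 1.13 when they run on a 1.12 cluster.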
> >
> > As we all know, the DynamicTableFactory (
> > https://github.com/apache/flink/blob/99c2a415e9eeefafacf70762b6f54070f7911ceb/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/DynamicTableFactory.java
> > ) is a basic API that almost all Flink connectors are built on top of.
> > Breaking its compatibility makes it really hard for downstream projects
> > to deliver good compatibility to their users, unless we in Iceberg
> > maintain a separate module for each supported Flink version (which is
> > not something we want to do).
> >
> > The last Flink upgrade was also not a good experience (see the
> > discussion (https://github.com/apache/iceberg/pull/1956) and comment (
> > https://github.com/apache/iceberg/pull/1956#discussion_r546534299)),
> > because Flink 1.12 also broke several APIs that were annotated
> > PublicEvolving in Flink 1.11.0. That became one of the main reasons we
> > decided to stop supporting Flink 1.11.0 in our Apache Iceberg branch
> > (supporting new features, such as the FLIP-27 unified Iceberg
> > source/sink, that depend on APIs introduced in Flink 1.12 was another
> > reason). To better support the compatibility of downstream systems and
> > deliver a better experience to Flink users, I would strongly suggest
> > that the Apache Flink community pay more attention to ensuring API
> > compatibility.
> >
> >
> > Zheng Hu (openinx)
> >
> > Thanks.
> >
>
