Hi Thomas,
thanks for the feedback and clarification. The Flink community should then
make it clear that downstream developers are expected to keep multiple
implementations for different Flink versions where necessary, which is also
a valid approach (see the sketch below), so that we can focus on backward
compatibility.
Hopef
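To make the "multiple implementations" option concrete, here is a rough
sketch of the shim pattern a downstream project could use. All class names
below are made up for illustration; only EnvironmentInformation#getVersion()
is an existing Flink utility:

import org.apache.flink.runtime.util.EnvironmentInformation;

/** Hypothetical loader that picks a per-Flink-version implementation at runtime. */
public final class FlinkShimLoader {

    /** Interface implemented once per supported Flink version. */
    public interface FlinkShim {
        String describe();
    }

    public static FlinkShim load() {
        // Returns e.g. "1.12.2" or "1.13.0" for the distribution the job runs on.
        String version = EnvironmentInformation.getVersion();
        if (version.startsWith("1.13")) {
            return () -> "delegating to the Flink 1.13.x implementation";
        } else if (version.startsWith("1.12")) {
            return () -> "delegating to the Flink 1.12.x implementation";
        }
        throw new UnsupportedOperationException("Unsupported Flink version: " + version);
    }
}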
Hi Jing,
AFAIK most of the pain is caused by the lack of backward compatibility
(binary). And to make sure I'm not adding to the confusion: it would be
necessary to be able to run the Iceberg connector built against Flink
1.12 with a Flink 1.13 distribution. That would solve most problems
downstream.
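To spell out what "binary" means here: a connector jar compiled against
Flink 1.12 contains bytecode references to exact Flink method signatures,
so if 1.13 removes or changes one of them, the 1.12-built jar fails at
runtime with a LinkageError such as NoSuchMethodError. A minimal,
self-contained sketch of that failure mode follows; the class and method
names are invented, not a real Flink API:

/** Simulates the link-time lookup a 1.12-built connector performs on a 1.13 runtime. */
public class BinaryCompatSketch {

    // Stand-in for a Flink class whose method was removed or changed in "1.13".
    static class SomeFlinkApiClass {
        // Map<String, String> getOptions() { ... }  // imagine this existed in "1.12" only
    }

    public static void main(String[] args) {
        try {
            // The connector's bytecode resolves this exact signature at link time;
            // reflection is used here only to simulate that lookup.
            SomeFlinkApiClass.class.getMethod("getOptions");
            System.out.println("API still present: the 1.12-built connector keeps working");
        } catch (NoSuchMethodException e) {
            // On a real cluster this surfaces as java.lang.NoSuchMethodError the first
            // time the old connector jar touches the missing method.
            System.out.println("API gone: the 1.12-built connector breaks on 1.13");
        }
    }
}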
Hi Piotrek,
thanks for asking. To be honest, I hope it would be good enough if Flink
only provided backward compatibility, which is easier than providing the
forward compatibility described in the proposal. That is also one of the
reasons why I started this discussion. If, after the discussion, t
Hi Jing,
I haven't yet fully reviewed the FLIP document, but I wanted to clarify
something.
> Flink Forward Compatibility
> Based on the previous clarification, Flink forward compatibility should
> mean that Flink jobs or ecosystems like external connectors/formats built
> with newer Flink version
Hi everyone,
with great interest I have read all discussions [1][2][3] w.r.t. the (API?)
compatibility issues. The feedback coming from the Flink user's point of
view is very valuable. Many thanks for it. In these discussions, there were
many explanations that talked about backward and forward compatibility