I built the Flink master branch and tried running this simple Flink app that
uses a Java record:
https://github.com/kurtostfeld/flink-kryo-upgrade-demo/blob/main/flink-record-demo/src/main/java/demo/app/Main.java
It fails with the usual exception that Kryo 2.x throws when you try to
serialize a Java record.
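
For context, here is a minimal sketch of the kind of pipeline the linked demo
runs. The Person record, its fields, and the job name are illustrative, not
copied from Main.java; the point is that Flink's type extraction has no
dedicated handling for records, classifies them as generic types, and falls
back to Kryo:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class Main {

    // A record has no no-arg constructor and only final fields,
    // which is exactly what Kryo 2.x cannot handle by default.
    public record Person(String name, int age) {}

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Person is not a POJO by Flink's rules (no default constructor,
        // no setters), so the runtime falls back to the Kryo serializer
        // and the job fails as soon as the elements are serialized.
        env.fromElements(new Person("Alice", 1), new Person("Bob", 2))
           .print();

        env.execute("flink-record-demo");
    }
}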
Hi Becket,
Thanks for this FLIP! Having a deprecation process is really important. I
understand some people’s concerns about the additional burden for project
maintainers, but my personal experience with Kafka has been that it’s very
liveable and well worth it for the benefit to users. In fact…
Hi All,
The @Public -> @PublicEvolving transition proposed by Xintong is a great idea,
especially after he suggested @PublicRetired, i.e. @PublicEvolving --(2
minor releases)--> @Public --> @Deprecated --(1 major
release)--> @PublicRetired. It will provide a lot of flexibility without
breaking any rules we had.
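
To make the proposed lifecycle concrete, a small hedged sketch: @Public and
@PublicEvolving exist today in org.apache.flink.annotation, @PublicRetired is
only proposed in this thread, and ExampleSink is a made-up interface used
purely for illustration.

import org.apache.flink.annotation.PublicEvolving;

// Release N: the API enters the public surface as evolving. Under the
// proposal it would be promoted to @Public after 2 minor releases, later
// marked @Deprecated, and become @PublicRetired 1 major release after that.
@PublicEvolving
public interface ExampleSink {
    void write(String record) throws Exception;
}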
Hi Becket,
Thanks for the clarification. Sorry that I didn't make myself clear enough.
Let me share more thoughts.
> Deprecating an API is just a more elegant way of replacing an API with a
> new one. The only difference between the two is whether the old API is kept
> and coexists with the new one.
> Perhaps he could weigh in on whether the combination of automated tests
> plus those smoke tests should be sufficient for testing with new Flink
> versions.
What we usually did at the bare minimum for new StateFun releases was the
following:
1. Build tests (including the smoke tests in the e2e module) …
Alexander Fedulov created FLINK-32373:
--------------------------------------
Summary: Support passing headers with SQL Client gateway requests
Key: FLINK-32373
URL: https://issues.apache.org/jira/browse/FLINK-32373
Project: Flink
Samrat Deb created FLINK-32372:
--------------------------------------
Summary: flink-connector-aws: build on pull request /
compile_and_test doesn't support Flink 1.16.2 and 1.17.1
Key: FLINK-32372
URL: https://issues.apache.org/jira/browse/FLINK-32372