A few thoughts from my side:
(1) The client needs a big refactoring / cleanup. It should use a proper HTTP
client library to help with future authentication mechanisms.
Once that is done, we should identify a "client API" that we make stable,
just as we do for the DataStream / DataSet API (a rough sketch of what such an
API could look like follows below).
(2) We will most lik
so let's take a look...
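To make point (1) above more concrete, here is a purely hypothetical sketch of what a small, stable "client API" surface could look like. None of these names exist in Flink today; they are only meant to illustrate the idea:

// Hypothetical sketch (not an existing Flink interface): a minimal, version-stable
// client facade that exposes only simple value types instead of internal classes.
public interface FlinkJobClient extends AutoCloseable {

    /** Submit a packaged job (jar + entry point + arguments) and return its job id. */
    String submitJob(java.nio.file.Path jarFile, String entryClass, String... args) throws Exception;

    /** Query the current status of a previously submitted job. */
    String getJobStatus(String jobId) throws Exception;

    /** Request cancellation of a running job. */
    void cancelJob(String jobId) throws Exception;
}

The point of such a facade is that it only deals in stable, simple types (paths, strings, ids), so it could be kept compatible across releases independently of internal runtime classes.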
binary client compatibility: The key issue I see hasn't changed since
the last time this was brought up: clients rely on the JobGraph to
submit the job, which is an internal data structure. AFAIK there will
also be changes made to said class soon(ish). So long as we don'
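To illustrate the coupling described above, here is a rough sketch; the cluster-client interface is a made-up placeholder, and actual method names/signatures differ between Flink versions:

// Rough illustration of the problem: the client holds a JobGraph (an internal
// runtime class) just to submit a job, so any change to JobGraph's structure or
// serialized form can break client/cluster compatibility.
import org.apache.flink.runtime.jobgraph.JobGraph;

public class SubmissionCoupling {

    /** Hypothetical placeholder for whatever cluster client implementation is in use. */
    public interface SomeClusterClient {
        void submitJob(JobGraph jobGraph) throws Exception;
    }

    public static void submit(JobGraph jobGraph, SomeClusterClient client) throws Exception {
        // Submission only works if the client's and the cluster's notion of
        // JobGraph (and its serialization) still match.
        client.submitJob(jobGraph);
    }
}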
Some scenarios that come to mind:
Flink client binary compatibility with remote cluster: This would include
RemoteStreamEnvironment, RESTClusterClient, etc. Users should be able to
submit a job built with 1.6.x, using the 1.6.x binaries, to a remote
Flink 1.7.x or later cluster. The use case for
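From the user's side, that scenario is essentially the following (a sketch only; host, port, and jar path are made up, and the program would be compiled against the 1.6.x binaries):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RemoteSubmitExample {
    public static void main(String[] args) throws Exception {
        // Program built with Flink 1.6.x, pointed at a cluster that may run 1.7.x or later.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
                "flink-jobmanager.example.com", // remote cluster address (assumption)
                8081,                           // REST / JobManager port (assumption)
                "/path/to/user-job.jar");       // jar built against the 1.6.x binaries

        env.fromElements(1, 2, 3).print();

        env.execute("cross-version submission example");
    }
}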
Hi,
I think this is a very good discussion to have.
Flink is becoming part of more and more production deployments, and more
tools are being built around it.
The question is: do we want to (or can we) make parts of the
control/maintenance/monitoring API stable, such that external
systems/frameworks can rel
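As one concrete example of what such external systems depend on today, a tool might poll the REST monitoring API roughly like this (host and port are made up; /jobs/overview follows Flink's documented REST endpoints, but treat the details as illustrative):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class JobsOverviewPoller {
    public static void main(String[] args) throws Exception {
        // Fetch the cluster-wide job overview as JSON.
        URL url = new URL("http://flink-jobmanager.example.com:8081/jobs/overview");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }

        // External dashboards / schedulers parse this JSON; any change to the path or
        // the response schema breaks them, which is what "stable" would have to cover.
        System.out.println(body);
    }
}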
I think this discussion needs specific examples as to what should be
possible, as it is otherwise too vague / open to interpretation.
For example, "job submission" may refer to CLI invocations continuing to
work (i.e. CLI arguments), or being able to use a 1.6 client against a
1.7 cluster, which
Hi,
I wanted to bring back the topic of backward compatibility with respect to
all/most of the user-facing aspects of Flink. Please note that this isn't
limited to the programming API, but also includes job submission and
management.
As can be seen in [1], changes in these areas cause difficulties
dow