Thanks Piotr for driving this plugin mechanism. Pluggability is quite
important for the Flink ecosystem.

Piotr Nowojski <pi...@ververica.com> wrote on Wed, Apr 10, 2019 at 5:48 PM:

> Hi Flink developers,
>
> I would like to introduce a new plugin loading mechanism that we are
> working on right now [1]. The idea is quite simple: isolate services in
> separate independent class loaders, so that classes and dependencies do not
> leak between them and/or Flink runtime itself. Currently we have quite some
> problems with dependency convergence in multiple places. Some of them we
> are solving by shading (built-in file systems, metrics), some we are
> forcing users to deal with (custom file systems/metrics), and others we
> do not solve (connectors - we do not support using different Kafka versions
> in the same job/SQL). With proper plugins that are loaded in independent
> class loaders, those issues could be solved in a generic way.
>
> The current scope of the implementation targets only file systems, without
> a centralised Plugin architecture and with Plugins that are only
> “statically” initialised at TaskManager and JobManager start-up. More or
> less, we are just replacing the way FileSystem implementations are
> discovered and loaded.
>
> In the future this idea could be extended to different modules, like
> metric reporters, connectors, functions/data types (especially in SQL),
> state backends, internal storage or other future efforts. Some of those
> would be easier than others: the metric reporters would require a
> smaller refactor, while connectors would require bigger API design
> discussions, which I would like to avoid at the moment. Nevertheless, I
> wanted to reach out with this idea so that if other potential use cases
> pop up in the future, more people will be aware.
>
> Piotr Nowojski
>
>
> [1] https://issues.apache.org/jira/browse/FLINK-11952
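
For context, the isolation described above - resolving a plugin's classes from its own jars before delegating to the parent class loader, so plugin dependencies cannot clash with Flink's - can be sketched roughly like this. This is only an illustration of the child-first class-loading idea; the class name and the parent-first package list are hypothetical, not Flink's actual implementation:

```java
import java.net.URL;
import java.net.URLClassLoader;

// Hypothetical sketch of a child-first ("plugin-first") class loader.
// Classes are looked up in the plugin's own jars before delegating to the
// parent, so each plugin can ship its own dependency versions.
public class PluginClassLoader extends URLClassLoader {

    // Packages that must always come from the parent class loader
    // (illustrative list; a real one would also include the plugin API).
    private static final String[] PARENT_FIRST = {"java.", "javax."};

    public PluginClassLoader(URL[] pluginJars, ClassLoader parent) {
        super(pluginJars, parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        synchronized (getClassLoadingLock(name)) {
            Class<?> c = findLoadedClass(name);
            if (c == null) {
                if (isParentFirst(name)) {
                    c = super.loadClass(name, false);
                } else {
                    try {
                        // Try the plugin's own jars first ...
                        c = findClass(name);
                    } catch (ClassNotFoundException e) {
                        // ... and only fall back to the parent if not found.
                        c = super.loadClass(name, false);
                    }
                }
            }
            if (resolve) {
                resolveClass(c);
            }
            return c;
        }
    }

    private static boolean isParentFirst(String name) {
        for (String prefix : PARENT_FIRST) {
            if (name.startsWith(prefix)) {
                return true;
            }
        }
        return false;
    }
}
```

Each plugin would get its own instance of such a loader, so two plugins (or a plugin and the Flink runtime) can use conflicting versions of the same library without shading.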



-- 
Best Regards

Jeff Zhang
