Hi,
As Yang Wang pointed out, you should use the new plugins mechanism.
If it doesn’t work, first make sure that you are shipping/distributing the
plugin jars correctly, i.e. the correct plugins directory structure on the
client machine. Next, make sure that the cluster nodes have the same correct
setup (see the layout sketched below).
You could give the new plugin mechanism a try.
Create a new directory named "myhdfs" under $FLINK_HOME/plugins, and then
put your filesystem-related jars in it.
Different plugins are loaded by separate classloaders to avoid conflicts.
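For example, the layout would look roughly like this on every machine that
runs a Flink process (the plugin directory name "myhdfs" and the jar name
below are just placeholders):

    $FLINK_HOME/plugins/
        myhdfs/
            my-hdfs1-filesystem.jar   (your plugin jar, bundling its own dependencies)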
Best,
Yang
On Wed, Dec 18, 2019 at 6:46 PM vino yang wrote:
Hi ouywl,
>> Thread.currentThread().getContextClassLoader();
What does this statement mean in your program?
In addition, can you share your implementation of the customized file
system plugin and the related exception?
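(For reference, a filesystem plugin in this mechanism is typically a
FileSystemFactory implementation discovered through META-INF/services. A
minimal sketch, where every package, class, and scheme name is a placeholder
rather than the implementation being asked about:)

    // Sketch only: package, class, and scheme names are placeholders,
    // not the actual implementation being asked about.
    package com.example.fs;

    import java.io.IOException;
    import java.net.URI;

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.core.fs.FileSystem;
    import org.apache.flink.core.fs.FileSystemFactory;

    public class MyHdfs1FileSystemFactory implements FileSystemFactory {

        @Override
        public String getScheme() {
            // The URI scheme this plugin handles, e.g. paths like "hdfs1://...".
            return "hdfs1";
        }

        @Override
        public void configure(Configuration config) {
            // Receives the Flink configuration; hdfs1-specific settings go here.
        }

        @Override
        public FileSystem create(URI fsUri) throws IOException {
            // Build and return the FileSystem for hdfs1 here, using only classes
            // bundled inside the plugin jar so they stay isolated in the
            // plugin's own classloader.
            throw new UnsupportedOperationException("sketch only");
        }
    }

The factory is then registered inside the plugin jar via a service file:

    # src/main/resources/META-INF/services/org.apache.flink.core.fs.FileSystemFactory
    com.example.fs.MyHdfs1FileSystemFactory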
Best,
Vino
On Wed, Dec 18, 2019 at 4:59 PM ouywl wrote:
Hi all,
We have implemented a filesystem plugin to sink data to hdfs1, while the YARN cluster that Flink runs on uses hdfs2. So when the job is running, the JobManager uses the conf of hdfs1 to create the filesystem, and the filesystem plugin conflicts with the Flink components. We im
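(A rough illustration of the setup being described, with all host names,
ports, and paths invented; the URI scheme registered by the custom plugin is
unknown, so a plain hdfs:// URI is used as a stand-in: the job's sink writes
to hdfs1 while the Flink-on-YARN deployment itself points at hdfs2.)

    import org.apache.flink.api.common.serialization.SimpleStringEncoder;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

    public class TwoHdfsClustersSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // The source is irrelevant here; just something to sink.
            DataStream<String> stream = env.fromElements("a", "b", "c");

            // Output goes to the hdfs1 cluster (host, port, and path are invented).
            StreamingFileSink<String> sink = StreamingFileSink
                .forRowFormat(
                    new Path("hdfs://hdfs1-namenode:8020/warehouse/out"),
                    new SimpleStringEncoder<String>("UTF-8"))
                .build();
            stream.addSink(sink);

            // Meanwhile the Flink deployment on YARN (checkpoints, HA, ...) keeps
            // using hdfs2, e.g. in flink-conf.yaml:
            //   state.checkpoints.dir: hdfs://hdfs2-namenode:8020/flink/checkpoints
            env.execute("two-hdfs-clusters-sketch");
        }
    }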