Hi Averell,
Hadoop itself can provide the S3AFileSystem directly. When you deploy a Flink job
on YARN, the hadoop classpath is added to the JobManager/TaskManager
automatically. That means you can use the "s3a" scheme without putting
"flink-s3-fs-hadoop.jar" in the plugins directory.
In the K8s deployment, we do not have the hadoop classpath in the image, so you
need to provide the S3 filesystem via the plugin mechanism instead.
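To make the difference concrete, here is a minimal sketch of a job that points
its checkpoints at S3 through the "s3a" scheme. The bucket and paths are made
up, and it assumes an S3AFileSystem is already visible to the runtime (as it is
on YARN via the Hadoop classpath):

import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class S3aCheckpointSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint every minute; the "s3a://" URI is resolved by whatever
        // S3AFileSystem implementation the runtime classloader can see.
        env.enableCheckpointing(60_000);
        env.setStateBackend(new FsStateBackend("s3a://my-bucket/flink/checkpoints"));

        env.fromElements(1, 2, 3).print();
        env.execute("s3a-checkpoint-sketch");
    }
}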
Hi David, Yang,
Thanks. But I just tried to submit the same job on a YARN cluster using that
same uberjar, and it was successful. I don't have flink-s3-fs-hadoop.jar
anywhere in the lib or plugin folder.
Thanks and regards,
Averell
Hi Averell,
I think David's answer is right. The user uber jar is loaded lazily by the
user classloader, so the filesystem in it cannot be recognized by the Flink
system classloader. You need to put the jar directly into the /opt/flink/lib
directory or load it via the plugin mechanism.
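Inside the container the two options look roughly like this (the jar version is
only an assumption; match it to your Flink release):

/opt/flink/lib/flink-s3-fs-hadoop-1.10.0.jar                    <- on the system classpath
/opt/flink/plugins/s3-fs-hadoop/flink-s3-fs-hadoop-1.10.0.jar   <- loaded by the plugin mechanism (preferred)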
Best,
Yang
David Magalhães wrote on Sat, Apr 25, 2020 at 12:05 AM:
I think the classloaders for the uber jar and for Flink itself are different. Not
sure if this is the right explanation, but that is why you need to
add flink-s3-fs-hadoop inside the plugins folder in the cluster.
On Fri, Apr 24, 2020 at 4:07 PM Averell wrote:
Thank you Yun Tang.
Building my own Docker image as suggested solved my problem.
However, I don't understand why I need that when I already have the
s3-hadoop jar included in my uber jar?
Thanks.
Regards,
Averell
Hi Averell,
Please build your own Flink Docker image with the S3 plugin, as described in the official docs [1].
[1]
https://ci.apache.org/projects/flink/flink-docs-stable/ops/deployment/docker.html#using-plugins
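For reference, the setup described on that page amounts to a Dockerfile along
these lines (the 1.10.0 version tag is just an assumption; use the one matching
your cluster):

FROM flink:1.10.0
RUN mkdir -p ./plugins/s3-fs-hadoop && \
    cp ./opt/flink-s3-fs-hadoop-1.10.0.jar ./plugins/s3-fs-hadoop/

Build and push this image, then point your JobManager/TaskManager deployments at it.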
Best
Yun Tang
From: Averell
Sent: Thursday, April 23, 2020 20:58
To: user@flink.apache.org