Finally, I worked out how to build a custom Flink image. The Dockerfile is
simply:
>
> FROM flink:1.13.1-scala_2.11
> ADD ./flink-s3-fs-hadoop-1.13.1.jar /opt/flink/plugins
> ADD ./flink-s3-fs-presto-1.13.1.jar /opt/flink/plugins
>
The wrong Dockerfile was:
> FROM apache/flink:1.13.1-scala_2.11
It also turned out I had set a wrong high-availability.storageDir:
s3://flink-test/recovery works, but s3:///flink-test/recovery does not; the
extra / had to be removed.
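For anyone hitting the same issue, the corrected setting in flink-conf.yaml looks like this (bucket and path as in this thread; this is a sketch of the one relevant key, not a complete HA configuration):

```yaml
# Correct: scheme://bucket/path -- exactly two slashes after "s3:"
high-availability.storageDir: s3://flink-test/recovery

# Wrong: a third slash leaves the bucket (authority) part empty
# high-availability.storageDir: s3:///flink-test/recovery
```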
Joshua Fan wrote on Thu, Aug 5, 2021 at 10:43 AM:
Hi Robert, Tobias
I have tried many ways to build and validate the image.
1. Put the s3 dependency into a plugins subdirectory; the Dockerfile content
is below:
> FROM apache/flink:1.13.1-scala_2.11
> ADD ./flink-s3-fs-hadoop-1.13.1.jar
> /opt/flink/plugins/s3-hadoop/flink-s3-fs-hadoop-1.13.1.jar
> AD
Hey Joshua,
Can you first check whether the Docker image you've built works by running
it locally on your machine?
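A local smoke test along these lines can catch packaging mistakes before deploying to k8s (the image tag my-flink:1.13.1-s3 is just an example name, not from the thread):

```shell
# Build the image from your Dockerfile (tag name is hypothetical)
docker build -t my-flink:1.13.1-s3 .

# Check that the s3 jars ended up where you expect inside the image
docker run --rm my-flink:1.13.1-s3 ls -R /opt/flink/plugins

# Start a local JobManager and watch the startup logs for
# classloading or filesystem errors before going to Kubernetes
docker run --rm my-flink:1.13.1-s3 jobmanager
```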
I would recommend putting the s3 filesystem files into the plugins [1]
directory to avoid classloading issues.
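To make that concrete: the plugin mechanism loads each filesystem from its own subdirectory under /opt/flink/plugins, so the image should end up with a layout like this (the subdirectory names themselves are arbitrary):

```
/opt/flink/plugins/
├── s3-fs-hadoop/
│   └── flink-s3-fs-hadoop-1.13.1.jar
└── s3-fs-presto/
    └── flink-s3-fs-presto-1.13.1.jar
```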
Also, you don't need to build custom images if you want to use build-i
Hi All
I want to build a custom Flink image to run on k8s; below is my Dockerfile
content:
> FROM apache/flink:1.13.1-scala_2.11
> ADD ./flink-s3-fs-hadoop-1.13.1.jar /opt/flink/lib
> ADD ./flink-s3-fs-presto-1.13.1.jar /opt/flink/lib
>
I just put the s3 fs dependency into {flink home}/lib, and