Hi,
> Since we have "flink-s3-fs-hadoop" in the plugins folder, and it is therefore
> dynamically loaded on task/job manager startup (we are also keeping Flink's
> default inverted class-loading strategy), shouldn't the Hadoop dependencies
> be loaded parent-first? (based on
> classloader.
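For context, Flink's plugin mechanism loads each sub-directory of `plugins/` in its own dedicated classloader, separate from the user-code classloader and from the child-first resolution configured via `classloader.resolve-order`. A typical layout looks roughly like this (the sub-directory name and version placeholder below are just examples):

```
flink-dist/
├── conf/
│   └── flink-conf.yaml    # classloader.resolve-order: child-first (default)
└── plugins/
    └── s3-fs-hadoop/
        └── flink-s3-fs-hadoop-<version>.jar
```

So, as I understand it, the Hadoop classes bundled inside the plugin jar are visible only through that plugin's classloader, not through the parent/user-code hierarchy that the parent-first patterns apply to.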
Hi!
We're working on a project where data is being written to S3 within a Flink
application.
When running the integration tests locally in IntelliJ (using
MiniClusterWithClientResource), all the dependencies are resolved correctly and
the program executes as expected. However, when the fat JAR is submitte