Interact with different S3 buckets from a shared Flink cluster

2020-06-17 Thread Ricardo Cardante
Hi! We are working on a use case where we have a shared Flink cluster on which multiple teams deploy their jobs. With this strategy, we are facing a challenge regarding the interaction with S3: given that we have already configured S3 for the state backend (through flink-conf.yaml), every tim…
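
One possible angle on this question (a minimal sketch, not from the thread itself): Flink's flink-s3-fs-hadoop plugin mirrors S3A keys from flink-conf.yaml into Hadoop's configuration, and Hadoop S3A supports per-bucket settings, so each team's bucket can carry its own credentials while the cluster-wide state backend keeps the shared default. This assumes fs.s3a.* keys are forwarded by the plugin; the bucket names and placeholder credentials below are hypothetical:

    # flink-conf.yaml -- shared cluster defaults (hypothetical values)
    state.backend: rocksdb
    state.checkpoints.dir: s3a://shared-flink-state/checkpoints

    # Hadoop S3A per-bucket syntax: fs.s3a.bucket.<bucket-name>.<option>
    # lets each team's bucket use its own credentials
    fs.s3a.bucket.team-a-data.access.key: TEAM_A_ACCESS_KEY
    fs.s3a.bucket.team-a-data.secret.key: TEAM_A_SECRET_KEY
    fs.s3a.bucket.team-b-data.access.key: TEAM_B_ACCESS_KEY
    fs.s3a.bucket.team-b-data.secret.key: TEAM_B_SECRET_KEY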

Flink 1.10 - Hadoop libraries integration with plugins and class loading

2020-02-26 Thread Ricardo Cardante
sts" val kafkaTest = "org.apache.kafka" %% "kafka" % kafkaVersion % "test" classifier "test" val kafkaStreamsTest = "org.apache.kafka" % "kafka-streams" % kafkaVersion % "test" classifier "test" val kafkaClientsTest = "org.apache.kafka" % "kafka-clients" % kafkaVersion % "test" classifier "test" } - This is the Dockerfile: - FROM flink:1.10.0-scala_2.12 RUN cp /opt/flink/opt/flink-metrics-prometheus-1.10.0.jar /opt/flink/lib RUN mkdir /opt/flink/plugins/flink-s3-fs-presto /opt/flink/plugins/flink-s3-fs-hadoop RUN cp /opt/flink/opt/flink-s3-fs-presto-1.10.0.jar /opt/flink/plugins/flink-s3-fs-presto/ RUN cp /opt/flink/opt/flink-s3-fs-hadoop-1.10.0.jar /opt/flink/plugins/flink-s3-fs-hadoop/ RUN chown -R flink:flink /opt/flink/plugins/flink-s3-fs-presto/ RUN chown -R flink:flink /opt/flink/plugins/flink-s3-fs-hadoop/ - -- Best regards, Ricardo Cardante.