Hi!
We are working on a use case where we have a shared Flink cluster to which
multiple teams deploy their jobs. With this strategy, we are facing a
challenge regarding the interaction with S3. Given that we already configured
S3 for the state backend (through flink-conf.yaml; a sketch of that
configuration is included below), every time
sts"
val kafkaTest = "org.apache.kafka" %% "kafka" % kafkaVersion % "test" classifier "test"
val kafkaStreamsTest = "org.apache.kafka" % "kafka-streams" % kafkaVersion % "test" classifier "test"
val kafkaClientsTest = "org.apache.kafka" % "kafka-clients" % kafkaVersion % "test" classifier "test"
}
-
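For reference, the state backend configuration we mention above is along these
lines in flink-conf.yaml (the backend type, bucket name and credentials here are
placeholders, not our exact values):
-
state.backend: rocksdb
state.checkpoints.dir: s3://<state-bucket>/checkpoints
state.savepoints.dir: s3://<state-bucket>/savepoints
s3.access-key: <access-key>
s3.secret-key: <secret-key>
-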
This is the Dockerfile:
-
FROM flink:1.10.0-scala_2.12
# Make the Prometheus metrics reporter available on the classpath
RUN cp /opt/flink/opt/flink-metrics-prometheus-1.10.0.jar /opt/flink/lib
# Install the S3 filesystems as plugins
RUN mkdir /opt/flink/plugins/flink-s3-fs-presto /opt/flink/plugins/flink-s3-fs-hadoop
RUN cp /opt/flink/opt/flink-s3-fs-presto-1.10.0.jar /opt/flink/plugins/flink-s3-fs-presto/
RUN cp /opt/flink/opt/flink-s3-fs-hadoop-1.10.0.jar /opt/flink/plugins/flink-s3-fs-hadoop/
RUN chown -R flink:flink /opt/flink/plugins/flink-s3-fs-presto/
RUN chown -R flink:flink /opt/flink/plugins/flink-s3-fs-hadoop/
-
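We copy the S3 filesystems under plugins/ rather than lib/ so that Flink loads
them through its plugin class loader. The image itself is built with a plain
docker build; the image name and tag below are just placeholders:
-
docker build -t <registry>/flink-shared:1.10.0 .
-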
--
Best regards,
Ricardo Cardante.