I have a few libraries that do IO on large amounts of data using
akka-http. Now I have a Flink application where I need to use those
libraries.

How do I use a common actor system across my Flink app? As far as I
understand, Flink apps are distributed, so the same pipeline could end up
creating multiple actor systems on multiple JVMs. The same JVM could also
be shared between multiple pipelines, in which case having multiple actor
systems would mean having a lot of threads.

Am I right in assuming that this is a problem? If so, how can I avoid it?
More specifically, is there a way to share an actor system between the
apps running on the same JVM? I know Flink has a way to get the same
execution context using its Async I/O API (
https://ci.apache.org/projects/flink/flink-docs-master/dev/stream/operators/asyncio.html#async-io-api
).
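In case it helps clarify what I'm after: the core of it seems to be a
per-JVM lazy singleton that every operator instance on that JVM can look up
(e.g. from a RichFunction's open()). Here is a minimal sketch of that idiom.
The class names (SharedSystemHolder, SharedSystem) are made up, and
SharedSystem is just a stand-in for a real akka.actor.ActorSystem so the
example is self-contained; this is a sketch of the pattern, not working
Flink/Akka code.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: one lazily-created instance per JVM, shared by all callers.
// SharedSystem stands in for an Akka ActorSystem (assumption, not real API).
public class SharedSystemHolder {
    // Counts constructions, just to demonstrate the instance is created once.
    static final AtomicInteger created = new AtomicInteger();

    static final class SharedSystem {
        SharedSystem() { created.incrementAndGet(); }
    }

    // Initialization-on-demand holder: the JVM guarantees Lazy.INSTANCE is
    // created exactly once, on first access, with no explicit locking.
    private static final class Lazy {
        static final SharedSystem INSTANCE = new SharedSystem();
    }

    public static SharedSystem get() { return Lazy.INSTANCE; }

    public static void main(String[] args) {
        // Two "operators" on the same JVM see the same instance.
        SharedSystem a = get();
        SharedSystem b = get();
        System.out.println(a == b);          // true
        System.out.println(created.get());   // 1
    }
}
```

My worry is whether something like this is safe in Flink given how tasks
are deployed, and when the shared instance should be shut down if multiple
pipelines on the JVM are using it.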

Thanks,
Bhashit
