Hi,
If you are expressing a job that contains three source->sink pairs that
are isolated from each other, then Flink supports this form of job.
It is not much different from a single source->sink; you just go from one
DataStream to three DataStreams.
For example:

*DataStream ds1 = xxx;*
*ds1.addSink(...);*
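
To make that concrete, here is a minimal sketch of one job holding three
isolated Kafka -> S3 pipelines, assuming the KafkaSource and FileSink
connectors (Flink 1.14+). The bootstrap servers, bucket path, topic names,
and group ids are placeholders for illustration, not values from the thread:

    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.api.common.serialization.SimpleStringEncoder;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.file.sink.FileSink;
    import org.apache.flink.connector.kafka.source.KafkaSource;
    import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class MultiTopicToS3Job {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.setParallelism(10);

            // One independent source->sink pair per topic, all in the same job.
            for (String topic : new String[] {"topic-a", "topic-b", "topic-c"}) {
                KafkaSource<String> source = KafkaSource.<String>builder()
                        .setBootstrapServers("kafka:9092")            // placeholder
                        .setTopics(topic)
                        .setGroupId("consumer-" + topic)              // placeholder
                        .setStartingOffsets(OffsetsInitializer.earliest())
                        .setValueOnlyDeserializer(new SimpleStringSchema())
                        .build();

                // Writing to s3:// requires the flink-s3-fs-hadoop (or -presto) plugin.
                FileSink<String> sink = FileSink
                        .forRowFormat(new Path("s3://my-bucket/" + topic), // placeholder
                                new SimpleStringEncoder<String>("UTF-8"))
                        .build();

                DataStream<String> stream = env.fromSource(
                        source, WatermarkStrategy.noWatermarks(), "kafka-" + topic);
                stream.sinkTo(sink);
            }

            env.execute("three-kafka-topics-to-s3");
        }
    }

Since the three pipelines share no edges, they form separate failover
regions, so with Flink's default region failover strategy a failure in one
pipeline should not restart the other two.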
How can I configure 1 Flink job (stream execution environment, parallelism set
to 10) to have multiple Kafka sources, where each has its own sink to S3?
For example, let's say the sources are:
- Kafka Topic A - Consumer (10 partitions)
- Kafka Topic B - Consumer (10 partitions)
- Kafka Topic C - Consumer (10 partitions)