How can I configure one Flink job (a single stream execution environment with parallelism set 
to 10) to have multiple Kafka sources, where each source has its own sink to S3?

For example, let's say the sources are:

- Kafka Topic A - Consumer (10 partitions)
- Kafka Topic B - Consumer (10 partitions)
- Kafka Topic C - Consumer (10 partitions)

And let's say the sinks are:

- BucketingSink to S3 in bucket: s3://kafka_topic_a/<data files>
- BucketingSink to S3 in bucket: s3://kafka_topic_b/<data files>
- BucketingSink to S3 in bucket: s3://kafka_topic_c/<data files>

Between source 1 and sink 1 I would like to perform processing unique to that pipeline; 
likewise, the path from source 2 to sink 2 and the path from source 3 to sink 3 should each 
have their own unique processing.

How can this be achieved? Is there an example?
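
To make the question concrete, here is a rough sketch of what I picture the job looking like. The class name, topic names, Kafka properties, and the per-topic `processA`/`processB`/`processC` helpers are placeholders of mine, and I am assuming the universal `FlinkKafkaConsumer` together with `BucketingSink`; I am not sure whether building three independent source-to-sink chains in one `StreamExecutionEnvironment` like this is the intended pattern, which is really what I am asking.

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

import java.util.Properties;

public class ThreeTopicsToS3Job {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(10);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka:9092"); // placeholder
        props.setProperty("group.id", "my-consumer-group");   // placeholder

        // Pipeline 1: topic A -> processing unique to A -> its own S3 bucket
        DataStream<String> topicA =
            env.addSource(new FlinkKafkaConsumer<>("topic_a", new SimpleStringSchema(), props));
        topicA.map(ThreeTopicsToS3Job::processA)
              .addSink(new BucketingSink<>("s3://kafka_topic_a/"));

        // Pipeline 2: topic B, independent of pipeline 1
        DataStream<String> topicB =
            env.addSource(new FlinkKafkaConsumer<>("topic_b", new SimpleStringSchema(), props));
        topicB.map(ThreeTopicsToS3Job::processB)
              .addSink(new BucketingSink<>("s3://kafka_topic_b/"));

        // Pipeline 3: topic C, independent of pipelines 1 and 2
        DataStream<String> topicC =
            env.addSource(new FlinkKafkaConsumer<>("topic_c", new SimpleStringSchema(), props));
        topicC.map(ThreeTopicsToS3Job::processC)
              .addSink(new BucketingSink<>("s3://kafka_topic_c/"));

        // All three chains are submitted together as a single job graph.
        env.execute("three-topics-to-s3");
    }

    // Placeholders for the per-topic "unique processing".
    private static String processA(String value) { return value; }
    private static String processB(String value) { return value; }
    private static String processC(String value) { return value; }
}
```

Is assembling three independent source-to-sink chains in one `env.execute()` call like this the recommended approach, or is there a better-supported pattern for this kind of job?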
