Hi Amol,

I think if you set the parallelism of the source operator equal to the number of
partitions of the Kafka topic, each source subtask will consume exactly one
partition in your job. But if the number of partitions of the topic changes
dynamically, the 1:1 relationship might break. Maybe @Gordon (CC'd) can give
you more useful information.
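
For example, a minimal sketch assuming a 4-partition topic named "oplog-topic"
and the Kafka 0.11 connector (the topic name, broker address, and group id below
are placeholders you would replace with your own):

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

public class PerPartitionConsumerJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder brokers
        props.setProperty("group.id", "oplog-consumer");          // placeholder group id

        FlinkKafkaConsumer011<String> consumer =
                new FlinkKafkaConsumer011<>("oplog-topic", new SimpleStringSchema(), props);

        // If "oplog-topic" has 4 partitions, setting the source parallelism to 4
        // assigns exactly one partition to each source subtask, so the order
        // within each Kafka partition is preserved inside that subtask.
        env.addSource(consumer)
           .setParallelism(4)
           .print();

        env.execute("per-partition kafka consumer");
    }
}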


Best, Sihua

On 06/25/2018 17:19, Amol S - iProgrammer <am...@iprogrammer.com> wrote:
I have asked the same kind of question on Stack Overflow as well.

Please answer it ASAP.

https://stackoverflow.com/questions/51020018/partition-specific-flink-kafka-consumer

-----------------------------------------------
*Amol Suryawanshi*
Java Developer
am...@iprogrammer.com


*iProgrammer Solutions Pvt. Ltd.*



*Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune - 411016,
MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
www.iprogrammer.com <sac...@iprogrammer.com>
------------------------------------------------

On Mon, Jun 25, 2018 at 2:09 PM, Amol S - iProgrammer <am...@iprogrammer.com>
wrote:

Hello,

I wrote a streaming program using Kafka and Flink to stream the MongoDB
oplog. I need to maintain the order of records within each Kafka
partition. Since global ordering of records is not possible across all
partitions, I need N consumers for N different partitions. Is it possible to
consume data from N different partitions with N Flink Kafka consumers?

Please suggest.

-----------------------------------------------
*Amol Suryawanshi*
Java Developer
am...@iprogrammer.com


*iProgrammer Solutions Pvt. Ltd.*



*Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune - 411016,
MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
www.iprogrammer.com <sac...@iprogrammer.com>
------------------------------------------------
