Hi Gordon,

Gordon: If I understood you correctly, what you are doing is this: while a job
with a Kafka consumer is already running, you want to start a new job that
also uses a Kafka consumer as its source with the same group.id, so that the
topic's messages are routed between the two jobs.

Is this correct? If so, could you briefly explain what your use case is and
why you want to do this?
Giriraj: You understood it almost correctly. The so-called "new job" above is
a new instance of the same job. We are trying to scale the job by spawning
more instances of the same job; that is, we treat the Flink job as a
microservice, and when load on this service/job increases, we would like to
spawn another instance of the same job/service. That is why we expect that,
when the group.id is the same in both jobs, each message should be delivered
to only one of the jobs' consumers. I have also considered scaling the job by
increasing its parallelism (canceling the job and restarting it with a higher
parallelism), but the former approach looked cleaner, more intuitive, and more
seamless to us. A rough sketch of the consumer setup we have in mind is below.
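
To make this concrete, here is roughly what the Kafka source of each job
instance would look like. This is only a minimal sketch: the topic name
("orders"), the group id ("order-service"), the broker address, and the use
of the FlinkKafkaConsumer connector are placeholders, not our exact code.

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class OrderServiceJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Every instance of this job is submitted with the same group.id,
        // expecting each Kafka message to be consumed by only one instance.
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.setProperty("group.id", "order-service");           // shared across instances

        FlinkKafkaConsumer<String> source =
                new FlinkKafkaConsumer<>("orders", new SimpleStringSchema(), props);

        env.addSource(source)
           .map(record -> "processed: " + record) // stand-in for the real business logic
           .print();

        env.execute("order-service instance");
    }
}

Each new instance of the service would simply be another submission of this
same job, with nothing changed except that it runs alongside the existing one.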

I would really appreciate your thoughts on this.

Apologies for the delayed response.

--
Giriraj


