Hi, Amol
Yes, I think it is. But env.setParallelism(80) sets a global parallelism for all
operators; which operator needs which parallelism really depends on your job.
Instead, it is enough to set the parallelism on the source operator only.
Like below, it will give you 80 Kafka consumers [also …
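A minimal sketch of the idea (the topic name "mongo-oplog", bootstrap servers,
group id, and the FlinkKafkaConsumer010 connector version are placeholders, not
taken from this thread):

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;

public class KafkaSourceParallelismExample {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "oplog-consumer");          // placeholder

        // Set the parallelism only on the source operator instead of calling
        // env.setParallelism(80): downstream operators keep their own parallelism,
        // and with an 80-partition topic you get one consumer subtask per partition.
        DataStream<String> oplog = env
                .addSource(new FlinkKafkaConsumer010<>("mongo-oplog", new SimpleStringSchema(), props))
                .setParallelism(80);

        oplog.print();
        env.execute("kafka source parallelism example");
    }
}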
Thanks zhangminglei,
Does setting env.setParallelism(80) mean I have created 80 Kafka consumers? And
if so, can I change env.setParallelism(80) to any number, i.e. keep number of
partitions = env.setParallelism, or do I need to restart my job each time I set
a new parallelism …
Hi, Amol
As @Sihua said. In my case, too: if the Kafka topic has 80 partitions, I set
the source operator parallelism of the job to 80 as well.
Cheers
Minglei
> On 25 Jun 2018, at 5:39 PM, sihua zhou wrote:
>
> Hi Amol,
>
> I think if you set the parallelism of the source node equal to the number of
> the p …
Hi Amol,
I think if you set the parallelism of the source node equal to the number of
partitions of the Kafka topic, you get one Kafka consumer per partition in your
job. But if the number of partitions of the topic is dynamic, the 1:1
relationship might break. I think maybe @Gor …
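If the partition count can grow, one hedged option (assuming Flink 1.4+, and
accepting that new partitions are assigned to the already running subtasks,
which gives up the strict 1:1 mapping) is to enable the consumer's partition
discovery. A minimal sketch of the relevant properties, with placeholder values:

import java.util.Properties;

public class PartitionDiscoveryProps {
    // Sketch only: properties that enable periodic partition discovery on the
    // Kafka consumer (Flink 1.4+), so partitions added later are picked up by
    // the running subtasks. Newly discovered partitions are assigned to existing
    // subtasks, so a subtask may then read more than one partition.
    static Properties consumerProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");                // placeholder
        props.setProperty("group.id", "oplog-consumer");                         // placeholder
        props.setProperty("flink.partition-discovery.interval-millis", "30000"); // check every 30 s
        return props;
    }
}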
I have asked the same kind of question on Stack Overflow as well. Please answer
it ASAP:
https://stackoverflow.com/questions/51020018/partition-specific-flink-kafka-consumer
---
*Amol Suryawanshi*
Java Developer
am...@iprogrammer.com
*iProgrammer Solutions P
Hello,
I wrote a streaming program using Kafka and Flink to stream the MongoDB oplog.
I need to maintain the order of the stream within each Kafka partition. Since a
global ordering of records across all partitions is not possible, I need N
consumers for N different partitions. Is it possible to con …
Tzu-Li (Gordon) Tai created FLINK-6713:
--
Summary: Document how to allow multiple Kafka consumers /
producers to authenticate using different credentials
Key: FLINK-6713
URL: https://issues.apache.org/jira
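As background for what such documentation would likely cover: with plain Kafka
clients (0.10.2+), per-client credentials can be supplied through the
"sasl.jaas.config" client property instead of one global JAAS file, so each
consumer or producer can authenticate with its own account. A hedged sketch
with placeholder credentials:

import java.util.Properties;

public class PerClientCredentials {
    // Sketch only: two property sets with different SASL/PLAIN credentials, each
    // of which could be handed to its own Kafka consumer or producer instance.
    // User names and passwords are placeholders.
    static Properties withCredentials(String user, String password) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("security.protocol", "SASL_PLAINTEXT");
        props.setProperty("sasl.mechanism", "PLAIN");
        props.setProperty("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"" + user + "\" password=\"" + password + "\";");
        return props;
    }

    public static void main(String[] args) {
        Properties consumerA = withCredentials("userA", "passwordA");
        Properties consumerB = withCredentials("userB", "passwordB");
        System.out.println(consumerA.getProperty("sasl.jaas.config"));
        System.out.println(consumerB.getProperty("sasl.jaas.config"));
    }
}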