Thanks for the feedback.

> May I ask why you have less partitions than the parallelism? I would be
happy to learn more about your use-case to better understand the
> motivation.

The use case is that topic A contains just a few messages with product
metadata that rarely gets updated, while topic B contains user interactions
with the products (and many more messages). For topic A we thought that one
partition would be sufficient to hold the metadata, while we have 32
partitions for topic B. Due to the load on topic B, we use a parallelism
of 2-8.
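
For anyone following along, the setting from [1] below can be applied on the
KafkaSource builder roughly like this (a sketch against the Flink 1.13 API;
broker address, topic, and group id are placeholders for our setup):

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;

KafkaSource<String> source = KafkaSource.<String>builder()
        .setBootstrapServers("broker:9092")       // placeholder address
        .setTopics("topic-a")                     // our low-volume metadata topic
        .setGroupId("metadata-consumer")          // placeholder group id
        .setValueOnlyDeserializer(new SimpleStringSchema())
        // A positive interval enables dynamic partition discovery; the
        // enumerator then keeps running and never sends the final
        // "no more splits" signal, so idle subtasks stay RUNNING
        // instead of going FINISHED.
        .setProperty("partition.discovery.interval.ms", "10000")
        .build();
```

We would then pass this source to env.fromSource(...) as usual.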

Thanks,
Lars



On Thu, Sep 16, 2021 at 9:09 AM Fabian Paul <fabianp...@ververica.com>
wrote:

> Hi all,
>
> The problem you are seeing, Lars, is somewhat intended behaviour,
> unfortunately. With the batch/stream unification, every Kafka partition is
> treated as a kind of workload assignment. If one subtask receives a signal
> that there is no more workload, it goes into the FINISHED state.
> As already pointed out, this restriction will be lifted in the near future.
>
> I went through the code, and I think in your case you can set the
> following configuration [1], which should give behaviour equivalent to the
> old source. It will prevent the enumerator from sending a final signal
> to the subtasks, so they will not go into the FINISHED state.
>
> May I ask why you have less partitions than the parallelism? I would be
> happy to learn more about your use-case to better understand the
> motivation.
>
> Best,
> Fabian
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/datastream/kafka/#dynamic-partition-discovery