Hi,

In our project we recently upgraded from Flink 1.9 to 1.20.

After the upgrade we saw a problem where two out of four subtasks (in a
task that reads several Kafka topics) were not producing any records. We
read three different topics, each with two partitions, and all of the
partitions receive messages continuously. The problem was fixed by
changing the slot count from four to two.
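
For context, here is a minimal sketch of roughly how the source is wired
up (the broker address, topic names, and group id are placeholders rather
than our real configuration; I am also treating the slot count as the
effective source parallelism here, which may be part of my confusion):

    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.kafka.source.KafkaSource;
    import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class TopicsJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            // Three topics, two partitions each, all producing continuously.
            KafkaSource<String> source = KafkaSource.<String>builder()
                    .setBootstrapServers("kafka:9092")          // placeholder
                    .setTopics("topic-a", "topic-b", "topic-c") // placeholders
                    .setGroupId("my-group")                     // placeholder
                    .setStartingOffsets(OffsetsInitializer.earliest())
                    .setValueOnlyDeserializer(new SimpleStringSchema())
                    .build();

            env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source")
               // With parallelism 4, two subtasks produced nothing; 2 fixed it.
               .setParallelism(2)
               .print();

            env.execute("topics-job");
        }
    }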

I am glad to have this fixed, but I was left wondering why it worked.
Moreover, what would be a reasonable "algorithm" for setting the slot
count in general? E.g. "set the slot count to the partition count of the
topic that has the most partitions".

P.S. I have requested access to the Flink Slack. If you, the reader, can
help me with that, I would appreciate it.

Thanks and regards,
Lauri Mäkinen
