and RELIABLY processed all of the data from that topic (which may
> not be true). This would effectively lead to AT_MOST_ONCE delivery
> guarantees (in other words, we are OK with losing data), which is a
> trade-off that, _in my opinion_, we shouldn't make here.
>
> Best,
Hi all,
Thank you for the replies; they are much appreciated.
I'm sure I'm missing something obvious here, so bear with me...
Fabian, regarding:
"Flink will try to recover from the previous checkpoint which is invalid by
now because the partition is not available anymore."
The above would happ
verer (even if it's an
opt-in capability). Would the Flink community be open to a contribution
that does this?
Best regards,
Constantinos Papadopoulos
On Tue, Sep 14, 2021 at 12:54 PM David Morávek wrote:
> Hi Constantinos,
>
> The partition discovery doesn't support topic /
We are on Flink 1.12.1, we initialize our FlinkKafkaConsumer with a topic
name *pattern*, and we have partition discovery enabled.
When our product scales up, it adds new topics. When it scales down, it
removes topics.
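For reference, our consumer is set up roughly as follows. This is a minimal sketch, not our exact code: the bootstrap address, topic-name pattern, and discovery interval shown here are illustrative placeholders, and it assumes the legacy FlinkKafkaConsumer from the Flink 1.12 Kafka connector.

```java
import java.util.Properties;
import java.util.regex.Pattern;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase;

Properties props = new Properties();
props.setProperty("bootstrap.servers", "kafka:9092"); // placeholder address

// Opt in to periodic discovery of new topics/partitions that match the
// subscription pattern (here every 30 seconds; value is illustrative).
props.setProperty(
        FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS, "30000");

// Subscribe by topic-name *pattern* rather than a fixed topic list, so
// topics created after the job starts are picked up by discovery.
FlinkKafkaConsumer<String> consumer = new FlinkKafkaConsumer<>(
        Pattern.compile("tenant-topic-.*"), // illustrative pattern
        new SimpleStringSchema(),
        props);
```

With this configuration, discovery only ever *adds* topics/partitions that match the pattern; it is exactly the removal side that the question below is about.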
The problem is that the FlinkKafkaConsumer never seems to forget partitions
th
We have a multi-tenancy scenario where:
- the source will be Kafka, and a Kafka partition could contain data
from multiple tenants
- our sink will send data to a different DB instance, depending on the
tenant
Is there a way to prevent slowness in one tenant from slowing other
tenants