Hi Jack,
if your records are already partitioned with respect to the individual
topics and you don't need to compute any global values, then you could
create a separate Flink pipeline (with its own FlinkKafkaConsumer) for
every topic, each running independently. That way, if one of the APIs
degrades, it will automatically only affect the pipeline for that topic.
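Just to illustrate the idea (topic names, broker address, and the
per-topic logic are placeholders, so treat this as a rough sketch rather
than a complete job), such a setup could look roughly like this:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class PerTopicPipelines {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.setProperty("group.id", "per-topic-demo");          // placeholder group id

        // One independent source and downstream chain per topic. If the API
        // behind one topic degrades, backpressure stays confined to that chain
        // and the other topics keep flowing.
        for (String topic : new String[] {"topic-a", "topic-b"}) {
            env.addSource(new FlinkKafkaConsumer<>(topic, new SimpleStringSchema(), props))
                    .name("source-" + topic)
                    .map(value -> "processed: " + value) // stand-in for the real per-topic logic
                    .returns(Types.STRING)
                    .print();
        }

        env.execute("per-topic pipelines");
    }
}

Cheers,
Till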
Hi Till,
> Could you please give me a bit more context? Are you asking how Flink
> realizes exactly once processing guarantees with its connectors?
Thank you very much for your response! Flink has a lot of really cool
ideas :)
I did read more about connectors and I think I can elaborate. The prob
Hi Jack,
I do not fully understand what you want to achieve here. Could you please
give me a bit more context? Are you asking how Flink realizes exactly once
processing guarantees with its connectors?
Cheers,
Till
On Fri, Jul 31, 2020 at 8:56 PM Jack Phelan wrote:
Scenario
===
A partition that Flink is reading:
[ 1 - 2 - 3 - 4 - 5 - 6 - 7 | 8 _ 9 _ 10 _ 11 | 12 ~ 13 ]
[         Committed         |    In flight    |  unread ]
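For concreteness, the consumer setup I have in mind is roughly like the
sketch below (topic name, group id, and checkpoint interval are just
placeholders). My understanding is that the consumed offsets only get
committed back to Kafka once a checkpoint completes, which is what the
"Committed" marker above is meant to show:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class CheckpointedConsumer {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // Checkpoint every 10 seconds; in the diagram above, offsets 1-7 would
        // belong to the last completed checkpoint.
        env.enableCheckpointing(10_000);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.setProperty("group.id", "my-group");                // placeholder group id

        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("events", new SimpleStringSchema(), props);
        // Commit the consumed offsets back to Kafka only when a checkpoint
        // completes, so the "Committed" marker trails the records in flight.
        consumer.setCommitOffsetsOnCheckpoints(true);

        env.addSource(consumer).print();
        env.execute("checkpointed consumer");
    }
}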
Kafka basically breaks pieces off the end of the queue and shoves them
downstream for processing?
So suppose whil