`transaction.timeout.ms` is a producer setting, so you can increase it
accordingly.
Note that brokers bound the range via `transaction.max.timeout.ms`;
thus, you may need to increase this broker config, too.
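For illustration, the two settings might look like this (the 15-minute values are examples matching the numbers discussed later in this thread, not recommendations):

```properties
# Producer side: passed in the Properties given to the Flink Kafka producer.
# Must be <= the broker's transaction.max.timeout.ms.
transaction.timeout.ms=900000

# Broker side (server.properties): the upper bound the broker enforces on
# client-requested transaction timeouts.
transaction.max.timeout.ms=900000
```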
-Matthias
On 8/12/19 2:43 AM, Piotr Nowojski wrote:
Hi,
Ok, I see. You can try to rewrite your logic (or maybe the record schema, by
adding some ID fields) to manually deduplicate the records after processing
them with at-least-once semantics. Such a setup is usually simpler, with
slightly better throughput and significantly better latency (end-to-end).
Hi Piotr,
Thanks a lot. I need exactly-once in my use case, but rather than risk losing
data, at-least-once is more acceptable when an error occurs.
Best,
Tony Wei
Piotr Nowojski wrote on Mon, Aug 12, 2019 at 3:27 PM:
Hi,
Yes, if it’s due to transaction timeout you will lose the data.
Whether you can fall back to at-least-once depends on Kafka, not on Flink,
since it is Kafka that times out those transactions, and I don't see anything
in the documentation that could override this [1]. You might try dis
Hi,
I had the same exception recently. I want to confirm: if it is due to a
transaction timeout,
then I will lose that data. Am I right? Can I make it fall back to
at-least-once semantics in
this situation?
Best,
Tony Wei
Piotr Nowojski wrote on Wed, Mar 21, 2018 at 10:28 PM:
Hi,
But that’s exactly the case: the producer’s transaction timeout starts when the
external transaction starts, but FlinkKafkaProducer011 keeps an active Kafka
transaction open for the whole period between checkpoints.
As I wrote in the previous message:
> in case of failure, your timeout must also b
Hi,
Please increase `transaction.timeout.ms` to a greater value or decrease Flink’s
checkpoint interval; I’m pretty sure the issue here is that those two values
are overlapping. I think that’s even visible in the screenshots: the first
checkpoint started at 14:28:48 and ended at 14:30:43, w
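To make the overlap concrete, here is a small sketch using the timestamps above (the comparison itself is an illustration of the reasoning, not an official Flink or Kafka formula):

```java
import java.time.Duration;
import java.time.LocalTime;

public class TimeoutOverlap {
    // A Kafka transaction stays open from one checkpoint to the next, so the
    // transaction timeout must exceed the checkpoint interval plus the time a
    // checkpoint takes to complete, or the broker aborts it in between.
    static boolean timeoutLargeEnough(Duration txTimeout,
                                      Duration ckptInterval,
                                      Duration ckptDuration) {
        return txTimeout.compareTo(ckptInterval.plus(ckptDuration)) > 0;
    }

    public static void main(String[] args) {
        // Checkpoint duration from the screenshots: 14:28:48 -> 14:30:43 (1m55s).
        Duration ckptDuration =
            Duration.between(LocalTime.of(14, 28, 48), LocalTime.of(14, 30, 43));
        Duration interval = Duration.ofMinutes(15);
        Duration txTimeout = Duration.ofMinutes(15); // equal to the interval: too small
        System.out.println(timeoutLargeEnough(txTimeout, interval, ckptDuration)); // false
    }
}
```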
Hi Piotr,
We have set the producer's `transaction.timeout.ms` to 15 minutes and have used
the default setting for the broker (15 mins).
As Flink's checkpoint interval is 15 minutes, it is not a situation where
Kafka's timeout is smaller than Flink's checkpoint interval.
As our first checkpoint just takes
Hi,
What’s your Kafka transaction timeout setting? Please check both the Kafka
producer configuration (the `transaction.timeout.ms` property) and the Kafka
broker configuration. The most likely cause of such an error message is that
Kafka's timeout is smaller than Flink’s checkpoint interval and transactions