The thing is, when I try to register the handler, it doesn't work. It's
easy to register the deserialization handler because there is a static
final constant I can pass. But when I pass the string
"default.production.exception.handler" it doesn't work. (That actually might
Michael,
It depends on the semantics you want. Regarding retries in general: as
long as the producer retries internally, you would not even notice.
Only after retries are exhausted is an exception thrown.
Kafka Streams allows you to implement a handler for this (cf
https://kafka.apache.org/11/d
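To make this concrete, here is a minimal sketch of such a handler and how
to register it (assuming Kafka Streams 1.1+, where
`StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG` exists;
the class name is made up for illustration):

```java
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.streams.errors.ProductionExceptionHandler;

// Skip-and-continue semantics: drop records that could not be produced.
public class MyProductionExceptionHandler implements ProductionExceptionHandler {

    @Override
    public ProductionExceptionHandlerResponse handle(final ProducerRecord<byte[], byte[]> record,
                                                     final Exception exception) {
        // Keep the application running; return FAIL to shut down instead.
        return ProductionExceptionHandlerResponse.CONTINUE;
    }

    @Override
    public void configure(final Map<String, ?> configs) {
        // Nothing to configure in this sketch.
    }
}
```

Registering it is easier via the constant than the raw string:

```java
import java.util.Properties;

import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
// The constant resolves to "default.production.exception.handler";
// pass the handler class (or its fully-qualified name), not an instance.
props.put(StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG,
          MyProductionExceptionHandler.class);
```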
Jozsef,
Your question is a little unclear to me.
> To detect lost messages
For what topology?
>> KTable inputTable = builder.table("inputTopic",
>> Consumed.with(...).filter(...));
The code you show contains a `filter()` that can remove records. Could
this be the issue?
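For illustration, a sketch of what such a topology does (the serdes and
the predicate are assumptions on my side):

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;

StreamsBuilder builder = new StreamsBuilder();

// For a KTable, filter() forwards a tombstone (delete) for every update
// that fails the predicate, so those records appear "lost" downstream.
KTable<String, String> inputTable = builder
        .table("inputTopic", Consumed.with(Serdes.String(), Serdes.String()))
        .filter((key, value) -> value != null && !value.isEmpty());
```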
It's also unclear to m
Sam,
Thanks for your email. This is a very interesting find. I did not double
check the code, but your reasoning makes sense to me. Note that caching
was _not_ introduced to reduce the writes to RocksDB, but to reduce the
writes to the changelog topic and to reduce the number of records sent
downstream.
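For reference, the cache is sized via `cache.max.bytes.buffering`; a
sketch (the 10 MB value is just an example):

```java
import java.util.Properties;

import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
// A larger cache deduplicates more updates per key before flushing,
// reducing writes to the changelog topic and the number of records
// forwarded downstream. Setting the size to 0 disables caching.
props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 10 * 1024 * 1024L);
```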
+1
* Ran unit tests
* 3-node cluster. Ran simple tests.
Thanks,
Harsha
On Sat, Jun 23, 2018 at 9:07 AM, Ted Yu wrote:
> +1
>
> Checked signatures.
>
> Ran unit test suite.
>
> On Fri, Jun 22, 2018 at 4:56 PM, Vahid S Hashemian
> <vahidhashem...@us.ibm.com> wrote:
>
> > +1 (non-binding)