`default.production.exception.handler` is the correct parameter.

It should not make any difference whether you pass in the plain String or
`StreamsConfig#DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG` -- the
constant is just that String...

When you start the application, it should print the config in the logs.
Can you double-check there whether it did pick up the handler?
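
For example, this is how I would set it (MyProductionExceptionHandler is
a placeholder for your own class; both calls are equivalent):

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    final Properties props = new Properties();
    // via the plain String -- must be exactly this, all lowercase
    props.put("default.production.exception.handler",
              MyProductionExceptionHandler.class);
    // or via the constant (it resolves to the same String)
    props.put(StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG,
              MyProductionExceptionHandler.class);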

-Matthias

On 6/24/18 6:42 PM, Michael Eugene wrote:
> The thing about that is, when I try to register the handler, it doesn't work.
> It's easy to register the deserialization handler because there is a static
> final constant variable I can pass. But when I pass the string
> "default.production.exception.handler" it doesn't work. (That actually might
> not be the exact string, but I did get the string from the source code on
> GitHub.) Has anyone actually used this?
> 
>> On Jun 24, 2018, at 8:03 PM, Matthias J. Sax <matth...@confluent.io> wrote:
>>
>> Michael,
>>
>> It depends on the semantics you want. About retries in general: as long
>> as the producer retries internally, you would not even notice. Only
>> after retries are exhausted is an exception thrown.
>>
>> Kafka Streams lets you implement a handler for this case (cf.
>> https://kafka.apache.org/11/documentation/streams/developer-guide/config-streams.html#default-production-exception-handler)
>> so you can react to the error as you wish.
>>
>> You can either use a provided handler or implement a custom one, and
>> the handler can either skip over the record or let Kafka Streams stop
>> processing.
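>>
>> A minimal custom handler could look like this (class name is mine; it
>> just logs and skips the failed record):
>>
>>   import java.util.Map;
>>   import org.apache.kafka.clients.producer.ProducerRecord;
>>   import org.apache.kafka.streams.errors.ProductionExceptionHandler;
>>
>>   public class LogAndSkipProductionExceptionHandler
>>           implements ProductionExceptionHandler {
>>
>>       @Override
>>       public void configure(final Map<String, ?> configs) {
>>           // nothing to configure
>>       }
>>
>>       @Override
>>       public ProductionExceptionHandlerResponse handle(
>>               final ProducerRecord<byte[], byte[]> record,
>>               final Exception exception) {
>>           System.err.println("Failed to produce to topic "
>>               + record.topic() + ": " + exception);
>>           // skip this record and keep processing;
>>           // return FAIL to stop the application instead
>>           return ProductionExceptionHandlerResponse.CONTINUE;
>>       }
>>   }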
>>
>> It might make sense to write the record to a "retry topic", but it
>> depends on the error. If the whole cluster is down, of course you
>> cannot write to the retry topic either. If the output topic is
>> currently under-replicated and does not allow for new writes, though,
>> the "retry topic" might still be available.
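>>
>> As a sketch of that idea (the topic name "retry-topic" is a
>> placeholder, and I am assuming Streams hands its config, including
>> "bootstrap.servers", to configure()): a handler can try to forward the
>> failed record to a retry topic with its own producer and only fail
>> hard if that write fails too:
>>
>>   import java.util.Map;
>>   import java.util.Properties;
>>   import org.apache.kafka.clients.producer.KafkaProducer;
>>   import org.apache.kafka.clients.producer.ProducerRecord;
>>   import org.apache.kafka.common.serialization.ByteArraySerializer;
>>   import org.apache.kafka.streams.errors.ProductionExceptionHandler;
>>
>>   public class RetryTopicProductionExceptionHandler
>>           implements ProductionExceptionHandler {
>>
>>       private KafkaProducer<byte[], byte[]> retryProducer;
>>
>>       @Override
>>       public void configure(final Map<String, ?> configs) {
>>           final Properties props = new Properties();
>>           // reuse the application's bootstrap servers for our producer
>>           props.put("bootstrap.servers", configs.get("bootstrap.servers"));
>>           retryProducer = new KafkaProducer<>(props,
>>               new ByteArraySerializer(), new ByteArraySerializer());
>>       }
>>
>>       @Override
>>       public ProductionExceptionHandlerResponse handle(
>>               final ProducerRecord<byte[], byte[]> record,
>>               final Exception exception) {
>>           try {
>>               // this write can fail, too, e.g. if the whole cluster is down
>>               retryProducer.send(new ProducerRecord<>("retry-topic",
>>                   record.key(), record.value())).get();
>>               return ProductionExceptionHandlerResponse.CONTINUE;
>>           } catch (final Exception e) {
>>               return ProductionExceptionHandlerResponse.FAIL;
>>           }
>>       }
>>   }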
>>
>>
>> For exactly-once, producer retries are set to Integer.MAX_VALUE, and
>> thus the application would retry practically forever.
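>>
>> Just for reference, enabling exactly-once is a single config:
>>
>>   props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG,
>>             StreamsConfig.EXACTLY_ONCE);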
>>
>>
>> -Matthias
>>
>>> On 6/16/18 1:52 PM, Michael Eugene wrote:
>>> Hi, I am trying to understand when to retry sending messages to topics and
>>> when to start trying to send to "retry" topics. The scenario is basically:
>>>
>>> 1. A Kafka Streams application is consuming from a topic and sending to a
>>> topic. "retries" is set at the default of 10.
>>>
>>> 2a. After 10 retries, does it make sense to then try to publish to another
>>> "retry topic"?
>>>  2a1. What mechanism is there to know it's the 10th retry, and to then
>>> start sending to a "retry topic" after the 10th?
>>>
>>> 2b. Or, after 10 retries, is that message simply done if it was not
>>> successful? Since there is no real difference between sending to a "retry
>>> topic" and sending to a non-retry topic, why not just set retries to a
>>> high value (like 100)?
>>>
>>> 3. On an implementation level (I've read the Kafka docs; I find them a bit
>>> high level), can someone throw a nugget out there about how exactly-once
>>> semantics would erase the need for a "retry topic"?
>>>
>>> If you have time to answer any part of the above question, thank you in
>>> advance.
>>>
>>
