[ 
https://issues.apache.org/jira/browse/KAFKA-5158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-5158.
------------------------------------
    Resolution: Duplicate

> Options for handling exceptions during processing
> -------------------------------------------------
>
>                 Key: KAFKA-5158
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5158
>             Project: Kafka
>          Issue Type: Task
>          Components: streams
>            Reporter: Eno Thereska
>            Priority: Major
>
> Imagine the app-level processing of a (non-corrupted) record fails (e.g., the 
> user attempted to make an RPC to an external system, and this call failed). How 
> can you process such failed records in a scalable way? For example, imagine 
> you need to implement a retry policy such as "retry with exponential 
> backoff". Here, you have two problems: 1. you can't really pause 
> processing of a single record, because that would pause the processing of the full 
> stream (bottleneck!), and 2. there is no straightforward way to "sort" failed 
> records based on their "next retry time".
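The retry-ordering idea the description raises can be illustrated outside of Kafka Streams itself. The following is a minimal, self-contained Java sketch (not an API from this ticket or from Kafka): failed records are parked in a priority queue ordered by a computed "next retry time", so the main stream keeps flowing while due records can be polled off the front. The `FailedRecord` class and `backoffMs` helper are hypothetical names introduced here for illustration.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class RetrySketch {
    // Hypothetical holder for a failed record and its retry state.
    static class FailedRecord {
        final String key;
        final int attempt;
        final long nextRetryTimeMs;
        FailedRecord(String key, int attempt, long nextRetryTimeMs) {
            this.key = key;
            this.attempt = attempt;
            this.nextRetryTimeMs = nextRetryTimeMs;
        }
    }

    static final long BASE_MS = 100;     // delay for attempt 0
    static final long CAP_MS = 60_000;   // upper bound on the delay

    // Exponential backoff: delay doubles per attempt, capped at CAP_MS.
    // The shift is bounded to avoid long overflow for large attempt counts.
    static long backoffMs(int attempt) {
        long delay = BASE_MS << Math.min(attempt, 20);
        return Math.min(delay, CAP_MS);
    }

    public static void main(String[] args) {
        long now = 0; // fixed "current time" for a deterministic example
        // Ordering failed records by next retry time answers problem 2:
        // the record that becomes due soonest is always at the head.
        PriorityQueue<FailedRecord> retryQueue =
                new PriorityQueue<>(Comparator.comparingLong(r -> r.nextRetryTimeMs));
        retryQueue.add(new FailedRecord("a", 3, now + backoffMs(3)));
        retryQueue.add(new FailedRecord("b", 0, now + backoffMs(0)));
        retryQueue.add(new FailedRecord("c", 1, now + backoffMs(1)));

        while (!retryQueue.isEmpty()) {
            FailedRecord r = retryQueue.poll();
            System.out.println(r.key + " due at t=" + r.nextRetryTimeMs + "ms");
        }
    }
}
```

In a real Streams topology the queue would more likely be a separate "retry" topic (or a state store) so that parked records survive failures and rebalances; the in-memory queue above only demonstrates the ordering, not the durability.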



--
This message was sent by Atlassian Jira
(v8.20.10#820010)