Ah, good point.

Should we consider altering the serializer interface to permit not sending
the record?
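
To make that concrete, a purely hypothetical shape for such a change could
be something like the sketch below. None of this exists today -- the real
org.apache.kafka.common.serialization.Serializer#serialize must return a
byte[] (where null already means a null value) -- and the interface and
method names here are made up for illustration:

    import java.util.Optional;
    import org.apache.kafka.common.serialization.Serializer;

    // Hypothetical sketch only: a serializer variant whose result can say
    // "do not produce this record at all" instead of returning bytes.
    public interface SkippableSerializer<T> extends Serializer<T> {

        // Optional.empty() would mean "drop the record", which is distinct
        // from serializing to null (a tombstone) or to new byte[0].
        Optional<byte[]> trySerialize(String topic, T data);

        @Override
        default byte[] serialize(String topic, T data) {
            // Fallback for callers that only know the existing contract;
            // note this collapses "skip" into "null value", which is
            // exactly the ambiguity a handler-based approach avoids.
            return trySerialize(topic, data).orElse(null);
        }
    }

The producer and Streams internals would of course also need to understand
that result; a handler callback leaves the Serializer contract untouched.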

On Wed, Dec 5, 2018 at 9:23 PM Kamal Chandraprakash <
kamal.chandraprak...@gmail.com> wrote:

> Matt,
>
>     That's a good point. If these cases are handled in the serializer, then
> one cannot continue the stream processing by skipping the record.
> To continue, you may have to send an empty serialized key/value (new
> byte[0]) downstream on hitting the error, which may cause unintended
> results.
>
>
>
>
>
> On Wed, Dec 5, 2018 at 8:41 PM Matt Farmer <m...@frmr.me> wrote:
>
> > Hi there,
> >
> > Thanks for this KIP.
> >
> > What’s the thinking behind doing this in ProductionExceptionHandler
> versus
> > handling these cases in your serializer implementation?
> >
> > On Mon, Dec 3, 2018 at 1:09 AM Kamal Chandraprakash <
> > kamal.chandraprak...@gmail.com> wrote:
> >
> > > Hello dev,
> > >
> > >   I hope to initiate the discussion for KIP-399: Extend
> > > ProductionExceptionHandler to cover serialization exceptions.
> > >
> > > KIP:
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-399%3A+Extend+ProductionExceptionHandler+to+cover+serialization+exceptions
> > > JIRA: https://issues.apache.org/jira/browse/KAFKA-7499
> > >
> > > All feedback will be highly appreciated.
> > >
> > > Thanks,
> > > Kamal Chandraprakash
> > >
> >
>
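
For anyone skimming the thread: the serializer-level workaround Kamal
describes above (swallowing the error and emitting an empty payload so
processing continues) would look roughly like the sketch below. The wrapper
class and its name are made up for illustration; the point is that new
byte[0] is a real, non-null value, so downstream consumers cannot
distinguish a "skipped" record from a legitimately empty one.

    import java.util.Map;
    import org.apache.kafka.common.serialization.Serializer;

    // Illustrative only: swallow serialization failures and emit an empty
    // payload so the stream keeps running.
    public class SkipOnErrorSerializer<T> implements Serializer<T> {
        private final Serializer<T> inner;

        public SkipOnErrorSerializer(Serializer<T> inner) {
            this.inner = inner;
        }

        @Override
        public void configure(Map<String, ?> configs, boolean isKey) {
            inner.configure(configs, isKey);
        }

        @Override
        public byte[] serialize(String topic, T data) {
            try {
                return inner.serialize(topic, data);
            } catch (RuntimeException e) {
                // The record is still produced, just with an empty value --
                // which is exactly the "unintended results" concern above.
                return new byte[0];
            }
        }

        @Override
        public void close() {
            inner.close();
        }
    }

A ProductionExceptionHandler callback, by contrast, can report the failure
and let the record be dropped without producing a placeholder, which is
what the KIP is after.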
