Pushkar,
You are not wrong. Indeed, any deserialization error that happens
during the poll() method will cause your code to be interrupted without
much information about which offset failed. A workaround would be to try
to parse the message contained in the SerializationException.
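A minimal sketch of that workaround, assuming the client puts something
like "... partition <topic>-<p> at offset <n> ..." in the exception
message (that wording is an implementation detail, not API, and may change
between client versions; the class and method names here are hypothetical):

    import java.time.Duration;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;
    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.errors.SerializationException;

    public class PollWithSkip {
        // Assumption: the exception message names the partition and offset
        // of the record that could not be deserialized.
        private static final Pattern DESER_ERROR =
                Pattern.compile("partition (\\S+)-(\\d+) at offset (\\d+)");

        static ConsumerRecords<String, String> pollSkippingBadRecords(
                Consumer<String, String> consumer, Duration timeout) {
            while (true) {
                try {
                    return consumer.poll(timeout);
                } catch (SerializationException e) {
                    Matcher m = DESER_ERROR.matcher(String.valueOf(e.getMessage()));
                    if (!m.find()) {
                        throw e; // can't locate the poisoned record, so rethrow
                    }
                    TopicPartition tp = new TopicPartition(
                            m.group(1), Integer.parseInt(m.group(2)));
                    // Seek one past the record that failed to deserialize,
                    // then poll again.
                    consumer.seek(tp, Long.parseLong(m.group(3)) + 1);
                }
            }
        }
    }

For what it's worth, newer clients (Apache Kafka 2.8+, via KIP-334, if I
remember correctly) throw RecordDeserializationException, a
SerializationException subclass that exposes topicPartition() and offset()
directly, so the fragile message parsing above is no longer needed there.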
Hi Ricardo,
Probably this is more complicated than that, since the exception occurs
during Consumer.poll() itself: there is no ConsumerRecord for the
application to process, so the application doesn't know the offset of the
record on which the poll failed.
Pushkar,
Kafka uses the concept of offsets to identify the order of each record
within the log. But this concept is more powerful than it looks.
Committed offsets are also used to keep track of which records have been
successfully read and which ones have not. When you commit an offset in
t...
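For illustration, here is a sketch of committing the offset of each record
explicitly after it has been handled (process() stands in for hypothetical
application logic). By convention the committed offset is the next offset
to read, hence record.offset() + 1; a restarted consumer in the same group
resumes from there:

    import java.util.Collections;
    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class CommitPerRecord {
        static void processAndCommit(Consumer<String, String> consumer,
                                     ConsumerRecord<String, String> record) {
            process(record);
            // Committing offset + 1 marks everything up to and including
            // this record as successfully read.
            consumer.commitSync(Collections.singletonMap(
                    new TopicPartition(record.topic(), record.partition()),
                    new OffsetAndMetadata(record.offset() + 1)));
        }

        // Hypothetical application hook.
        static void process(ConsumerRecord<String, String> record) {}
    }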
Hi Gerbrand,
thanks for the update. However, if I dig more into it, the issue is that
the schema registry is not accessible, so the error is coming during the
poll operation itself.
So this is not really a bad event; the event can't be deserialized at all
because the schema registry is unreachable.
Hello Pushkar,
I'd split records/events into categories based on the error (a sketch
follows below):
- Events that can be parsed or otherwise handled correctly, i.e. good events
- Fatal errors, like parsing errors, or empty or incorrect values, i.e. bad
events
- Non-fatal errors, like database-connection failures, I/O failures, ou...
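Something like this, as a minimal sketch of that split. It assumes the
consumer is configured with ByteArrayDeserializer, so deserialization
happens in application code where each record can be categorized; the
topic names, exception types, and the deserialize()/process() hooks are
all hypothetical:

    import java.time.Duration;
    import java.util.Collections;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.TopicPartition;

    public class CategorizingConsumer {
        // Hypothetical error types for the two failure categories.
        static class BadEventException extends RuntimeException {}
        static class RetriableException extends RuntimeException {}

        static void runLoop(KafkaConsumer<byte[], byte[]> consumer,
                            KafkaProducer<byte[], byte[]> dlqProducer) {
            consumer.subscribe(Collections.singletonList("events"));
            while (true) {
                for (ConsumerRecord<byte[], byte[]> rec :
                        consumer.poll(Duration.ofSeconds(1))) {
                    try {
                        process(deserialize(rec.value()));   // good event
                    } catch (BadEventException bad) {
                        // Fatal for this record only: park it on a
                        // dead-letter topic and move on.
                        dlqProducer.send(new ProducerRecord<>(
                                "events.dlq", rec.key(), rec.value()));
                    } catch (RetriableException retriable) {
                        // Non-fatal (e.g. database down): don't skip the
                        // record; rewind to it and refetch on the next poll
                        // (add a backoff here in real code).
                        consumer.seek(new TopicPartition(
                                rec.topic(), rec.partition()), rec.offset());
                        break;
                    }
                }
                consumer.commitSync();
            }
        }

        // Hypothetical application hooks.
        static Object deserialize(byte[] value) { return value; }
        static void process(Object event) {}
    }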
Hi All,
This is what I am observing: we have a consumer which polls data from a
topic, does the processing, then polls again, and this keeps happening
continuously.
At one point, there was some bad data on the topic which could not be
consumed by the consumer, probably because it couldn't deserialize the event