Yes, the Scala consumers have been removed in 2.0.0, which simplifies some of this. The following commit was an initial step in unifying the exception handling:
https://github.com/apache/kafka/commit/96bcfdfc7c9aac075635b2034e65e412a725672e

But more can be done, as you mentioned.

Ismael

On 5 Jul 2018 9:36 am, "Stanislav Kozlovski" <stanis...@confluent.io> wrote:

Hey Ismael,

It is only slightly related - my PR would attach two new attributes and also
touch upon deserialization exceptions. But this PR did provide me with some
insight: maybe the best approach would be to make `InvalidRecordException` a
public exception instead of introducing a new one - I did not realize it was
not publicly exposed.

Does the following comment still apply?

    InvalidMessageException extends CorruptRecordException for temporary
    compatibility with the old Scala clients.
    * We want to update the server side code to use and catch the new
      CorruptRecordException.
    * Because ByteBufferMessageSet.scala and Message.scala are used in both
      server and client code, having InvalidMessageException extend
      CorruptRecordException allows us to change server code without
      affecting the client.

I can see that the `ByteBufferMessageSet` and `Message` Scala classes are no
longer present in the codebase. As far as I'm aware, the old Scala clients
were removed in 2.0.0, so we can update the server-side code to use
`CorruptRecordException` while changing `InvalidRecordException` and exposing
it to the public. WDYT?

I will also make sure not to expose the cause of the exception when it is not
needed; maybe I'll remove the `cause` attribute outright.

On Thu, Jul 5, 2018 at 4:55 PM Ismael Juma <ism...@juma.me.uk> wrote:

> Thanks for the KIP, Stanislav. The following PR looks related:
>
> https://github.com/apache/kafka/pull/4093/files
>
> Ismael
>
> On Thu, Jul 5, 2018 at 8:44 AM Stanislav Kozlovski <stanis...@confluent.io>
> wrote:
>
> > Hey everybody,
> >
> > I just created a new KIP about exposing more information in exceptions
> > caused by consumer record deserialization/validation. Please have a look
> > at it; it is a very short page.
> >
> > I am working under the assumption that all invalid record or
> > deserialization exceptions in the consumer pass through the `Fetcher`
> > class. Please confirm whether that is true; otherwise I might miss some
> > places where the exceptions are raised in my implementation.
> >
> > One concern I have is the name of the second exception -
> > `InoperativeRecordException`. I would have named it
> > `InvalidRecordException`, but that name is taken. The `Fetcher` class
> > catches `InvalidRecordException` (here
> > <https://github.com/apache/kafka/blob/c5b00d20d3703b7fc4358b7258d5d6adb890136f/clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java#L1081>
> > and here
> > <https://github.com/apache/kafka/blob/c5b00d20d3703b7fc4358b7258d5d6adb890136f/clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java#L1092>)
> > and re-raises it as `KafkaException`, which exposes it as a non-retriable
> > exception to the user (`InvalidRecordException` extends
> > `RetriableException`, but `KafkaException` doesn't).
> > A suggestion I got for an alternative name was
> > `InvalidFetchRecordException`. Please chime in if you have ideas.
> >
> > Confluence page: KIP-334
> > <https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=87297793>
> > JIRA Issue: KAFKA-5682 <https://issues.apache.org/jira/browse/KAFKA-5682>
> >
> > --
> > Best,
> > Stanislav

--
Best,
Stanislav
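
For illustration only, below is a rough sketch of how an application could use
an exception that carries the partition and offset of the offending record,
which is the kind of usage the KIP is aiming to enable. All names here
(`FaultyRecordException`, `partition()`, `offset()`) are placeholders, not a
settled API - the final name is exactly what is being debated above - and the
exception class is defined locally in the sketch, so the current consumer will
never actually throw it; it only shows the intended pattern.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.KafkaException;
    import org.apache.kafka.common.TopicPartition;

    public class SkipBadRecordSketch {

        // Placeholder for the exception discussed in this thread: a public,
        // non-retriable exception carrying the partition and offset of the
        // record that failed validation/deserialization. The real name and
        // shape are still undecided; this class is purely illustrative and
        // is NOT thrown by the current consumer.
        public static class FaultyRecordException extends KafkaException {
            private final TopicPartition partition;
            private final long offset;

            public FaultyRecordException(TopicPartition partition, long offset, Throwable cause) {
                super("Record at " + partition + " offset " + offset + " could not be read", cause);
                this.partition = partition;
                this.offset = offset;
            }

            public TopicPartition partition() { return partition; }
            public long offset() { return offset; }
        }

        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "kip-334-demo");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic"));
                while (true) {
                    try {
                        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                        records.forEach(r -> System.out.println(r.value()));
                    } catch (FaultyRecordException e) {
                        // Because the exception exposes the partition and offset,
                        // the application can skip past the bad record and keep
                        // consuming instead of failing permanently.
                        consumer.seek(e.partition(), e.offset() + 1);
                    }
                }
            }
        }
    }

The point of exposing the offset is exactly the seek in the catch block: today
the wrapping `KafkaException` gives the application no way to find and skip
the bad record, so the consumer is stuck on it.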