greyp9 commented on code in PR #10366:
URL: https://github.com/apache/nifi/pull/10366#discussion_r2399274210


##########
nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/WriteAvroResultWithSchema.java:
##########
@@ -58,7 +58,11 @@ public void flush() throws IOException {
     @Override
     public Map<String, String> writeRecord(final Record record) throws IOException {
         final GenericRecord rec = AvroTypeUtil.createAvroRecord(record, schema);
-        dataFileWriter.append(rec);
+        try {
+            dataFileWriter.append(rec);
+        } catch (final DataFileWriter.AppendWriteException e) {
+            throw new IOException(e);
+        }

Review Comment:
   That's reasonable; thanks.
   
   I'm not familiar with the reason for the "catch all" in `AbstractRecordStreamKafkaMessageConverter`.
   
   To me, the problem seems to be that the Avro writer implementation throws a particular exception class that is not visible on the classpath of the Kafka implementation.  So we can't act based on that particular exception.
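   For illustration, a minimal, self-contained sketch of why wrapping helps here (all class names below are stand-ins, not the actual NiFi/Avro/Kafka code): once the writer wraps the library-specific exception, the caller can react using only `java.io.IOException`, with no compile-time dependency on Avro.

```java
import java.io.IOException;

// Sketch of the pattern in the diff above; all names are stand-ins.
public class WrapForCallerSketch {

    // Stand-in for Avro's DataFileWriter.AppendWriteException, a
    // RuntimeException the caller's module cannot reference directly.
    static class AppendWriteException extends RuntimeException {
        AppendWriteException(final String msg) {
            super(msg);
        }
    }

    // Writer side (has the Avro dependency): wrap the library-specific
    // runtime exception in a plain IOException, as the diff does.
    static void writeRecord(final String record) throws IOException {
        try {
            if (record == null) {
                throw new AppendWriteException("bad record");
            }
        } catch (final AppendWriteException e) {
            throw new IOException(e);
        }
    }

    // Caller side (e.g. a Kafka converter without Avro on its classpath):
    // it only needs java.io.IOException to react to the failure.
    public static void main(final String[] args) {
        try {
            writeRecord(null);
        } catch (final IOException e) {
            System.out.println("handled as IOException");
        }
    }
}
```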
   
   Another variation would be for AvroWriter to throw `MalformedRecordException` instead of `IOException`, as that better conveys the particular problem (bad data).
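   A minimal, self-contained sketch of that variation (again, all class names are stand-ins rather than the actual NiFi classes): the writer-specific runtime exception is rethrown as a checked exception signalling "bad data" instead of an I/O failure, with the original cause preserved.

```java
// Sketch of the MalformedRecordException variation; all names are stand-ins.
public class MalformedRecordSketch {

    // Stand-in for Avro's DataFileWriter.AppendWriteException
    static class AppendWriteException extends RuntimeException {
        AppendWriteException(final String msg) {
            super(msg);
        }
    }

    // Stand-in for org.apache.nifi.serialization.MalformedRecordException
    static class MalformedRecordException extends Exception {
        MalformedRecordException(final String msg, final Throwable cause) {
            super(msg, cause);
        }
    }

    // The variation: rethrow the writer-specific runtime exception as a
    // checked exception that conveys "bad data" rather than an I/O failure.
    static void writeRecord(final String record) throws MalformedRecordException {
        try {
            if (record == null) {
                throw new AppendWriteException("bad record");
            }
        } catch (final AppendWriteException e) {
            throw new MalformedRecordException("Record is malformed", e);
        }
    }

    public static void main(final String[] args) {
        try {
            writeRecord(null);
        } catch (final MalformedRecordException e) {
            // The original cause is preserved for diagnostics
            System.out.println("cause: " + e.getCause().getMessage());
        }
    }
}
```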
   
   There are potential side effects to either path forward; hopefully others in the community will chime in.
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
