Kevin Tseng created FLINK-31951:
-----------------------------------

             Summary: Mix schema record source creates corrupt record
                 Key: FLINK-31951
                 URL: https://issues.apache.org/jira/browse/FLINK-31951
             Project: Flink
          Issue Type: Bug
          Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
    Affects Versions: 1.17.0, 1.18.0
            Reporter: Kevin Tseng
This appears to be an unintended side effect of how the AvroDeserializationSchema class was written. We do not always control which records arrive on a Kafka topic. In the current implementation, if AvroDeserializationSchema encounters a record byte array that does not conform to the specified Schema / SpecificRecord type, subsequent records will be deserialized incorrectly.

The issue originates in how {code:java}AvroDeserializationSchema.deserialize{code} handles exceptions, and in how {code:java}AvroDeserializationSchema.checkAvroInitialized{code} handles initialization of the Decoder object.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
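The failure mode can be illustrated without Avro itself. If the decoder is created once during initialization and reused across calls, and it buffers ahead from its input, then a record that fails mid-decode can leave stale buffered bytes behind, so the next (well-formed) record is read from leftover state. The sketch below is an assumption-laden illustration of that pattern only; the {{StatefulDecoder}} class and its method names are hypothetical and are not Flink or Avro APIs:

{code:java}
import java.io.ByteArrayInputStream;
import java.io.IOException;

// Hypothetical stand-in for a reused binary decoder: it buffers ahead
// from the underlying stream, so a failed or partial read leaves stale
// bytes behind for the next message.
class StatefulDecoder {
    private final byte[] buffer = new byte[16];
    private int pos = 0, limit = 0;
    private ByteArrayInputStream in;

    // Mirrors swapping the backing byte array for the next Kafka record;
    // note that bytes already buffered from the previous record survive.
    void setInput(ByteArrayInputStream in) {
        this.in = in;
    }

    int readByte() throws IOException {
        if (pos == limit) {
            limit = in.read(buffer, 0, buffer.length);
            pos = 0;
            if (limit <= 0) throw new IOException("unexpected EOF");
        }
        return buffer[pos++] & 0xFF;
    }
}

public class DecoderStateDemo {
    public static void main(String[] args) throws IOException {
        // Created once, analogous to initialization happening only on first use.
        StatefulDecoder decoder = new StatefulDecoder();

        // Record 1: malformed/longer than expected; decode consumes only part
        // of it (simulated by reading a single byte, then "failing").
        decoder.setInput(new ByteArrayInputStream(new byte[] {1, 2, 3, 4}));
        decoder.readByte();

        // Record 2: well-formed, but the decoder still holds bytes 2..4
        // buffered from record 1, so it returns a stale byte instead of 9.
        decoder.setInput(new ByteArrayInputStream(new byte[] {9}));
        System.out.println(decoder.readByte()); // prints 2, not 9
    }
}
{code}

Under this reading, re-creating (or fully resetting) the decoder after a failed {code:java}deserialize{code} call would prevent a single bad record from corrupting the records that follow it.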