Hi all,
In flink-avro, flink-csv and flink-json we have implementations of
SerializationSchema/DeserializationSchema for the org.apache.flink.types.Row
type. In particular, I'm referring to:

   - org.apache.flink.formats.json.JsonRowSerializationSchema
   - org.apache.flink.formats.json.JsonRowDeserializationSchema
   - org.apache.flink.formats.avro.AvroRowSerializationSchema
   - org.apache.flink.formats.avro.AvroRowDeserializationSchema
   - org.apache.flink.formats.csv.CsvRowDeserializationSchema
   - org.apache.flink.formats.csv.CsvRowSerializationSchema

These classes were used by the old table planner, but the current table planner
no longer uses the Row type internally, so they are no longer referenced from
the flink-table modules.

Because these classes are exposed (some carry the @PublicEvolving annotation),
there may be users relying on them from the DataStream APIs, for example to
convert an input stream of JSON records from Kafka into Row instances.
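
For reference, that use case would look roughly like the minimal sketch below.
It is only an illustration, not code from any Flink module: the topic name,
field names/types, Kafka properties, and the use of the (also deprecated)
FlinkKafkaConsumer are all assumptions made for the example.

import java.util.Properties;

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.formats.json.JsonRowDeserializationSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.types.Row;

public class JsonRowFromKafka {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Row layout of the incoming JSON; field names and types are made up for this example
        TypeInformation<Row> rowTypeInfo =
                Types.ROW_NAMED(new String[] {"id", "name"}, Types.LONG, Types.STRING);

        // Deserialize each Kafka record's JSON payload into a Row
        JsonRowDeserializationSchema schema =
                new JsonRowDeserializationSchema.Builder(rowTypeInfo).build();

        // Hypothetical Kafka connection settings
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "json-row-example");

        DataStream<Row> rows =
                env.addSource(new FlinkKafkaConsumer<>("input-topic", schema, props));

        rows.print();
        env.execute("JSON-to-Row example");
    }
}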

Do you have any opinions about deprecating these classes in 1.15 and then
dropping them in 1.16? Or are you using them? If so, could you describe your
use case?

Thank you,

FG

-- 

Francesco Guardiani | Software Engineer

france...@ververica.com


