This was presented at Flink Forward Berlin 2017 - see the slide deck here:
https://smarthi.github.io/flink-forward-berlin-2017-moving-beyond-moving-bytes/#/19
You should be able to leverage the Confluent/Hortonworks schema registries from
Flink pipelines.
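
On the second part of the question: the Confluent pattern frames each Kafka record as a magic byte (0x00), a 4-byte big-endian schema id, and then the Avro-encoded payload. Confluent's own KafkaAvroSerializer does this for you, but if you serialise the POJO yourself (e.g. inside a Flink SerializationSchema) you can prepend the header by hand. A minimal, dependency-free sketch of just the framing step (the schema id and payload bytes here are placeholders, not real registry output):

```java
import java.nio.ByteBuffer;

public class ConfluentFraming {

    // The Confluent wire format starts every record with this magic byte.
    private static final byte MAGIC_BYTE = 0x0;

    // Prepend the magic byte and the 4-byte big-endian schema id
    // to an already Avro-encoded payload.
    public static byte[] frame(int schemaId, byte[] avroPayload) {
        return ByteBuffer.allocate(1 + 4 + avroPayload.length)
                .put(MAGIC_BYTE)
                .putInt(schemaId) // ByteBuffer writes big-endian by default
                .put(avroPayload)
                .array();
    }

    public static void main(String[] args) {
        // Hypothetical schema id 42 and a dummy 3-byte Avro payload.
        byte[] framed = frame(42, new byte[] {1, 2, 3});
        System.out.println(framed.length);  // 1 (magic) + 4 (id) + 3 (payload)
        System.out.println(framed[0]);      // magic byte
    }
}
```

The bytes returned by frame() are what you would hand to the Kafka sink as the record value; the schema id itself still has to come from registering (or looking up) the schema in the registry.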
On Wed, Jan 9, 2019 at 4:14 PM Elliot West wrote:
Hello,
What is the recommended flink streaming approach for serialising a POJO to
Avro according to a schema, and pushing the subsequent byte array into a
Kafka sink? Also, is there any existing approach for prepending the schema
id to the payload (following the Confluent pattern)?
Thanks,
Elliot