Our data's schema is defined by our users and is not known at compile time.

All data arrives via a single Kafka topic and is serialized with the same
serialization technology (still to be decided).

We want to use King.com's RBEA technique to process this data in different
ways at runtime (depending on its schema), using a single topology/DAG.

Therefore, each message passing through the DAG will have a different
schema.
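
To make that concrete, here's a rough, self-contained sketch of the shape I
have in mind: a single generic envelope type, so that one DataStream can carry
records of any user-defined schema. The names (DynamicRecord, schemaId) are
just placeholders I've invented for this email, and fromElements() stands in
for the real Kafka source:

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class DynamicSchemaSketch {

    // Generic envelope: every message carries a schema id plus an untyped
    // payload, so a single DataStream type can hold records of any schema.
    public static class DynamicRecord implements Serializable {
        public String schemaId;              // which user-defined schema this message uses
        public Map<String, Object> fields;   // field name -> value, interpreted at runtime

        public DynamicRecord() {}

        public DynamicRecord(String schemaId, Map<String, Object> fields) {
            this.schemaId = schemaId;
            this.fields = fields;
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // The real job would use a Kafka source whose DeserializationSchema
        // produces DynamicRecord; fromElements() keeps this sketch self-contained.
        Map<String, Object> click = new HashMap<>();
        click.put("userId", 42L);
        click.put("page", "/home");

        Map<String, Object> purchase = new HashMap<>();
        purchase.put("userId", 42L);
        purchase.put("amount", 9.99d);

        DataStream<DynamicRecord> events = env.fromElements(
                new DynamicRecord("click-v1", click),
                new DynamicRecord("purchase-v3", purchase));

        // One topology: every operator works on DynamicRecord and decides
        // what to do from schemaId at runtime.
        events.map(new MapFunction<DynamicRecord, String>() {
            @Override
            public String map(DynamicRecord r) {
                return r.schemaId + " -> " + r.fields;
            }
        }).print();

        env.execute("dynamic-schema-sketch");
    }
}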

---

My question is: what's the best way to implement a system like this, where
each message may have a different schema, none of the schemas are known at
compile time, and every message must flow through the same DAG?

I've tried using an 'array of heterogeneous tuples', which appears to work
fine when experimenting in the IDE, but before I go too far down that route
I wanted to check whether there are any established approaches for doing this.
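
For reference, a heavily simplified stand-in for what I mean by an 'array of
heterogeneous tuples' is below; in the real job the arrays would come out of
the Kafka deserializer rather than fromElements(), and the Tuple2 pairs are
only one possible field representation:

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class HeterogeneousTuplesSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Each message is an array of (fieldName, value) pairs; the value types
        // differ per field and per message, so the element type is just Object[]
        // and Flink falls back to generic (Kryo) serialization.
        DataStream<Object[]> records = env.fromElements(
                new Object[] { Tuple2.of("userId", 42L), Tuple2.of("page", "/home") },
                new Object[] { Tuple2.of("userId", 42L), Tuple2.of("amount", 9.99d) });

        // Downstream operators have to inspect field names and value types at runtime.
        records.map(new MapFunction<Object[], String>() {
            @Override
            public String map(Object[] fields) {
                return fields.length + " fields, first = " + fields[0];
            }
        }).print();

        env.execute("heterogeneous-tuples-sketch");
    }
}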

Thanks!
Lawrence


