Hi,
My case is very similar to what is described in this section of the Spark SQL programming guide:
http://spark.apache.org/docs/latest/sql-programming-guide.html#programmatically-specifying-the-schema
I hope this clarifies it.
Thanks,
Luqman
On Mon, Feb 13, 2017 at 12:04 PM, Tzu-Li (Gordon) Tai wrote:
Hi Luqman,
From your description, it seems that you want to infer the type (case
class, tuple, etc.) of a stream dynamically at runtime.
AFAIK, this isn't supported in Flink. You're required to have statically
defined types for your DataStreams.
Could you also provide some example code of what you're trying to do?
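To make Gordon's point concrete, here is a minimal, self-contained sketch in plain Scala (no Flink dependency; the `TypedStream` container and the `Event` case class are illustrative stand-ins, not Flink API): the element type of a stream must be a concrete type known at compile time.

```scala
// Illustrative case class standing in for a user-defined stream element type.
case class Event(name: String, value: Int)

object TypedStreamSketch {
  // Stand-in for DataStream[T]: the type parameter T is fixed at compile
  // time, mirroring Flink's requirement of statically defined element types.
  final case class TypedStream[T](elements: Seq[T]) {
    def map[U](f: T => U): TypedStream[U] = TypedStream(elements.map(f))
  }

  def main(args: Array[String]): Unit = {
    val events: TypedStream[Event] = TypedStream(Seq(Event("a", 1), Event("b", 2)))
    // Transformations are type-checked against the declared element type.
    val names: TypedStream[String] = events.map(_.name)
    println(names.elements.mkString(","))
  }
}
```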
Hi,
I hope everyone is doing well.
I have a use case where we infer the schema from file headers and other
information. Now, in Flink, we can specify the schema of a stream with case
classes and tuples. With tuples, we cannot give names to fields, so we
would have to generate case classes on the fly.
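To illustrate the tuple-vs-case-class difference raised above, a short plain-Scala sketch (no Flink; the `Reading` case class and its field names are made up for illustration): tuples expose only positional accessors, while a case class gives each field a name, which is what an inferred schema would need to carry.

```scala
// Case class: fields have names, usable by downstream code and schemas.
case class Reading(id: String, temperature: Double)

object FieldNamesSketch {
  def main(args: Array[String]): Unit = {
    val t = ("sensor-1", 21.5)           // tuple: only _1, _2 accessors
    val c = Reading("sensor-1", 21.5)    // case class: named fields

    println(t._1)          // positional access only
    println(c.id)          // named field access
  }
}
```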