Hi Pilgrim,

Currently the Table API indeed cannot use low-level APIs such as timers. Would a
mixture of SQL and DataStream satisfy your requirements? A job might be built
from multiple SQL queries, connected via DataStream operations.
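As a rough sketch of what I mean (table name, fields, and the 60-second timeout are hypothetical, and this assumes the Flink 1.12 Table/DataStream bridge, i.e. StreamTableEnvironment with toAppendStream / createTemporaryView):

```java
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;
import org.apache.flink.util.Collector;

public class SqlPlusDataStreamSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // 1) SQL part: pre-process with a query (the "events" table is assumed
        //    to be registered elsewhere, e.g. via a CREATE TABLE DDL).
        Table heartbeats = tEnv.sqlQuery(
            "SELECT deviceId, ts FROM events WHERE kind = 'heartbeat'");

        // 2) Bridge to DataStream for the low-level timer/timeout logic.
        DataStream<Row> stream = tEnv.toAppendStream(heartbeats, Row.class);

        DataStream<String> timeouts = stream
            .keyBy(r -> (String) r.getField(0))
            .process(new KeyedProcessFunction<String, Row, String>() {
                private transient ValueState<Long> lastTimer;

                @Override
                public void open(Configuration parameters) {
                    lastTimer = getRuntimeContext().getState(
                        new ValueStateDescriptor<>("lastTimer", Long.class));
                }

                @Override
                public void processElement(Row value, Context ctx,
                                           Collector<String> out) throws Exception {
                    // Cancel the previous timeout timer, if any, and arm a new one.
                    Long previous = lastTimer.value();
                    if (previous != null) {
                        ctx.timerService().deleteProcessingTimeTimer(previous);
                    }
                    long fireAt = ctx.timerService().currentProcessingTime() + 60_000L;
                    ctx.timerService().registerProcessingTimeTimer(fireAt);
                    lastTimer.update(fireAt);
                }

                @Override
                public void onTimer(long timestamp, OnTimerContext ctx,
                                    Collector<String> out) {
                    // Fires when no heartbeat arrived within the timeout window.
                    out.collect("timeout for device " + ctx.getCurrentKey());
                }
            });

        // 3) Optionally bridge back into the Table world and continue with SQL.
        tEnv.createTemporaryView("timeouts", timeouts);

        env.execute("sql-plus-datastream-sketch");
    }
}
```

So the SQL parts handle the declarative, per-customer-configurable logic, and a small fixed library of ProcessFunctions covers the pieces (like timeouts) that SQL cannot express yet.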

Best,
 Yun

------------------------------------------------------------------
Sender: Pilgrim Beart <pilgrim.be...@devicepilot.com>
Date: 2021/02/09 02:22:46
Recipient: <user@flink.apache.org>
Subject: Any plans to make Flink configurable with pure data?

To a naive Flink newcomer (me) it's a little surprising that there is no pure 
"data" mechanism for specifying a Flink pipeline, only "code" interfaces. With 
the DataStream interface I can use Java, Scala or Python to set up a pipeline 
and then execute it - but that doesn't really seem to need a programming model; 
it seems like configuration, which could be done with data. OK, one does 
occasionally need to specify some custom code, e.g. a ProcessFunction, but for 
any given use-case a relatively static library of such functions would seem fine.

My use case is that I have lots of customers, and I'm doing a similar job for 
each of them, so I'd prefer to have a library of common code (e.g. 
ProcessFunctions), and then specify each customer's specific requirements in a 
single config file.  To do that in Java, I'd have to do metaprogramming (to 
build various pieces of Java out of that config file).

Flink SQL seems to be the closest solution, but doesn't appear to support 
fundamental Flink concepts such as timers (?). Is there a plan to evolve Flink 
SQL to support timers? Timeouts are my specific need.

Thanks,
-Pilgrim
--
Learn more at https://devicepilot.com @devicepilot 


