Hi Leonard,
Yes, that would be one solution. But why is it necessary to create a
temporary view from an already created table?
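To make my question concrete: the Table API chains directly off the Table
object, so I don't see why SQL needs the extra step. A sketch (creditTable
and loanId are illustrative names; $ comes from
org.apache.flink.table.api.Expressions, Flink 1.11+):

// Table API: no registration needed, operations chain off the object.
Table filtered = creditTable.filter($("loanId").isNotNull());

// SQL: the parser resolves table names via the catalog, so the Table
// must first be registered under a name:
tableEnv.createTemporaryView("CreditDetails", creditTable);
Table viaSql = tableEnv.sqlQuery("SELECT loanId FROM CreditDetails");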
Does anybody know how to set the name for a table created using the
fromDataStream() method? Flink's documentation doesn't mention anything
about this, and when I went through the TaskManager logs I saw some
auto-generated name like 'Unregistered_DataStream_5'.
Here's my code:
StreamTableEnvironment...
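As far as I can tell, fromDataStream() simply doesn't accept a name, which
is why the planner generates one like 'Unregistered_DataStream_5'. The
workaround is to register the result yourself; a sketch (stream and view
names are illustrative):

StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

// fromDataStream returns an anonymous Table object...
Table credits = tableEnv.fromDataStream(creditStream);

// ...so give it an explicit name before referring to it in SQL:
tableEnv.createTemporaryView("CreditDetails", credits);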
Hi,
In that case, what's the difference between a reluctant quantifier like
(B*?) in SQL and relaxed contiguity in CEP?
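My understanding (worth verifying) is that a reluctant quantifier only
controls how greedily B consumes rows within one match, while contiguity
controls which rows may be skipped between pattern parts. A sketch of B*?
in Flink SQL, assuming a Ticker table with symbol/rowtime/price columns:

tableEnv.executeSql(
    "SELECT * FROM Ticker MATCH_RECOGNIZE ("
    + "  PARTITION BY symbol "
    + "  ORDER BY rowtime "
    + "  MEASURES LAST(B.price) AS lastB "
    + "  AFTER MATCH SKIP PAST LAST ROW "
    // B*? is reluctant: it matches as few B rows as possible before C
    + "  PATTERN (A B*? C) "
    + "  DEFINE A AS A.price > 10, "
    + "         B AS B.price < 15, "
    + "         C AS C.price > 12 "
    + ")").print();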
Thanks Dawid. As I see in the code, the elements buffered between watermarks
are stored in the MapState<Long, List<IN>> elementQueueState variable in
class CepOperator. My question is: if we use RocksDB or some other state
backend, would this state be stored there and checkpointed? Or is it
always in the heap?
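To make the question concrete: elementQueueState is ordinary keyed state,
so it should live in whichever backend is configured. A sketch of switching
to RocksDB (the checkpoint path is a placeholder, and the constructor is
the pre-1.13 API, which is an assumption about the version in use):

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RocksDbSetup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();
        // With this backend, CEP's keyed state (including elementQueueState)
        // is kept in RocksDB rather than on the JVM heap, and is snapshotted
        // into checkpoints like any other keyed state.
        env.setStateBackend(new RocksDBStateBackend("file:///tmp/checkpoints", true));
    }
}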
/"TableResult result1 = stmtSet.execute();
result1.print();"/
I tried this, and the result is following :
Job has been submitted with JobID 4803aa5edc31b3ddc884f922008c5c03
+++
| default_catalog.default_databas
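One caveat (my reading of the API, so treat it as an assumption): the
printed table is only the submission summary, and the streaming insert
keeps running detached. On Flink 1.12+ you can block on completion:

TableResult result1 = stmtSet.execute();
result1.print();  // prints the JobID / insert-target summary shown above
result1.await();  // blocks until the streaming job finishes (Flink 1.12+)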
We are evaluating a use case where there will be hundreds of events streaming
in per second, and we want to run a fixed set of pattern-matching rules on
them. I use relaxed contiguity rules as described in the documentation.
For example:
a pattern sequence "a b+ c" on the stream of "a", "b1"...
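A rough sketch of such a rule in the Pattern API, assuming a simple Event
POJO with a getType() accessor (class and field names are illustrative):

Pattern<Event, ?> pattern = Pattern.<Event>begin("a")
    .where(new SimpleCondition<Event>() {
        @Override
        public boolean filter(Event e) { return "a".equals(e.getType()); }
    })
    // followedBy = relaxed contiguity: unrelated events may occur in between
    .followedBy("b")
    .where(new SimpleCondition<Event>() {
        @Override
        public boolean filter(Event e) { return "b".equals(e.getType()); }
    })
    .oneOrMore()
    .followedBy("c")
    .where(new SimpleCondition<Event>() {
        @Override
        public boolean filter(Event e) { return "c".equals(e.getType()); }
    });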
There are three different types of contiguity defined in the CEP
documentation [1] for both looping and non-looping patterns: strict,
relaxed, and non-deterministic relaxed. There's no equivalent in the SQL
documentation [2]. Can someone shed some light on what's achievable in SQL
and what isn't?
Related question: It se...
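One partial mapping I've seen (worth double-checking): plain concatenation
in MATCH_RECOGNIZE behaves like strict contiguity, and relaxed contiguity
can be approximated with an extra pattern variable that has no DEFINE
entry, since undefined variables match every row. A sketch (the events
table and its columns are assumptions):

// PATTERN (A B)     -> strict contiguity, like next() in CEP
// PATTERN (A X*? B) -> roughly relaxed contiguity, like followedBy();
//                      X has no DEFINE entry, so it matches (skips) any row
String relaxed =
    "SELECT * FROM events MATCH_RECOGNIZE ("
    + "  ORDER BY rowtime "
    + "  MEASURES A.id AS startId, B.id AS endId "
    + "  PATTERN (A X*? B) "
    + "  DEFINE A AS A.kind = 'a', B AS B.kind = 'b' "
    + ")";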
I have tried that too. For example:
tableEnv.createTemporaryView("CreditDetails", creditStream);
tableEnv.executeSql(
    "CREATE TABLE output(loanId VARCHAR) WITH ('connector.type' = 'filesystem',"
    + "'connector.path' = 'file:///path/Downloads/1',"
    + "'format.type' = 'csv')");
Table creditD...
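For completeness, the step tying the two together would presumably be a
single INSERT (the SELECT list is a guess from the DDL above):

// Route the registered view into the filesystem sink defined above.
tableEnv.executeSql("INSERT INTO output SELECT loanId FROM CreditDetails");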
If I want to run two different SELECT queries on a Flink table created from
a DataStream, the Blink planner runs them as two different jobs. Is there
a way to combine them and run them as a single job? Example code:
StreamExecutionEnvironment env =
    StreamExecutionEnvironment.getExecutionEnvironment();
...
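The usual answer here is a StatementSet, which lets the Blink planner
optimize several INSERTs into a single job graph. A sketch, assuming an
already registered view Events and two sink tables sink1/sink2 (all names
illustrative):

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

// ... register the "Events" view and the two sink tables here ...

StatementSet stmtSet = tableEnv.createStatementSet();
stmtSet.addInsertSql("INSERT INTO sink1 SELECT id FROM Events WHERE kind = 'a'");
stmtSet.addInsertSql("INSERT INTO sink2 SELECT id FROM Events WHERE kind = 'b'");

// Both pipelines are optimized together and submitted as one job.
stmtSet.execute();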