Hi Chen,
With Flink 1.2, conditions in select clauses are supported in the Table API
with the following syntax:
table.select('name, 'age, ('yearOfBirth < 1997) ? ("adult", "teenager") )
See also the documentation of built-in functions [1].
Best, Fabian
[1]
https://ci.apache.org/projects/flink/
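As a small, self-contained sketch of how this could look in a batch program (a
hypothetical example: the input data, field names, and the Flink 1.2-era package
layout are assumptions on my part, not taken from this thread):

import org.apache.flink.api.scala._
import org.apache.flink.table.api.TableEnvironment
import org.apache.flink.table.api.scala._

object ConditionalSelectExample {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val tEnv = TableEnvironment.getTableEnvironment(env)

    // Hypothetical input: (name, age, yearOfBirth) records.
    val people = env.fromElements(
      ("Alice", 30, 1987),
      ("Bob", 15, 2002))

    val table = people.toTable(tEnv, 'name, 'age, 'yearOfBirth)

    // Conditional expression: condition ? (valueIfTrue, valueIfFalse)
    val result = table.select(
      'name, 'age, ('yearOfBirth < 1997) ? ("adult", "teenager"))

    result.toDataSet[(String, Int, String)].print()
  }
}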
Hi there,
Is there any jira related to conditional branching support in table api?
https://calcite.apache.org/docs/reference.html#conditional-functions-and-operators
Thanks
Chen
Hi,
I understand the logic, and indeed, considering the "batch and stream query
equality", it makes sense to go with the version you have proposed (with the
materialized view for inputstream2).
You also mentioned there might be some queries that will never require
updating previously emitted results, such as qu
The Dataworks & Hadoop summit will be in San Jose June 13-15, 2017. The call
for abstracts closes February 10. You can submit an abstract at
http://tinyurl.com/dwsj17CFA
There are tracks for Hadoop, data processing and warehousing, governance and
security, IoT and streaming, cloud and operati
Greg Hogan created FLINK-5694:
-
Summary: Collect DataSetAnalytic
Key: FLINK-5694
URL: https://issues.apache.org/jira/browse/FLINK-5694
Project: Flink
Issue Type: Improvement
Components:
Greg Hogan created FLINK-5693:
-
Summary: ChecksumHashCode DataSetAnalytic
Key: FLINK-5693
URL: https://issues.apache.org/jira/browse/FLINK-5693
Project: Flink
Issue Type: Improvement
Co
+1 Looks great
2017-01-31 17:53 GMT+01:00 Till Rohrmann:
> +1
>
> - Built Flink with Hadoop 2.7.1
> - Tested SBT quickstarts
> - Ran Flink on Mesos (standalone and HA mode) executing the WindowJoin
> example
> - Ran example job using the RemoteEnvironment
>
> On Tue, Jan 31, 2017 at 3:06 PM, Rob
+1
- Built Flink with Hadoop 2.7.1
- Tested SBT quickstarts
- Ran Flink on Mesos (standalone and HA mode) executing the WindowJoin
example
- Ran example job using the RemoteEnvironment
On Tue, Jan 31, 2017 at 3:06 PM, Robert Metzger wrote:
> +1
>
> - Downloaded the hadoop26, scala 2.10 build an
Hi,
If the goal is that the materialized result of a streaming query should be
equivalent to the result of a batch query on the materialized input, we
need to update previously emitted data.
Only appending to the already emitted results will not work in most cases.
In case of the join quer
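To illustrate why append-only emission breaks that equivalence, here is a rough,
Flink-independent Scala sketch (the inputs and names are made up for illustration):

object AppendOnlyJoinSketch {
  case class Order(id: Int, customer: String)

  def main(args: Array[String]): Unit = {
    val orders = Seq(Order(1, "alice"), Order(2, "bob"))
    var cities = Map("alice" -> "Berlin", "bob" -> "Paris")

    // The batch query: join orders with the current customer -> city table.
    def batchJoin(): Seq[(Int, String)] =
      orders.flatMap(o => cities.get(o.customer).map(city => (o.id, city)))

    // Result emitted by the streaming query so far (append-only).
    val emitted = batchJoin()
    println(s"emitted so far: $emitted")

    // The second input changes: alice moves to Hamburg.
    cities += ("alice" -> "Hamburg")

    // A batch query over the materialized input now sees the new city,
    // but the append-only result still contains (1, Berlin); to stay
    // equivalent, the earlier row would have to be updated or retracted.
    println(s"batch result:   ${batchJoin()}")
    println(s"emitted so far: $emitted")
  }
}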
+1
- Downloaded the hadoop26, scala 2.10 build and ran it overnight on a CDH
5.9.0 cluster using YARN (with HA), HDFS (with HA) and all services with
Kerberos
- Built a streaming job using the artifacts from the staging repository
- Ran a job which is specifically designed to misbehave. I didn
Hi,
I was thinking about this reply...
I am not sure I understand exactly why you would need to keep the whole
state for Option 2. From my point of view this is not needed (and I see
this as the easy case). The main reason is that you have the SINGLE_VALUE
operator, which would imply th
Hello, one last note on this thread: we've processed and published the
Flink user survey results, and you can find a file with graphs summarizing
multiple-choice responses as well as anonymous feedback from open-ended
questions in a GitHub repository [1]. We also published a summary of
responses on
Hi,
Speaking as someone who prefers a more stream-like programming style, I must say
I like the syntax that you proposed, so I would be fine with keeping it this way.
My only question/concern is whether someone who does SQL as a day-to-day job would
like this way of writing queries, in which we port at least time concepts f