The translation is done in multiple stages:
1. Parsing (syntax check)
2. Validation (semantic check)
3. Query optimization (rule- and cost-based)
4. Generation of the physical plan, incl. code generation (DataStream program)
The final translation happens in the DataStream nodes, e.g., DataStreamCalc.
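To make this concrete, here is a minimal sketch of how a query enters that pipeline (my own illustration, assuming the 1.2-era Table API; class locations and method names have moved between releases, so treat the names as approximate):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class SqlTranslation {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);

        DataStream<Tuple2<String, String>> events = env.fromElements(
                Tuple2.of("alice", "denied"), Tuple2.of("bob", "allowed"));
        tEnv.registerDataStream("Events", events, "userName, action");

        // sql() runs the stages above: Calcite parsing, validation, optimization,
        // and finally code generation into DataStream nodes (e.g., DataStreamCalc)
        Table denied = tEnv.sql("SELECT userName, action FROM Events WHERE action = 'denied'");

        tEnv.toDataStream(denied, Row.class).print();
        env.execute();
    }
}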
Hi,
No problem, I'm going to create a JIRA.
Regards
Thomas
2016-10-17 21:34 GMT+02:00 Theodore Vasiloudis <
theodoros.vasilou...@gmail.com>:
> That is my bad, I must have been testing against a private branch when
> writing the guide, the SVM as it stands only has a predict operation for
> Vector not LabeledVector.
Thanks Chesnay.
I had a look at what the JMX representation looks like on a TaskManager
which has one of the example jobs deployed
(https://ci.apache.org/projects/flink/flink-docs-release-1.1/quickstart/run_example_quickstart.html)
and this looks correct.
I assume at this point that th
That is my bad, I must have been testing against a private branch when
writing the guide, the SVM as it stands only has a predict operation for
Vector not LabeledVector.
IMHO I would like to have a predict operator for LabeledVector for all
predictors (that would just call the existing Vector pred
Hi colleagues,
Is there a link that describes Flink Matrices and provides an example of how
to use them, please? I really appreciate it... Cheers
From: Till Rohrmann
To: user@flink.apache.org
Cc: d...@flink.apache.org
Sent: Monday, October 17, 2016 12:52 AM
Subject: Re: Flink Metrics
Hi Govind,
Hi,
I think there are some things that could be helpful for testing your algorithm.
Off the top of my head, the first thing is that you could try to test in a more
„unit-testing“ style, i.e. just write small drivers that inject records into your
UDFs and check if the output is as expected.
Other t
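For example, a driver for a single UDF can be as small as this (a sketch; the DeniedFilter class and the JUnit 4 setup are my own illustration, not from the thread):

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.apache.flink.api.common.functions.FilterFunction;
import org.junit.Test;

public class DeniedFilterTest {

    // the UDF under test: an ordinary class, so it can be called directly
    static class DeniedFilter implements FilterFunction<String> {
        @Override
        public boolean filter(String action) {
            return "denied".equals(action);
        }
    }

    @Test
    public void keepsDeniedAndDropsTheRest() throws Exception {
        DeniedFilter f = new DeniedFilter();
        assertTrue(f.filter("denied"));   // record is kept
        assertFalse(f.filter("allowed")); // record is dropped
    }
}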
Thank you for the response.
I don't understand where something like this:
SELECT * WHERE action='denied'
gets translated into something similar in the Flink Stream API:
.filter(new FilterFunction<Event>() {
    public boolean filter(Event event) {
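(For context, a complete version of such a filter might look like the sketch below; the Event type and its getAction() accessor are assumptions based on the snippet above.)

DataStream<Event> denied = events.filter(new FilterFunction<Event>() {
    @Override
    public boolean filter(Event event) {
        return "denied".equals(event.getAction()); // the SQL WHERE predicate
    }
});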
Hi,
Executing the following code (see QuickStart):
val env = ExecutionEnvironment.getExecutionEnvironment
val survival = env.readCsvFile[(String, String, String,
  String)]("src/main/resources/haberman.data", ",")
val survivalLV = survival
  .map { tuple =>
    val list = tuple.productIterator.to
Hi Fabian,
Thank you very much for the great answer and example, I appreciate it!
It is all clear now.
Best,
Yassine
2016-10-17 16:29 GMT+02:00 Fabian Hueske :
> I have to extend my answer:
>
> The behavior of allowedLateness that I described applies only if the window
> trigger calls FIRE when th
Hello,
I am pretty new to Apache Flink.
I am trying to figure out how Flink parses an Apache Calcite SQL query
into its own Streaming API, in order to maybe extend it, because, as far as I
know, many operations are still being developed and not currently supported
(like TUMBLE windows). I need
Hi Pedro,
The sql() method calls the Calcite parser in line 129.
Best, Fabian
2016-10-17 16:43 GMT+02:00 PedroMrChaves :
> Hello,
>
> I am pretty new to Apache Flink.
>
> I am trying to figure out how Flink parses an Apache Calcite SQL query
> into its own Streaming API in order to maybe ext
I have to extend my answer:
The behavior of allowedLateness that I described applies only if the window
trigger calls FIRE when the window is evaluated (this is the default
behavior of most triggers).
In case the trigger calls FIRE_AND_PURGE, the state of the window is purged
when the function is ev
Hi Yassine,
the difference is the following:
1) The BoundedOutOfOrdernessTimestampExtractor is a built-in timestamp
extractor and watermark assigner.
A timestamp extractor tells Flink when an event happened, i.e., it extracts
a timestamp from the event. A watermark assigner tells Flink what the
c
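A minimal end-to-end sketch showing both mechanisms together (my own example; the types and times are made up):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.time.Time;

public class LatenessDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        DataStream<Tuple2<String, Long>> events = env.fromElements(
                Tuple2.of("a", 1000L), Tuple2.of("a", 9000L), Tuple2.of("a", 4000L));

        events
            // 1) timestamp extractor + watermark assigner: watermarks trail the
            //    largest seen timestamp by 5s, so up-to-5s-late events are on time
            .assignTimestampsAndWatermarks(
                new BoundedOutOfOrdernessTimestampExtractor<Tuple2<String, Long>>(Time.seconds(5)) {
                    @Override
                    public long extractTimestamp(Tuple2<String, Long> e) {
                        return e.f1;
                    }
                })
            .keyBy(0)
            .timeWindow(Time.seconds(10))
            // 2) allowedLateness: window state is kept 5s beyond the watermark
            //    passing the window end; allowed late elements fire the window again
            .allowedLateness(Time.seconds(5))
            .sum(1)
            .print();

        env.execute("lateness demo");
    }
}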
Ok, thanks for the update!
Let me know if you run into any more problems.
On Mon, 17 Oct 2016 at 14:40 wrote:
> Hi Aljoscha,
>
> Thanks for the response.
>
> To answer your question, the base path did not exist. But, I think I
> found the issue. I believe I had some rogue task manager
What are the standard approaches for testing a streaming algorithm? I have been
able to come up with the approach below, where I:
1) create a data source that emits events in bunches with set times, so that I
know the events will be in the same window (see the sketch below),
2) end the stream with a mapWithState where the state ch
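For step 1), a deterministic source could look roughly like this (a sketch with made-up times; each bunch is meant to land in its own 10-second event-time window):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.functions.source.SourceFunction;
import org.apache.flink.streaming.api.watermark.Watermark;

// Emits two bunches of events with fixed timestamps, 10s apart, so each
// bunch falls into a known 10-second tumbling window.
public class FixedTimeSource implements SourceFunction<Tuple2<String, Long>> {
    private static final long serialVersionUID = 1L;

    @Override
    public void run(SourceContext<Tuple2<String, Long>> ctx) {
        for (long t : new long[] {1000L, 2000L, 11000L, 12000L}) {
            ctx.collectWithTimestamp(Tuple2.of("key", t), t);
        }
        ctx.emitWatermark(new Watermark(Long.MAX_VALUE)); // close all windows
    }

    @Override
    public void cancel() {}
}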
Hi Aljoscha,
Thanks for the response.
To answer your question, the base path did not exist. But, I think I found the
issue. I believe I had some rogue task managers running. As a troubleshooting
step, I attempted to restart my cluster. However, after shutting down the
cluster I noticed tha
Happy to hear it!
On Mon, Oct 17, 2016 at 9:31 AM, Yassine MARZOUGUI <
y.marzou...@mindlytix.com> wrote:
> That solved my problem, Thank you!
>
> Best,
> Yassine
>
> 2016-10-16 19:18 GMT+02:00 Stephan Ewen :
>
>> Hi!
>>
>> Looks to me that this is the following problem: The Decompression Streams
No problem, it is great that you have found the solution.
On Mon, Oct 17, 2016 at 12:16 PM, 侯林蔚 wrote:
> More information.
> my code is like this:
> [image: inline image 1]
>
> and I found I can name a sink by changing the code like this:
>
> [image: inline image 2]
>
> sorry for my reckless behavior.
>
> 2016-10-
More information.
My code is like this:
[image: inline image 1]
and I found I can name a sink by changing the code like this:
[image: inline image 2]
Sorry for my reckless behavior.
2016-10-17 17:48 GMT+08:00 侯林蔚 :
> hi
> I made a Flink topology and ran it on my dev-cluster.
> but I found something on the p
Hi,
I'm a bit confused about how Flink deals with late elements after the
introduction of allowedLateness to windows. What is the difference between
using a BoundedOutOfOrdernessTimestampExtractor(Time.seconds(X)) and
allowedLateness(Time.seconds(X))? What if one is used and the other is not?
and
hi
I made a Flink topology and ran it on my dev-cluster.
but I found something in the picture below:
[image: inline image 1]
all my sinks are unnamed, is there any method to name a sink?
thank you very much.
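For reference, the change presumably looks something like this (my sketch, since the screenshots are not preserved; name() on the returned DataStreamSink is what labels the sink in the dashboard):

// assuming: StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<String> stream = env.fromElements("a", "b", "c");

// addSink() (and print()) return a DataStreamSink, which has a name() method:
stream.addSink(new PrintSinkFunction<String>()).name("my-print-sink");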
Hello,
we could also offer a small utility method that creates three Flink meters,
each reporting one rate of a DW meter (see the sketch below).
Timers weren't added yet since, as Till said, no one has requested them yet
and we haven't found a proper internal use case for them.
Regards,
Chesnay
On 17.10.2016 09:52, Till
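Such a utility might look roughly like this (purely a sketch of the idea; the class and metric names are made up, and it assumes the Meter interface and MetricGroup#meter registration discussed in this thread):

import java.util.function.DoubleSupplier;
import org.apache.flink.metrics.Meter;
import org.apache.flink.metrics.MetricGroup;

public final class DropwizardRateMeters {

    // Registers three Flink meters, each exposing one rate of the same
    // Dropwizard meter. Events should be marked on the Dropwizard meter itself.
    public static void registerAll(MetricGroup group, com.codahale.metrics.Meter dw) {
        group.meter("rate1m", view(dw, dw::getOneMinuteRate));
        group.meter("rate5m", view(dw, dw::getFiveMinuteRate));
        group.meter("rate15m", view(dw, dw::getFifteenMinuteRate));
    }

    private static Meter view(final com.codahale.metrics.Meter dw, final DoubleSupplier rate) {
        return new Meter() {
            @Override public void markEvent()       { dw.mark(); }
            @Override public void markEvent(long n) { dw.mark(n); }
            @Override public double getRate()       { return rate.getAsDouble(); }
            @Override public long getCount()        { return dw.getCount(); }
        };
    }

    private DropwizardRateMeters() {}
}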
Hi,
looks like there is no Flink jar in the classpath with which you run your
program. You need to make sure that the relevant jars are there, or else your
program cannot find Flink’s classes, leading to a ClassNotFoundException.
Best,
Stefan
> On 16.10.2016, at 19:26, Kaepke, Marc wrote:
>
Hi Govind,
I think the DropwizardMeterWrapper implementation is just a reference
implementation where it was decided to report the one-minute rate. You can
define your own meter class which allows you to configure the rate interval
accordingly.
Concerning Timers, I think nobody requested this metric so f
That solved my problem, Thank you!
Best,
Yassine
2016-10-16 19:18 GMT+02:00 Stephan Ewen :
> Hi!
>
> Looks to me that this is the following problem: The Decompression Streams
> did not properly forward the "close()" calls.
>
> It is in the latest 1.2-SNAPSHOT, but did not make it into version 1