Hi,
I am using Elasticsearch version 5.4.3 in my Flink project (Flink version 1.3.1).
Details:
1. Using the Maven build tool.
2. Running from the IntelliJ IDE.
3. Elasticsearch is running on the local machine.
I have added the following Maven dependency:
org.apache.flink
flink-connector-elasticsearc
> k/flink-docs-release-1.3/dev/windows.html#incremental-window-aggregation-with-reducefunction
>
> 2017-08-04 22:43 GMT+02:00 Raj Kumar <[hidden email]>:
>
>> Thanks Fabian.
Thanks Fabian. I do have one more question.
When we connect the two streams and apply a CoProcessFunction, there are two
separate methods, one for each stream. Which stream's state do we need to
store? And will the CoProcessFunction trigger automatically once the other
stream's data arrives, or should we set a timer?
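For reference, in a CoProcessFunction neither callback fires because of the other: each stream's processElement runs only when its own element arrives, so you store whatever the other callback will need in keyed state, and register a timer only if you want a timeout. A plain-Java stand-in for that logic (Flink's runtime, ValueState, and timers are left out; the class name, key, and 3-sigma threshold are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java sketch of the CoProcessFunction pattern: processElement1
// stores the latest count per key; processElement2 reads that stored
// value when the stats record for the same key arrives. In real Flink
// this map would be a keyed ValueState, and a timer could bound how
// long a stored value is considered valid.
class TwoStreamJoiner {
    private final Map<String, Long> latestCount = new HashMap<>();

    // callback for stream 1 (e.g. request counts per key)
    void processElement1(String key, long count) {
        latestCount.put(key, count);
    }

    // callback for stream 2 (e.g. mean/stddev record for the same key);
    // returns an alert message if the stored count deviates too far,
    // or null if there is nothing to report yet
    String processElement2(String key, double mean, double stddev) {
        Long count = latestCount.get(key);
        if (count == null) return null; // other stream not seen yet
        if (count > mean + 3 * stddev) {
            return "ALERT " + key + ": count=" + count;
        }
        return null;
    }
}
```

The point of the sketch is only the control flow: nothing "triggers" the join except the arrival of the second stream's element.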
Thanks Fabian.
The incoming events have timestamps. Once I aggregate the first stream to get
counts, and calculate the mean/standard deviation in the second, should the
new timestamps be the window start time? How do I tackle this issue?
--
View this message in context:
http://apache-flink-
Thanks Fabian. Your suggestion helped, but I am stuck at the 3rd step.
1. I didn't completely understand step 3. What should the process function
look like? Why does it need to be stateful? Can you please provide more
details on this?
2. In the stateful function, we need to have a value state
Thanks Fabian. That helps.
I have one more question. In the second step, since I am using the window
function apply(), will the calculated average be a running average, or will it
be computed at the end of the 6-hour window?
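As a general note on running vs. final averages: a WindowFunction's apply() is invoked once, when the window fires, so the average comes out at the end of the 6-hour window, not continuously. If you pre-aggregate incrementally (as the windows documentation linked above suggests), what you really maintain per element is a (sum, count) pair; the division happens once at window end. A plain-Java sketch of that accumulator (class and method names are my own):

```java
// Incremental (sum, count) accumulator: the kind of pre-aggregation a
// ReduceFunction can maintain per window. The average itself is derived
// only once, when the window fires.
class AvgAccumulator {
    private double sum = 0.0;
    private long count = 0;

    void add(double value) { // called per element as it arrives
        sum += value;
        count++;
    }

    double result() {        // called once at window end
        return count == 0 ? 0.0 : sum / count;
    }
}
```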
Thanks Fabian.
Can you provide more details about the implementation of step 2 and step 3?
How do I calculate the average and standard deviation? How does the
CoProcessFunction work? Can you provide details about these two?
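One common way to get both the mean and the standard deviation in a single pass over a stream (for example inside a fold or window function) is Welford's online algorithm; here is a self-contained sketch (the class and method names are illustrative, not from Flink):

```java
// Welford's online algorithm: numerically stable single-pass computation
// of mean and variance.
class RunningStats {
    private long n = 0;
    private double mean = 0.0;
    private double m2 = 0.0; // running sum of squared deviations

    void add(double x) {
        n++;
        double delta = x - mean;
        mean += delta / n;
        m2 += delta * (x - mean);
    }

    double mean() {
        return mean;
    }

    double stddev() { // population standard deviation
        return n == 0 ? 0.0 : Math.sqrt(m2 / n);
    }
}
```

Unlike the naive two-pass formula, this never needs to buffer the elements, which makes it a natural fit for incremental window aggregation.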
Hi,
I am using a sliding window to monitor server performance. I need to keep
track of the number of HTTP requests generated and alert the user when the
request count gets too high (a sliding window of 6 hours which slides every
15 minutes). The aggregate count of HTTP requests is evaluated at each
15-minute slide.
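Worth noting: a 6-hour window sliding every 15 minutes is equivalent to summing 24 fixed 15-minute panes, which is roughly how such a window can be maintained cheaply when the slide divides the size. A plain-Java ring-buffer sketch of that bucketed count (class and method names are invented; a real job would let Flink manage the panes):

```java
// Sliding count as a sum over fixed panes: 24 buckets of 15 minutes = 6 h.
class SlidingRequestCount {
    private final long[] buckets = new long[24]; // one pane per 15 min
    private int current = 0;

    void record() { // one HTTP request observed in the current pane
        buckets[current]++;
    }

    // called every 15 minutes: emit the 6-hour total, then retire the
    // oldest pane so the window slides forward
    long slide() {
        long total = 0;
        for (long b : buckets) total += b;
        current = (current + 1) % buckets.length;
        buckets[current] = 0;
        return total;
    }
}
```

An alerting step would then just compare the emitted total against a threshold.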
Thanks Fabian. That helped.
But I want to access the window start time. AFAIK, reduce cannot provide this
detail, as the TimeWindow object is not passed to the reduce method. How can
I achieve this?
Hi,
We have a requirement where we need to aggregate the data every 10 minutes and
write the aggregated results to Elasticsearch ONCE. Right now, we are
iterating over the iterable to count the different status codes. Is there a
better way to count the different status codes?
public
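One concise alternative to maintaining separate counters by hand is folding the window's Iterable into a map with Map.merge (a Stream groupingBy/counting collector would work equally well); a self-contained sketch, with the class name and sample status values invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Count occurrences of each HTTP status code in a single pass over the
// window's elements, using Map.merge to insert-or-increment.
class StatusCodeCounter {
    static Map<Integer, Long> count(Iterable<Integer> statusCodes) {
        Map<Integer, Long> counts = new HashMap<>();
        for (int code : statusCodes) {
            counts.merge(code, 1L, Long::sum); // start at 1 or add 1
        }
        return counts;
    }
}
```

Inside a window function, the returned map can be emitted as one record per firing, which matches the "write ONCE per 10 minutes" requirement.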
Hi,
I don't see much discussion of anomaly detection using Flink. We are working
on a project where we need to monitor server logs in real time. If there is
any sudden (unusual) spike in the number of transactions or in server errors,
we need to create an alert.
1. How can we best achieve this?
2.