Hi,
I have a simple job that consumes events from a Kafka topic and processes them
with filter/flat-map only (i.e. no aggregation, no windows, no private state).
The most important constraint in my setup is to continue processing no
matter what (i.e. stopping for a few seconds to cancel the job and restart it
Is there any chance any of these will make it into 1.1?
https://issues.apache.org/jira/browse/FLINK-3514 (Add support for
slowly changing streaming broadcast variables)
https://issues.apache.org/jira/browse/FLINK-2320 (Enable DataSet
DataStream Joins)
Thanks.
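A filter/flat-map-only Kafka job of the shape described above can be sketched as follows. The topic name, broker address, group id, and the word-splitting logic are illustrative assumptions; the 0.9 consumer connector matches the Flink 1.x line discussed in this thread:

```scala
import java.util.Properties
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09
import org.apache.flink.streaming.util.serialization.SimpleStringSchema

object KafkaFilterFlatMapJob {
  // Pure transformation kept separate so it is easy to test: split a line into words.
  def explode(line: String): Seq[String] = line.trim.split("\\s+").toSeq

  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val props = new Properties()
    props.setProperty("bootstrap.servers", "localhost:9092") // assumption
    props.setProperty("group.id", "demo")                    // assumption

    env
      .addSource(new FlinkKafkaConsumer09[String]("events", new SimpleStringSchema(), props))
      .filter(_.nonEmpty)   // filter: drop empty events
      .flatMap(explode _)   // flat-map: one record per word
      .print()

    env.execute("kafka-filter-flatmap")
  }
}
```

No state beyond Kafka offsets is kept, which is what makes a job like this cheap to cancel and restart.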
On Wed, May 18, 2016 at 11:02 AM, s
I think union is what you are looking for.
Note that all data sets must be of the same type.
2016-05-18 16:15 GMT+02:00 Ritesh Kumar Singh :
> Hi,
>
> How can I perform a reduce operation on a group of datasets using Flink?
> Let's say my map function gives out n datasets: d1, d2, ... dN
> Now I
Hi
Is there already a version out for Gradle?
compile 'org.apache.flink:flink-streaming-scala_2.11:1.1-SNAPSHOT' doesn't work
I'm using Scala in Eclipse, and I would like to try the snapshot version.
Any suggestions?
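Snapshot artifacts are not published to Maven Central but to the Apache snapshot repository, so a Gradle build has to add that repository explicitly. A minimal sketch (build-script syntax of that era):

```groovy
repositories {
    mavenCentral()
    maven {
        // Apache snapshot repository, where Flink -SNAPSHOT builds are deployed
        url 'https://repository.apache.org/content/repositories/snapshots/'
    }
}

dependencies {
    compile 'org.apache.flink:flink-streaming-scala_2.11:1.1-SNAPSHOT'
}
```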
Cheers
Simon
> On 18 May 2016, at 14:51, Ovidiu-Cristian MARCU
> wrote:
>
Hi,
How can I perform a reduce operation on a group of datasets using Flink?
Let's say my map function gives out n datasets: d1, d2, ... dN
Now I wish to perform my reduce operation on all the N datasets at once and
not on an individual level. The only way I figured out till now is using
the union
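Following the union suggestion, a union-then-reduce version of this can be sketched as below. The element values are illustrative; as noted, all data sets must share the same element type:

```scala
import org.apache.flink.api.scala._

object UnionReduce {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // n data sets of the same element type (here: three small ones)
    val d1 = env.fromElements(1, 2)
    val d2 = env.fromElements(3, 4)
    val d3 = env.fromElements(5)

    // union them into one logical DataSet, then reduce once over everything
    val total = d1.union(d2).union(d3).reduce(_ + _)

    println(total.collect().head) // 15
  }
}
```

The reduce sees all elements from all inputs at once, rather than reducing each data set individually.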
Hi
We are also very interested in the upcoming SQL support (SQL on streaming) in the
next release (even if it is partial work that works :) )
Thank you!
Best,
Ovidiu
> On 18 May 2016, at 14:42, Stephan Ewen wrote:
>
> Hi!
>
> That question is coming up more and more.
> I think we should start
Hi!
That question is coming up more and more.
I think we should start the public discussion about the 1.1 release
planning, scope, and release manager in the next days.
Stephan
On Wed, May 18, 2016 at 1:53 PM, simon peyer wrote:
> Hi Marton
>
> Thanks for the answer.
> I'm looking for the Ses
Hi Marton
Thanks for the answer.
I'm looking for the Session Windows feature described in
http://data-artisans.com/session-windowing-in-flink/.
I think it's already available in the snapshot version, but I would like to have
it in a stable version.
That is why I'm interested in the release date.
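The session windows described in that post can be sketched roughly as follows. The keys, the 10-minute gap, and the sum aggregation are illustrative; a real event-time job also needs timestamp/watermark assignment, which is omitted here:

```scala
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.assigners.EventTimeSessionWindows
import org.apache.flink.streaming.api.windowing.time.Time

object SessionWindowSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // (userId, count) events; a real job would attach event timestamps
    val events = env.fromElements(("alice", 1), ("bob", 1), ("alice", 1))

    events
      .keyBy(0)
      // a session closes after 10 minutes of inactivity for that key
      .window(EventTimeSessionWindows.withGap(Time.minutes(10)))
      .sum(1)
      .print()

    env.execute("session-windows")
  }
}
```

Per key, all events whose gaps stay under the threshold are merged into one session window before the aggregate fires.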
Hey Simon,
The general policy is that the community aims to release a major version
every 3 months. That would mean the next release coming out in early to mid
June. I am not aware of the 1.1.0 schedule yet, but it is about time to
start the discussion on that.
Are you looking for a specific feat
I prepared a small example that outlines how something like this could be
implemented:
https://gist.github.com/aljoscha/36afedce40abf8ae92b92d4355809ff1
It doesn't include all your requirements, such as count per wall, etc. But
this should get you started on the right path.
I hope this helps!
On
Hi guys
When are you expecting to release a stable version of Flink 1.1?
--Cheers
Simon
Hi Andrew,
the reason why the program doesn't fail (and cannot fail, with the current
architecture) is that the partitioned state is dynamic/lazy. For example,
the count trigger might have a partitioned state called "count" that it
uses to keep track of the count. The time trigger requires no state
Hi,
there is no guarantee on the order in which the elements are processed. So
it can happen that most elements from input one get processed before
elements from the feedback get processed. In case of an infinite first
input this will not happen, of course.
For understanding what's going on it
Hi,
what you can also do is use a RichMapFunction. Rich functions have
open()/close() methods that get called before the first element and after the
last element, respectively.
See here
https://ci.apache.org/projects/flink/flink-docs-master/apis/common/index.html#specifying-transformation-functions
for
m
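A sketch of such a RichMapFunction is below; the prefix logic is a placeholder for real setup work like opening a connection in open() and releasing it in close():

```scala
import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.configuration.Configuration

class PrefixMapper extends RichMapFunction[String, String] {
  @transient private var prefix: String = _

  override def open(parameters: Configuration): Unit = {
    // Runs once before the first element -- e.g. open connections, load resources.
    prefix = ">> "
  }

  override def map(value: String): String = prefix + value

  override def close(): Unit = {
    // Runs once after the last element -- release whatever open() acquired.
  }
}
```

It plugs in wherever a plain map function would, e.g. `stream.map(new PrefixMapper)`.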
Hi,
As you mentioned, my program contains a loop with infinite iterations. I
was not able to stop it with the kill -9 command. Instead I killed the entire
Flink cluster. I set up the cluster again with one master and three slaves.
Now when I try to run the code, it is showing the following exceptio