Hi Team,
Is there support for the distributed cache in StreamExecutionEnvironment? I
didn't find anything like that in StreamExecutionEnvironment. I am using Flink
1.1.1.
I found the distributed cache for ExecutionEnvironment but not for
StreamExecutionEnvironment.
If yes, can anybody tell me how to use it?
Not sure if I got your requirements right, but would this work?
KeyedStream<T, String> ks1 = ds1.keyBy("*");
KeyedStream<Tuple2<String, String>, String> ks2 =
    ds2.flatMap(/* split T into k-v pairs */).keyBy(0);
ks1.connect(ks2).flatMap(X);
X is a CoFlatMapFunction that inserts and removes elements from ks2 into a
key-value state member.
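For reference, the state handling that X would do can be sketched in plain Scala. This runs without the Flink runtime; the class name KvJoiner and the flatMap1/flatMap2 signatures are only illustrative stand-ins for the two callbacks of a CoFlatMapFunction, and a mutable Map stands in for Flink's keyed state:

```scala
import scala.collection.mutable

// Illustrative stand-in for a CoFlatMapFunction: flatMap2 maintains a
// key-value "state" from the second stream (ks2), flatMap1 looks keys
// up for elements of the first stream (ks1). Names are hypothetical.
class KvJoiner {
  // stands in for Flink's keyed value/map state
  private val state = mutable.Map.empty[String, String]

  // called for elements of the k-v stream: insert, update, or remove
  def flatMap2(key: String, value: Option[String]): Unit = value match {
    case Some(v) => state(key) = v    // insert or update
    case None    => state.remove(key) // a None value marks a removal
  }

  // called for elements of the main stream: look the key up
  def flatMap1(key: String): Option[String] = state.get(key)
}
```

In the real operator, both callbacks run for the same key partition, so the state seen by flatMap1 is exactly what flatMap2 has built for that key.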
Hi Frank,
I didn't try to run the code, but this does not show a compiler error in
IntelliJ:
> input.map( mapfunc2 _ )
Decomposing the Tuple2 into two separate arguments only works with
Scala's pattern matching technique (this is the second approach I posted).
The Java API is not capable of this.
Hi, I'm interested in helping out on this project. I also want to implement a
continuous time-boxed sliding window. My current use case is a 60-second
sliding window that moves whenever a newer event arrives, discarding any
late events that arrive outside the current window, but *also* re-triggering
Hello Fabian,
Thanks, your solution works indeed. However, I don't understand why.
When I replace the lambda with an explicit function
def mapfunc2(pair: Tuple2[BSONWritable, BSONWritable]) : String = {
return pair._1.toString
}
input.map mapfunc2
I get the error below, which seemingly indicates
Hi Fabian,
First of all, thanks for all your prompt responses. With regard to 2)
multiple look-ups, I have to clarify what I mean by that...
DS1 elementKeyStream = stream1.map(String<>); this maps each
of the streaming elements into a string value...
DS2 = stream2.xxx(); //
Hi Frank,
input should be of DataSet[(BSONWritable, BSONWritable)], so a
Tuple2[BSONWritable, BSONWritable], right?
Something like this should work:
input.map( pair => pair._1.toString )
pair is a Tuple2[BSONWritable, BSONWritable], and pair._1 accesses the key
of the pair.
Alternatively you can use Scala's pattern matching.
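To make the two approaches (and the eta-expansion question from the other mail) concrete, here is a plain-Scala sketch; a List stands in for the Flink DataSet and String for BSONWritable, purely so it runs without Flink:

```scala
// A List stands in for the Flink DataSet, String for BSONWritable.
val input = List(("k1", "v1"), ("k2", "v2"))

// 1) access the tuple field explicitly, as suggested above
val a = input.map(pair => pair._1.toString)

// 2) Scala pattern matching decomposes the Tuple2 into named parts
val b = input.map { case (key, _) => key.toString }

// 3) an explicit method, eta-expanded with `_` into a function value.
//    With plain collections, map(mapfunc2) would also compile; with
//    Flink's overloaded DataSet.map, the explicit `mapfunc2 _` form
//    was what resolved the compile error in this thread.
def mapfunc2(pair: (String, String)): String = pair._1.toString
val c = input.map(mapfunc2 _)

// all three produce List("k1", "k2")
```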
Hello,
I'm new to Flink, and I'm trying to get a MongoDB Hadoop input format
working in Scala. However, I get lost in the Scala generics system...
Could somebody help me?
The code is below; neither version works (compile error at the "map" call),
either because the method is not applicable or because
Hi all.
I'm pretty new to Apache Flink. I'd like to ask you how we can manage
running jobs. I've noticed one possibility:
using the UI running on port 8081 by default, but I'm aiming at some Java
API. My problem is that I should be able
to dynamically add and remove jobs, which are CEP rules, because this
I would assign timestamps directly at the source.
Timestamps are not stripped off by operators.
Reassigning timestamps somewhere in the middle of a job can cause very
unexpected results.
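The "timestamp monotony violated" warnings mentioned later in this thread come from exactly this situation. A plain-Scala sketch (list values stand in for event timestamps, no Flink involved): each input can be ordered on its own, yet the interleaving an operator sees after a shuffle such as keyBy generally is not.

```scala
// Each stream is internally ordered by timestamp...
val streamA = List(1L, 4L, 7L)
val streamB = List(2L, 3L, 9L)

// ...but one possible interleaving of the two, as seen by an operator
// after a shuffle such as keyBy, is not globally ordered:
val interleaved = streamA ++ streamB // 1, 4, 7, 2, 3, 9

val monotone = interleaved.sliding(2).forall { case Seq(x, y) => x <= y }
// monotone is false here: an ascending-timestamp extractor placed at
// this point would log "timestamp monotony violated" warnings
```

Assigning timestamps at the source means each extractor only ever sees one ordered input.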
2016-09-08 9:32 GMT+02:00 Dong-iL, Kim :
> Thanks for replying. pushpendra.
> The assignTimestamp method return
I want to assign timestamps after keyBy,
because the stream is not aligned before keyBy.
I've already tested with code like yours.
It produced many warnings that timestamp monotony was violated.
> On Sep 8, 2016, at 4:32 PM, Dong-iL, Kim wrote:
>
> Thanks for replying. pushpendra.
> The assignTimestamp m
Thanks for replying, Pushpendra.
The assignTimestamp method returns not a KeyedStream but a DataStream,
so I cannot use windowing.
Is it possible to cast it to a KeyedStream?
Regards
> On Sep 8, 2016, at 3:12 PM, pushpendra.jaiswal
> wrote:
>
> Please refer
> https://ci.apache.org/projects/flink/flink-d
Hi Pushpendra,
1. Queryable state is an upcoming feature and not part of an official
release yet. With queryable state you can query operator state from outside
the application.
2. Have you had a look at the CoFlatMap operator? This operator "connects"
two streams and allows you to have state which is