Brilliant, Fabian, thanks a lot! This looks exactly like what I'm after. One
thing: the DataStream API I'm using (0.9.1) does not have a keyBy() method.
Presumably this is from a newer API?
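(For what it's worth, I do see a groupBy() on DataStream in 0.9.1, so I
assume something like

    stream.groupBy(0).sum(1);

is the rough 0.9.x equivalent of keyBy(); the field indices are just
placeholders. Please correct me if that is not the right substitute.)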
On Tue, Nov 10, 2015 at 1:11 PM, Fabian Hueske wrote:
> Hi Nick,
>
> I think you can do this with Flink, quite similarly to how it is
> explained in the Samza documentation, by using a stateful
> CoFlatMapFunction …
Hi Nick,
I think you can do this with Flink, quite similarly to how it is explained
in the Samza documentation, by using a stateful CoFlatMapFunction [1], [2].
Please have a look at this snippet [3].
The code implements an updateable stream filter: the first stream is
filtered by words from the second stream.
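In case that link goes stale: a minimal sketch of the idea (not the exact
snippet [3]; the class name and String types are placeholders, and the
imports follow the current API, so package names may differ in older
releases):

    import java.util.HashSet;
    import java.util.Set;

    import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction;
    import org.apache.flink.util.Collector;

    // The first input carries the data, the second carries filter words.
    // Words seen on the second input are collected in a set; elements of
    // the first input are forwarded only if they are not in that set.
    public class UpdateableFilter
            implements CoFlatMapFunction<String, String, String> {

        // Kept in memory here for brevity; a production job should
        // register this as checkpointed state so it survives failures.
        private final Set<String> blockedWords = new HashSet<>();

        @Override
        public void flatMap1(String value, Collector<String> out) {
            // data stream: forward values that are not currently blocked
            if (!blockedWords.contains(value)) {
                out.collect(value);
            }
        }

        @Override
        public void flatMap2(String word, Collector<String> out) {
            // control stream: update the filter, emit nothing
            blockedWords.add(word);
        }
    }

You would wire it up with something like
dataStream.connect(wordStream).flatMap(new UpdateableFilter()).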
Thanks, Stephan.
I'll give those two workarounds a try!
On Tue, Nov 10, 2015 at 2:18 PM, Stephan Ewen wrote:
> Hi Cory!
>
> There is no flag to define the BlobServer port right now, but we should
> definitely add this: https://issues.apache.org/jira/browse/FLINK-2996
>
> If your setup is such that the firewall problem is only between client
> and master node …
Hi Cory!
There is no flag to define the BlobServer port right now, but we should
definitely add this: https://issues.apache.org/jira/browse/FLINK-2996
If your setup is such that the firewall problem is only between client and
master node (and the workers can reach the master on all ports), then you …
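The fix for FLINK-2996 would presumably be a configuration key; something
along these lines in flink-conf.yaml (the key name is an assumption until
the issue is actually resolved):

    # pin the BlobServer to a fixed port the firewall allows
    blob.server.port: 6124

That way only one known port would have to be open between client and
master.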
I'm also running into an issue with a non-YARN cluster. When submitting a
JAR to Flink, we need an arbitrary port open on all of the hosts, and we
don't know which one until the socket attempts to bind; a bit of a problem
for us.
Are there ways to submit a JAR to Flink that bypass the need to open this
arbitrary port?
Hello,
I'm interested in implementing a table/stream join, very similar to what is
described in the "Table-stream join" section of the Samza key-value state
documentation [0]. Conceptually, this would be an extension of the example
provided in the javadocs for RichFunction#open [1], where I have a …
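To make the setup concrete, here is roughly what I have in mind, sketched
from that javadoc example (names are placeholders; loadTable() stands in
for whatever store actually backs the table):

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.flink.api.common.functions.RichFlatMapFunction;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.util.Collector;

    // Loads a lookup table once per task in open(), then joins each
    // streamed key against it. The obvious limitation (and my question):
    // the table is read once and never updated afterwards.
    public class TableStreamJoin extends RichFlatMapFunction<String, String> {

        private transient Map<String, String> table;

        @Override
        public void open(Configuration parameters) {
            table = loadTable(); // placeholder: read the side table once
        }

        @Override
        public void flatMap(String key, Collector<String> out) {
            String value = table.get(key);
            if (value != null) {
                out.collect(key + "," + value); // emit the joined record
            }
        }

        private Map<String, String> loadTable() {
            // placeholder: a real job would read from an external store
            return new HashMap<>();
        }
    }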
Ok great! Thanks for the explanation, Fabian. I was sure that there was
something wrong with this approach :)
On Tue, Nov 10, 2015 at 3:34 PM, Fabian Hueske wrote:
> Hi Flavio,
>
> this will not work out of the box. If you extend a Flink tuple and add
> additional fields, the type will be recognized as a tuple …
Hi Flavio,
this will not work out of the box. If you extend a Flink tuple and add
additional fields, the type will be recognized as a tuple and the
TupleSerializer will be used to serialize and deserialize the record. Since
the TupleSerializer is not aware of your additional fields, it will not
serialize them.
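To illustrate with a hypothetical class (not Flavio's actual code):

    import org.apache.flink.api.java.tuple.Tuple2;

    // Flink sees a Tuple2 and picks the TupleSerializer, which only
    // knows f0 and f1. The extra field is silently dropped whenever the
    // record is serialized, e.g. shipped over the network.
    public class MyModel extends Tuple2<String, Integer> {
        public String extraInfo; // invisible to the TupleSerializer
    }

If the extra fields must survive serialization, the usual way out is a
plain POJO (public no-arg constructor plus public fields or getters and
setters) that does not extend TupleX, so that Flink's PojoSerializer
handles all fields.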
Hi all,
in my code I create my model objects by extending some TupleX in order to
perform joins etc., but now I have to add additional info to those classes.
Since those attributes are not involved in any Flink operator (only at the
end, in my UDF), I was thinking of adding them as plain fields instead of
increasing the tuple arity.