Flink's programming model and APIs are based on the concept of data flows.
From what I get by looking at the website, Ignite rather follows a grid
computing approach. I don't think that these concepts go very well
together.
I think using Ignite as a data source / sink or distributed hash table is
I was looking at this great example and I'd like to ask you which
serialization framework is the best if I have to serialize
Tuple3 with Parquet.
The syntax I like the most is the Thrift one but I can't see all the pros
and cons of using it and I'd like to hear your opinion here.
Thanks in advance
Did anyone read these:
https://cloud.google.com/dataflow/model/windowing,
https://cloud.google.com/dataflow/model/triggers ?
The semantics seem very straightforward and I'm sure the Google guys
spent some time thinking this through. :D
On Mon, Apr 20, 2015 at 3:43 PM, Stephan Ewen wrote:
> Perfe
Interesting read. Thanks for the pointer.
Take home message (in my understanding):
- they support wall-clock, attribute-ts, and count windows
-> default is attribute-ts (and not wall-clock as in Flink)
-> it is not specified whether a global order is applied to windows, but I
doubt it, because o
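To illustrate the attribute-ts point: here is a minimal sketch (plain Python, made-up window size and records; not Dataflow's or Flink's actual API) of assigning records to tumbling windows by their own timestamp attribute rather than by wall-clock arrival time:

```python
# Toy sketch of attribute-timestamp (event-time) tumbling windows.
# Each record carries its own timestamp; window membership depends on
# that attribute, not on when the record arrives.

def window_start(ts, size):
    """Align a timestamp to the start of its tumbling window."""
    return ts - (ts % size)

def assign_windows(records, size):
    """Group (timestamp, value) records by event-time window."""
    windows = {}
    for ts, value in records:
        key = window_start(ts, size)
        windows.setdefault(key, []).append(value)
    return windows

records = [(1, "a"), (12, "b"), (7, "c"), (25, "d")]
print(assign_windows(records, 10))
# {0: ['a', 'c'], 10: ['b'], 20: ['d']}
```

Note that "c" (ts=7) lands in the first window even though it arrives after "b" (ts=12) — that is exactly what a wall-clock window would not do.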
There is a simple reason for that: They don't support joins. :D
They support n-ary co-group, however. This is implemented using
tagging and a group-by-key operation. So only elements in the same
window can end up in the same co-grouped result.
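A minimal sketch of that tagging trick (plain Python with hypothetical names; not the actual Dataflow implementation): tag each input with its index, group everything by key, then split each group back by tag.

```python
# Sketch of n-ary co-group via tagging + group-by-key:
# records from the i-th input are tagged with i, all tagged records
# are grouped by key, and each group is split back into per-input lists.

def co_group(*inputs):
    """inputs: iterables of (key, value) pairs.
    Returns {key: [values_from_input_0, values_from_input_1, ...]}."""
    grouped = {}
    for tag, records in enumerate(inputs):
        for key, value in records:
            slots = grouped.setdefault(key, [[] for _ in inputs])
            slots[tag].append(value)
    return grouped

left = [("k1", 1), ("k2", 2)]
right = [("k1", "x"), ("k3", "y")]
print(co_group(left, right))
# {'k1': [[1], ['x']], 'k2': [[2], []], 'k3': [[], ['y']]}
```

Applied per window, this gives co-grouped results only for elements that fell into the same window, as described above.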
On Fri, Apr 24, 2015 at 3:51 PM, Matthias J. Sax
wrote:
Vikhyat Korrapati created FLINK-1938:
Summary: Add Grunt for building the front-end
Key: FLINK-1938
URL: https://issues.apache.org/jira/browse/FLINK-1938
Project: Flink
Issue Type: Improvement
Hey everyone,
I was following the documentation on how to create a new runtime operator
and I noticed that all the links to the classes on Github return 404.
http://ci.apache.org/projects/flink/flink-docs-master/internals/add_operator.html
Eventually, I started to check the code out directly fro
Hi!
I think this refers only to the classes in the former "compiler" project,
now renamed "optimizer". That happened during a refactoring.
Sorry about that. I'll try to get to it some time in the next few days...
Greetings,
Stephan
On Fri, Apr 24, 2015 at 5:00 PM, Andra Lungu wrote:
> Hey everyone,
>
> I
Hi
I have a small problem with a custom join that I need some help with.
Maybe I'm also approaching the problem wrong.
So basically I have two datasets.
The simplified example: The first one has a start and end value. The second
dataset is just a list of ordered numbers and some value
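If I understand the shape of the data correctly, one way to frame this is a range join: for each number in the second dataset, find the [start, end] intervals from the first dataset that contain it. A brute-force sketch of those semantics in plain Python (made-up data, not Flink's API; at scale you'd use a sort-merge or broadcast strategy):

```python
# Sketch of a custom "range join": match each number against the
# [start, end] intervals it falls into. Brute force shows the
# semantics; a real job would partition or sort the inputs.

def range_join(intervals, numbers):
    """intervals: [(start, end, payload)]; numbers: [(n, value)].
    Returns (n, value, payload) for every interval containing n."""
    result = []
    for n, value in numbers:
        for start, end, payload in intervals:
            if start <= n <= end:
                result.append((n, value, payload))
    return result

intervals = [(0, 10, "low"), (5, 20, "mid")]
numbers = [(3, "a"), (7, "b"), (15, "c")]
print(range_join(intervals, numbers))
# [(3, 'a', 'low'), (7, 'b', 'low'), (7, 'b', 'mid'), (15, 'c', 'mid')]
```

Note a number can match several overlapping intervals, so this is not a plain equi-join — which is presumably why the standard join operator doesn't fit directly.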
Hi Flavio,
in Thrift you can try:
struct FlavioTuple {
1: optional string f1;
2: optional string f2;
3: optional list<string> f3;
}
See: http://diwakergupta.github.io/thrift-missing-guide/
I like Thrift the most, because the API for Thrift in Parquet is the
easiest.
Have fun with Parquet :)
Thanks Felix,
Thanks for the response!
I'm looking forward to using it!
On Apr 24, 2015 9:01 PM, "Felix Neutatz" wrote:
> Hi Flavio,
>
> in Thrift you can try:
>
> struct FlavioTuple {
> 1: optional string f1;
> 2: optional string f2;
> 3: optional list<string> f3;
> }
>
> See: http://diwakergu