fetched back. This limit is the Akka framesize, which is 10 MB by default but can be adapted.

It would look similar to this:

ExecutionEnvironment env = ...

DataSet a = env.readFile(...);
List b = a.collect();

DataSet c;
if (b.get(0).equals(...)) {
    c = env.readFile(someFile);
} else {
    c = env.readFile(someOtherFile);
}

c.map().groupBy().reduce().writeAsFile(result);

env.execute();

Cheers, Fabian
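
For reference, a more concrete, self-contained sketch of the pattern described above, written against the DataSet API. The file paths, the first-line check, and the word-count-style aggregation are illustrative placeholders only; the frame size itself can be raised via the akka.framesize setting in flink-conf.yaml.

import java.util.List;

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;

public class ConditionalPlan {

    public static void main(String[] args) throws Exception {
        final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Small "control" data set; collect() ships its contents back to the
        // client, so the result must stay below the Akka frame size.
        DataSet<String> control = env.readTextFile("hdfs:///path/to/control");
        List<String> first = control.first(1).collect();

        // Branch on the client side and assemble the rest of the plan accordingly.
        DataSet<String> input;
        if (!first.isEmpty() && first.get(0).startsWith("A")) {
            input = env.readTextFile("hdfs:///path/to/someFile");
        } else {
            input = env.readTextFile("hdfs:///path/to/someOtherFile");
        }

        // Placeholder aggregation: count lines per key (first CSV column).
        input.map(new MapFunction<String, Tuple2<String, Integer>>() {
                @Override
                public Tuple2<String, Integer> map(String line) {
                    return new Tuple2<>(line.split(",")[0], 1);
                }
            })
            .groupBy(0)
            .sum(1)
            .writeAsCsv("hdfs:///path/to/result");

        env.execute("conditional plan");
    }
}

Note that collect() triggers an immediate execution of the first part of the program, so the job runs in two steps: one to fetch the control value, and one (started by env.execute()) for the plan that was assembled based on it.
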
> 2015-10-30 22:40 GMT+01:00 Giacomo Licari :
>
Hi guys,
I would like to ask you how I could create triggers in Flink.
I would like to perform some operations on a dataset and, according to some
conditions based on an attribute of a Pojo class or Tuple, execute some
triggers.
I mean, starting collecting other datasources' data and performing