;
//or seek(akeyprefix, akeysuffix);
for(byte[] key : iter) {}
}
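For illustration, a minimal sketch of how such a prefix scan could be
approximated with the current MapState API by iterating keys() and
filtering; the seek(prefix)-style call sketched above is the desired API,
not an existing one, and the class, descriptor, and String-key choices
below (instead of byte[]) are made up for brevity:

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.MapState;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

// Hypothetical names throughout; meant to run on a keyed stream of prefix queries.
public class PrefixScan extends RichFlatMapFunction<String, String> {

    private transient MapState<String, String> refData;

    @Override
    public void open(Configuration parameters) {
        refData = getRuntimeContext().getMapState(
                new MapStateDescriptor<>("refData", String.class, String.class));
    }

    @Override
    public void flatMap(String keyPrefix, Collector<String> out) throws Exception {
        // No seek(prefix) today: iterate all user keys of the map state and filter.
        for (String key : refData.keys()) {
            if (key.startsWith(keyPrefix)) {
                out.collect(key + "=" + refData.get(key));
            }
        }
    }
}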
Thanks for your time!
On Fri, May 19, 2017 at 10:03 AM, Sand Stone wrote:
> Thanks Gordon and Fabian.
>
> The enriching data is really reference data, e.g. the reverseIP
> database. It's
>> k state
>> for the CoMapFunction / CoFlatMapFunction. The actual input stream records
>> can just access that registered state locally.
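For illustration, a minimal sketch of the connected-streams pattern
described above, assuming the reference data arrives as a second stream
keyed the same way as the events (all class and field names here are made
up, not from this thread):

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.co.RichCoFlatMapFunction;
import org.apache.flink.util.Collector;

// Stream 1: (key, event), Stream 2: (key, referenceValue); both keyed by f0.
public class EnrichFn
        extends RichCoFlatMapFunction<Tuple2<String, String>, Tuple2<String, String>, String> {

    private transient ValueState<String> refValue;

    @Override
    public void open(Configuration parameters) {
        refValue = getRuntimeContext().getState(
                new ValueStateDescriptor<>("refValue", String.class));
    }

    @Override
    public void flatMap1(Tuple2<String, String> event, Collector<String> out) throws Exception {
        // Local state lookup for the current key; no external call per record.
        String ref = refValue.value();
        out.collect(event.f1 + " enriched with " + ref);
    }

    @Override
    public void flatMap2(Tuple2<String, String> refUpdate, Collector<String> out) throws Exception {
        // Reference-data updates simply overwrite the stored value for this key.
        refValue.update(refUpdate.f1);
    }
}

Wired up as something like
events.keyBy(0).connect(refUpdates.keyBy(0)).flatMap(new EnrichFn()),
so every lookup stays local to the operator's state.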
>>
>> Cheers,
>> Gordon
>>
>>
>> On 19 May 2017 at 7:11:07 AM, Sand Stone (sand.m.st...@gmail.com) wrote:
>>
Hi. Say I have a few reference data sets that need to be used for a
streaming job. The sizes range between 10M and 10GB. The data is not
static; it will be refreshed at minute and/or day intervals.
With the new advancements in Flink, it seems there are quite a few options.
A. Store all the data in an exte
> snapshotState as an example, where Kafka partition
> offsets are the operator state and individual offsets become list elements so
> that they can be individually redistributed.
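For illustration, a hedged sketch of that checkpointing pattern with a toy
source instead of the real Kafka consumer (which is considerably more
involved): each locally held offset is written as its own list element, so
Flink can redistribute the elements across subtasks on restore.

import java.util.ArrayList;
import java.util.List;

import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
import org.apache.flink.streaming.api.functions.source.SourceFunction;

// Toy source that checkpoints its "offsets" as individually redistributable list elements.
public class OffsetSource implements SourceFunction<Long>, CheckpointedFunction {

    private volatile boolean running = true;
    private final List<Long> localOffsets = new ArrayList<>();
    private transient ListState<Long> checkpointedOffsets;

    @Override
    public void initializeState(FunctionInitializationContext context) throws Exception {
        checkpointedOffsets = context.getOperatorStateStore()
                .getListState(new ListStateDescriptor<>("offsets", Long.class));
        if (context.isRestored()) {
            for (Long offset : checkpointedOffsets.get()) {
                localOffsets.add(offset); // this subtask receives a subset of the elements
            }
        }
    }

    @Override
    public void snapshotState(FunctionSnapshotContext context) throws Exception {
        checkpointedOffsets.clear();
        for (Long offset : localOffsets) {
            checkpointedOffsets.add(offset); // one list element per offset
        }
    }

    @Override
    public void run(SourceContext<Long> ctx) throws Exception {
        long next = 0;
        while (running) {
            synchronized (ctx.getCheckpointLock()) {
                ctx.collect(next);
                localOffsets.clear();
                localOffsets.add(next++);
            }
            Thread.sleep(100);
        }
    }

    @Override
    public void cancel() {
        running = false;
    }
}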
>
> Best,
> Stefan
>
>
>> On 26.04.2017 at 17:24, Sand Stone wrote:
>>
>> To be
> documentation page, it
> might be created in the coming weeks after the feature freeze.
>
> Regarding the MapState, I am looping in Stefan; maybe he can give you some
> advice here.
>
> Timo
>
>
>
>
> On 26/04/17 at 04:25, Sand Stone wrote:
>
>> Hi, Flink newbie here.
>>
Hi, Flink newbie here.
I played with the API (built from GitHub master) and encountered some
issues, but I am not sure whether they are limitations or actually by
design:
1. The DataStream reduce method does not take a RichReduceFunction. The
code compiles but throws a runtime exception when the job is submitted.
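Not from the thread, but for comparison: if the reason for wanting a
RichReduceFunction is access to the runtime context or managed state, a
commonly used alternative is a plain ReduceFunction, or a
RichFlatMapFunction on the keyed stream that keeps the running aggregate
in state itself. A minimal sketch of the latter, with made-up names:

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

// Rolling sum per key, kept in keyed ValueState instead of a RichReduceFunction.
public class StatefulSum extends RichFlatMapFunction<Long, Long> {

    private transient ValueState<Long> sum;

    @Override
    public void open(Configuration parameters) {
        sum = getRuntimeContext().getState(new ValueStateDescriptor<>("sum", Long.class));
    }

    @Override
    public void flatMap(Long value, Collector<Long> out) throws Exception {
        Long current = sum.value();
        long updated = (current == null ? 0L : current) + value;
        sum.update(updated);
        out.collect(updated); // emit the running aggregate, like reduce would
    }
}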