Hi Team,
I want to know how taskNumber & numTasks help in opening a DB connection in
Flink's JDBC JDBCOutputFormat open(). I checked the docs, which say:

taskNumber - The number of the parallel instance.
numTasks - The number of parallel tasks.

But I couldn't get a clear idea of the difference between a parallel
instance and parallel tasks.
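(For reference, a minimal sketch of a custom OutputFormat that illustrates
what the two parameters mean. This is not the JDBCOutputFormat source; the
H2 URL, table, and INSERT statement are made-up placeholders.)

import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import org.apache.flink.api.common.io.OutputFormat;
import org.apache.flink.configuration.Configuration;

public class LoggingJdbcOutputFormat implements OutputFormat<String> {

    private transient Connection connection;
    private transient PreparedStatement statement;

    @Override
    public void configure(Configuration parameters) {
        // nothing to configure in this sketch
    }

    @Override
    public void open(int taskNumber, int numTasks) throws IOException {
        // taskNumber = 0-based index of THIS parallel instance,
        // numTasks   = total parallelism of the sink.
        // open() runs once in each of the numTasks subtasks, so every
        // parallel instance ends up with its own DB connection.
        System.out.printf("subtask %d of %d opening connection%n",
                taskNumber, numTasks);
        try {
            connection = DriverManager.getConnection("jdbc:h2:mem:example");
            statement = connection.prepareStatement(
                    "INSERT INTO lines (value) VALUES (?)");
        } catch (SQLException e) {
            throw new IOException("Could not open JDBC connection", e);
        }
    }

    @Override
    public void writeRecord(String record) throws IOException {
        try {
            statement.setString(1, record);
            statement.executeUpdate();
        } catch (SQLException e) {
            throw new IOException("Writing record failed", e);
        }
    }

    @Override
    public void close() throws IOException {
        try {
            if (statement != null) statement.close();
            if (connection != null) connection.close();
        } catch (SQLException e) {
            throw new IOException("Closing connection failed", e);
        }
    }
}

In other words, the two arguments do not change how a single connection is
opened; they tell each parallel writer which instance it is out of how many.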
Thanks Maximilian. I implemented the same & it worked for me. I was under
the impression that a RawSchema was available from Flink.
Regards,
Swapnil
On Mon, Sep 5, 2016 at 8:48 PM, Maximilian Michels wrote:
> Just implement DeserializationSchema and return the byte array from
> Kafka. Byte array serialization is supported out of the box.
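A minimal sketch of such a pass-through schema (the class name
RawBytesSchema is made up; in the 1.1-era API the interface lives in
org.apache.flink.streaming.util.serialization, in newer versions under
org.apache.flink.api.common.serialization):

import org.apache.flink.api.common.typeinfo.PrimitiveArrayTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.streaming.util.serialization.DeserializationSchema;

public class RawBytesSchema implements DeserializationSchema<byte[]> {

    @Override
    public byte[] deserialize(byte[] message) {
        return message; // hand the raw Kafka bytes through untouched
    }

    @Override
    public boolean isEndOfStream(byte[] nextElement) {
        return false; // the Kafka stream never ends on its own
    }

    @Override
    public TypeInformation<byte[]> getProducedType() {
        return PrimitiveArrayTypeInfo.BYTE_PRIMITIVE_ARRAY_TYPE_INFO;
    }
}

An instance of this can then be passed to the FlinkKafkaConsumer
constructor in place of, e.g., a SimpleStringSchema.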
Thanks Robert. It worked for me. I used the RichFunction's open() method.
Regards,
Swapnil
On Fri, Sep 9, 2016 at 3:40 PM, Robert Metzger wrote:
> Hi Swapnil,
>
> there's no support for something like DistributedCache in the DataStream
> API.
> However, as a workaround, you can rely on the RichFunction's open() method
> to load the data before processing starts.
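A sketch of that workaround, assuming a small key,value CSV file that every
TaskManager can read (the path, file format, and class name are made up):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

public class FileEnrichMap extends RichMapFunction<String, String> {

    private final String path; // must be readable from every TaskManager
    private transient Map<String, String> lookup;

    public FileEnrichMap(String path) {
        this.path = path;
    }

    @Override
    public void open(Configuration parameters) throws Exception {
        // Runs once per parallel instance before the first map() call,
        // which is why it can stand in for the missing DistributedCache.
        lookup = new HashMap<>();
        for (String line : Files.readAllLines(Paths.get(path))) {
            String[] kv = line.split(",", 2);
            lookup.put(kv[0], kv[1]);
        }
    }

    @Override
    public String map(String key) {
        return lookup.getOrDefault(key, "unknown");
    }
}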
Hi Philipp,
the easiest way is a RichMap. In the open() method you can load the
relevant database table into memory (e.g. a HashMap). In the
map() method you then just look up the entry in the HashMap.
Of course, this only works if the dataset is small enough to fit in
memory. Is it?
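If it is, something along these lines should do; note that the JDBC URL,
credentials, and the lookup_table schema below are made-up placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;

public class DbEnrichMap extends RichMapFunction<String, Tuple2<String, String>> {

    private transient Map<String, String> table;

    @Override
    public void open(Configuration parameters) throws Exception {
        // Load the whole lookup table once per parallel instance.
        table = new HashMap<>();
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://dbhost/db", "user", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT id, label FROM lookup_table")) {
            while (rs.next()) {
                table.put(rs.getString("id"), rs.getString("label"));
            }
        }
    }

    @Override
    public Tuple2<String, String> map(String id) {
        // Pure in-memory lookup on the hot path.
        return new Tuple2<>(id, table.getOrDefault(id, "n/a"));
    }
}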
Cheers,
Kon
In the way that FLIP-2 would solve this problem, secondAggregate would ignore
the early firing updates from firstAggregate to prevent double-counting,
correct? If that's the case, I am trying to understand why we'd want to
trigger early-fires every 30 seconds for the secondAggregate if it's only
ac
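For reference, the shape of the pipeline under discussion might look like
the following hypothetical sketch (only the names firstAggregate /
secondAggregate and the 30-second interval come from the thread; the window
sizes, keys, and sum aggregation are invented):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.triggers.ContinuousProcessingTimeTrigger;

public class ChainedEarlyFires {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Tuple2<String, Long>> events =
                env.fromElements(new Tuple2<>("a", 1L), new Tuple2<>("a", 2L));

        DataStream<Tuple2<String, Long>> firstAggregate = events
                .keyBy(0)
                .timeWindow(Time.minutes(5))
                // every 30s this re-emits the RUNNING partial sum
                .trigger(ContinuousProcessingTimeTrigger.of(Time.seconds(30)))
                .sum(1);

        // Without the FLIP-2 metadata, this stage cannot tell an early
        // (partial) firing of firstAggregate from a final one, so summing
        // the partials double-counts earlier events.
        firstAggregate
                .keyBy(0)
                .timeWindow(Time.hours(1))
                .trigger(ContinuousProcessingTimeTrigger.of(Time.seconds(30)))
                .sum(1)
                .print();

        env.execute("chained early fires");
    }
}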
Hi there,
I have a data stream (coming from Kafka) that contains information which I
want to enrich with information that sits in a database before I hand over
the enriched tuple to a sink.
How would I do that?
I was thinking of somehow combining my streaming job with a JDBC input but
wasn't very successful.
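If the table is too big to cache (see the RichMap suggestion above), a
hypothetical per-record variant keeps a connection open and queries once
per element; the URL, credentials, and SQL are made-up placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;

public class JdbcLookupMap extends RichMapFunction<String, Tuple2<String, String>> {

    private transient Connection connection;
    private transient PreparedStatement query;

    @Override
    public void open(Configuration parameters) throws Exception {
        // One connection per parallel instance, opened before the first map().
        connection = DriverManager.getConnection(
                "jdbc:postgresql://dbhost/db", "user", "secret");
        query = connection.prepareStatement(
                "SELECT label FROM lookup_table WHERE id = ?");
    }

    @Override
    public Tuple2<String, String> map(String id) throws Exception {
        query.setString(1, id);
        try (ResultSet rs = query.executeQuery()) {
            return new Tuple2<>(id, rs.next() ? rs.getString(1) : "n/a");
        }
    }

    @Override
    public void close() throws Exception {
        if (query != null) query.close();
        if (connection != null) connection.close();
    }
}

The streaming job then reads: kafkaStream.map(new JdbcLookupMap()).addSink(...).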