Hi Hemant,
maybe this thread from last year could also help you:
http://mail-archives.apache.org/mod_mbox/flink-user/201903.mbox/%3c2df93e1c-ae46-47ca-9c62-0d26b2b3d...@gmail.com%3E
Someone also proposes a skeleton of the code there.
Regards,
Timo
On 04.02.20 08:10, hemant singh wrote:
Thanks, I'll check it out.
On Tue, Feb 4, 2020 at 12:30 PM 容祖儿 wrote:
you can customize a Sink function (implement SinkFunction) that's not so
hard.
regards.
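For anyone following along, here is a minimal, hedged sketch of the batching core that such a custom SinkFunction would typically wrap. This is plain Java with no Flink dependency; `BatchBuffer` and `writer` are hypothetical names, and the writer callback stands in for the DynamoDB `BatchWriteItem` call:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Collects records and hands them to a writer callback in fixed-size
// batches, mirroring DynamoDB's 25-item BatchWriteItem limit.
class BatchBuffer<T> {
    private final int maxBatchSize;
    private final Consumer<List<T>> writer; // stand-in for the DynamoDB client call
    private final List<T> buffer = new ArrayList<>();

    BatchBuffer(int maxBatchSize, Consumer<List<T>> writer) {
        this.maxBatchSize = maxBatchSize;
        this.writer = writer;
    }

    // In the real sink this would be called from SinkFunction.invoke().
    void add(T record) {
        buffer.add(record);
        if (buffer.size() >= maxBatchSize) {
            flush();
        }
    }

    // Called when the buffer fills, and from close() for the tail batch.
    void flush() {
        if (!buffer.isEmpty()) {
            writer.accept(new ArrayList<>(buffer));
            buffer.clear();
        }
    }
}
```

In an actual SinkFunction, invoke() would call add() and close() would call flush(), so the final partial batch is not lost when the job stops.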
On Tue, Feb 4, 2020 at 2:38 PM hemant singh wrote:
Hi All,
I am using DynamoDB as a sink for one of our metrics pipelines. I wanted to
understand whether there are any existing connectors. I searched and could
not find one. If none exists, has anyone implemented one? Any hints in
that direction would help a lot.
Thanks,
Hemant
Thank you Addison, this is very helpful.
On Fri, Mar 22, 2019 at 10:12 AM Addison Higham wrote:
Our implementation has quite a bit more going on just to deal with
serialization of types, but here is pretty much the core of what we do in
(pseudo) Scala:
class DynamoSink[...](...) extends RichProcessFunction[T] with ProcessingTimeCallback {
  private var curTimer: Option[Long] = None
  private ...
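The archived message is cut off at this point. As a rough, Flink-free sketch of the pattern the snippet suggests (names are hypothetical, and the writer callback stands in for the DynamoDB write): a buffer that is flushed either when it fills up or when a processing-time timer fires, which is what ProcessingTimeCallback.onProcessingTime would trigger in the real sink:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.function.Consumer;

// Flushes on size OR on a timer tick, so slow streams still see
// their records written within the flush interval.
class TimedBuffer<T> {
    private final int maxSize;
    private final long flushIntervalMs;
    private final Consumer<List<T>> writer;      // stand-in for the DynamoDB write
    private final List<T> buffer = new ArrayList<>();
    private Optional<Long> curTimer = Optional.empty(); // mirrors `curTimer` above

    TimedBuffer(int maxSize, long flushIntervalMs, Consumer<List<T>> writer) {
        this.maxSize = maxSize;
        this.flushIntervalMs = flushIntervalMs;
        this.writer = writer;
    }

    // In the real sink this is processElement(); `now` would come from
    // ctx.timerService().currentProcessingTime().
    void add(T record, long now) {
        buffer.add(record);
        if (curTimer.isEmpty()) {
            curTimer = Optional.of(now + flushIntervalMs); // register the callback
        }
        if (buffer.size() >= maxSize) {
            flush();
        }
    }

    // In the real sink this is ProcessingTimeCallback.onProcessingTime().
    void onTimer(long now) {
        if (curTimer.isPresent() && now >= curTimer.get()) {
            flush();
        }
    }

    void flush() {
        if (!buffer.isEmpty()) {
            writer.accept(new ArrayList<>(buffer));
            buffer.clear();
        }
        curTimer = Optional.empty();
    }
}
```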
Hi there,
We have implemented a Dynamo sink and have had no real issues, but obviously
it is at-least-once, so we work around that by structuring our
transformations so that they produce idempotent writes.
What we do is pretty similar to what you suggest: we collect the records in
a buffer
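To illustrate the idempotent-write workaround with a small hedged sketch (a HashMap stands in for the DynamoDB table; `IdempotentWriter` and its methods are hypothetical names): if each record maps to a deterministic key, replaying it under at-least-once delivery just overwrites the same item instead of creating a duplicate:

```java
import java.util.HashMap;
import java.util.Map;

// A put keyed deterministically by record identity is safe to retry:
// a second delivery overwrites the first item rather than adding one.
class IdempotentWriter {
    private final Map<String, String> table = new HashMap<>(); // stands in for DynamoDB

    void write(String recordId, long eventTime, String payload) {
        // Key derived only from the record itself, never from arrival order.
        String key = recordId + "#" + eventTime;
        table.put(key, payload);
    }

    int itemCount() {
        return table.size();
    }
}
```

Replaying a record after a failure leaves the table unchanged, which is why at-least-once delivery is acceptable in this design.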
Hello,
I am currently using DynamoDB as a persistent store in my application, and I am
in the process of transforming the application into a streaming pipeline using
Flink. The pipeline is simple and stateless: consume data from Kafka, apply some
transformations, and write the records to DynamoDB.
I would wan