Hi Cazhi,

Thanks for your reply! The database is DynamoDB, and the connector I use is
https://github.com/klarna-incubator/flink-connector-dynamodb. My source is
a continuous (unbounded) event stream, and my Flink version is 1.12.

Best,
Jing

On Tue, Dec 7, 2021 at 6:15 PM Caizhi Weng <tsreape...@gmail.com> wrote:

> Hi!
>
> Which database are you referring to? If there is no officially supported
> connector for this database, you can create your own sink operator
> with GenericWriteAheadSink.
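>
> A minimal sketch of such a sink (based on the Flink 1.12 API; the
> DatabaseClient here is a placeholder standing in for your database's
> client, and you would also need a CheckpointCommitter implementation):
>
> import org.apache.flink.api.common.typeutils.TypeSerializer;
> import org.apache.flink.streaming.runtime.operators.CheckpointCommitter;
> import org.apache.flink.streaming.runtime.operators.GenericWriteAheadSink;
>
> // Buffers incoming records per checkpoint and only hands them to the
> // database once the checkpoint completes.
> public class WriteAheadDatabaseSink extends GenericWriteAheadSink<String> {
>
>     private final DatabaseClient client; // placeholder for your DB client
>
>     public WriteAheadDatabaseSink(CheckpointCommitter committer,
>                                   TypeSerializer<String> serializer,
>                                   String jobId,
>                                   DatabaseClient client) throws Exception {
>         super(committer, serializer, jobId);
>         this.client = client;
>     }
>
>     @Override
>     protected boolean sendValues(Iterable<String> values,
>                                  long checkpointId,
>                                  long timestamp) throws Exception {
>         for (String value : values) {
>             client.put(value); // replace with your database's write call
>         }
>         return true; // true marks this batch as successfully committed
>     }
>
>     // Placeholder interface standing in for the real client library.
>     public interface DatabaseClient extends java.io.Serializable {
>         void put(String value);
>     }
> }
>
> Note that GenericWriteAheadSink is a stream operator rather than a
> SinkFunction, so you attach it with stream.transform(...) instead of
> addSink(...).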
>
> Note that if you're using Flink < 1.14 and your source is bounded (that
> is to say, it will eventually come to an end and finish the job), you
> might lose the last bit of results. See [1] for details.
>
> [1] https://lists.apache.org/thread/qffl2pvnng9kkd51z5xp65x7ssnnm526
>
> Jing Lu <ajin...@gmail.com> wrote on Wed, Dec 8, 2021 at 05:51:
>
>> Hi, community
>>
>> I have a Kafka stream and want to use Flink for 10-minute aggregations.
>> However, the number of events is large, and writes to the output database
>> get throttled for a few seconds each hour. I was thinking of writing from
>> Flink to another Kafka stream and using another Flink app to write to the
>> database. Would this smooth out the writes? What should I do in the second
>> Flink app?
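>>
>> Roughly, I imagine the second app would look something like this (just a
>> sketch; the broker, topic name, and sink are placeholders):
>>
>> import org.apache.flink.api.common.serialization.SimpleStringSchema;
>> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
>> import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
>> import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
>>
>> import java.util.Properties;
>>
>> public class KafkaToDatabaseJob {
>>     public static void main(String[] args) throws Exception {
>>         StreamExecutionEnvironment env =
>>                 StreamExecutionEnvironment.getExecutionEnvironment();
>>
>>         Properties props = new Properties();
>>         props.setProperty("bootstrap.servers", "broker:9092"); // placeholder
>>         props.setProperty("group.id", "db-writer");            // placeholder
>>
>>         // Consume the pre-aggregated results from the intermediate topic.
>>         env.addSource(new FlinkKafkaConsumer<>(
>>                         "aggregated-results", // placeholder topic name
>>                         new SimpleStringSchema(),
>>                         props))
>>                 .addSink(new DatabaseSink());
>>
>>         env.execute("kafka-to-database");
>>     }
>>
>>     // Placeholder sink: this is where batching or rate limiting would
>>     // go to smooth out the writes to the database.
>>     public static class DatabaseSink extends RichSinkFunction<String> {
>>         @Override
>>         public void invoke(String value, Context context) {
>>             System.out.println(value); // replace with the real write
>>         }
>>     }
>> }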
>>
>>
>> Thanks,
>> Jing
>>
>
