Ah, I think now I get your problem. You could implement batching manually 
inside your SinkFunction: buffer values in memory and periodically (based on 
a count threshold or a timeout) send them to MySQL as a single batch. To 
ensure that no data is lost, also implement the CheckpointedFunction 
interface and make sure to flush the buffer to MySQL whenever a snapshot is 
taken.
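
To illustrate, here is a rough sketch of that batching logic in plain Java, without the Flink dependencies. In a real job, the class would implement SinkFunction and CheckpointedFunction, invoke() would call add(), and snapshotState() would call flush(); the class name, thresholds, and the stubbed-out MySQL write are all illustrative, not Flink API.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only: in a real Flink sink this logic lives inside a
// SinkFunction, with CheckpointedFunction.snapshotState() calling
// flush() so a checkpoint never covers rows that are still buffered.
public class BatchingSinkSketch {
    private final List<String> buffer = new ArrayList<>();
    private final int maxBatchSize;
    private final long maxDelayMillis;
    private long lastFlushMillis = System.currentTimeMillis();
    private int flushCount = 0; // batches sent so far (for inspection)

    public BatchingSinkSketch(int maxBatchSize, long maxDelayMillis) {
        this.maxBatchSize = maxBatchSize;
        this.maxDelayMillis = maxDelayMillis;
    }

    // Would be called from SinkFunction.invoke(value, context).
    public void add(String value) {
        buffer.add(value);
        if (buffer.size() >= maxBatchSize
                || System.currentTimeMillis() - lastFlushMillis >= maxDelayMillis) {
            flush();
        }
    }

    // Would also be called from CheckpointedFunction.snapshotState().
    public void flush() {
        if (buffer.isEmpty()) {
            return;
        }
        // Here you would send one multi-row INSERT to MySQL, e.g. via
        // JDBC addBatch()/executeBatch(). Stubbed out in this sketch.
        flushCount++;
        buffer.clear();
        lastFlushMillis = System.currentTimeMillis();
    }

    public int getFlushCount() { return flushCount; }
    public int getBufferedCount() { return buffer.size(); }
}
```

Note the timeout here is only checked when a new element arrives; in a real operator you would register a processing-time timer to flush idle buffers as well.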

Does that help?

> On 8. Jun 2017, at 11:46, Nico Kruber <n...@data-artisans.com> wrote:
> 
> How about using asynchronous I/O operations?
> 
> https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/stream/
> asyncio.html
> 
> 
> Nico
> 
> On Tuesday, 6 June 2017 16:22:31 CEST rhashmi wrote:
>> because of parallelism i am seeing db contention. Wondering if i can merge
>> sink of multiple windows and insert in batch.
>> 
>> 
>> 
>> --
>> View this message in context:
>> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Use-Single-Sink-For-All-windows-tp13475p13525.html
>> Sent from the Apache Flink User Mailing List archive at Nabble.com.
> 
