Is there any way to limit the number of events passed in a single call to the
put(Collection<SinkRecord>) method?
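
I wondered whether something like the consumer's max.poll.records setting is
the intended knob here — I'm assuming (not sure) that the sink task's consumer
picks up "consumer."-prefixed overrides from the worker config, along the lines
of:

    # worker properties - assumption on my part that the sink consumer honors this
    consumer.max.poll.records=500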

I'm writing a set of events to Kafka via a source Connector/Task and reading 
these from a sink Connector/Task.
If I generate on the order of 10k events, the number of SinkRecords passed to 
the put method starts off very low but quickly rises in large increments, such 
that around 9k events are passed in a single later invocation of put.

Furthermore, processing a large number of events in a single call (I'm writing 
to Elasticsearch) appears to cause the source task's poll() method to time out, 
raising a CommitFailedException, which, incidentally, I can't see how to catch.
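
As a workaround I've been experimenting with splitting the incoming collection
into bounded sub-batches inside put() before issuing Elasticsearch bulk
requests. A minimal sketch of the splitting helper (the class name and batch
size are my own, not part of the Connect API):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSplit {
    // Split a large batch into sub-batches of at most maxBatch records,
    // so each downstream bulk write stays bounded. Inside put(), I would
    // iterate the returned sub-batches and flush each one separately.
    public static <T> List<List<T>> partition(List<T> records, int maxBatch) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < records.size(); i += maxBatch) {
            batches.add(records.subList(i, Math.min(i + maxBatch, records.size())));
        }
        return batches;
    }
}
```

This bounds each Elasticsearch request, but it doesn't shorten the overall
put() call, so I suspect it doesn't address the timeout itself.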

Thanks for any help you can provide,
David
