Hi Shikhar - many thanks - that works a treat :)

-----Original Message-----
From: Shikhar Bhushan [mailto:shik...@confluent.io] 
Sent: 06 January 2017 17:39
To: dev@kafka.apache.org
Subject: Re: KafkaConnect SinkTask::put

Sorry, I forgot to specify: this needs to go into your Connect worker
configuration.
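
For reference, a minimal sketch of what that might look like in the worker
properties file (e.g. connect-standalone.properties; the file name and the
other settings shown here are just illustrative, not your actual values):

    bootstrap.servers=localhost:9092
    key.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    # Any consumer.-prefixed property is passed through to the consumer the
    # sink tasks use; this caps how many records each poll can hand to put().
    consumer.max.poll.records=500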
On Fri, Jan 6, 2017 at 02:57 <david.frank...@bt.com> wrote:

> Hi Shikhar,
>
> I've just added this to ~config/consumer.properties in the Kafka 
> folder but it doesn't appear to have made any difference.  Have I put 
> it in the wrong place?
>
> Thanks again,
> David
>
> -----Original Message-----
> From: Shikhar Bhushan [mailto:shik...@confluent.io]
> Sent: 05 January 2017 18:12
> To: dev@kafka.apache.org
> Subject: Re: KafkaConnect SinkTask::put
>
> Hi David,
>
> You can override the underlying consumer's `max.poll.records` setting 
> for this. E.g.
>     consumer.max.poll.records=500
>
> Best,
>
> Shikhar
>
> On Thu, Jan 5, 2017 at 3:59 AM <david.frank...@bt.com> wrote:
>
> > Is there any way of limiting the number of events that are passed to 
> > the put(Collection<SinkRecord>) method in a single call?
> >
> > I'm writing a set of events to Kafka via a source Connector/Task and 
> > reading these from a sink Connector/Task.
> > If I generate on the order of 10k events, the number of SinkRecords 
> > passed to the put method starts off very low but quickly rises in 
> > large increments, such that 9k events are passed to a later 
> > invocation of the put method.
> >
> > Furthermore, processing a large number of events in a single call 
> > (I'm writing to Elasticsearch) appears to cause the source task's 
> > poll() method to time out, raising a CommitFailedException which, 
> > incidentally, I can't see how to catch.
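> >
> > In case it helps frame the question, here is a rough sketch (not my 
> > actual code) of how I imagine bounding the work done per put() call by 
> > chunking the records; writeBatch() is a hypothetical stand-in for the 
> > Elasticsearch bulk write:
> >
> >     import java.util.ArrayList;
> >     import java.util.Collection;
> >     import java.util.List;
> >
> >     import org.apache.kafka.connect.sink.SinkRecord;
> >     import org.apache.kafka.connect.sink.SinkTask;
> >
> >     // Sketch: split each put() batch into bounded chunks so a single
> >     // call never turns into one very large bulk request.
> >     public abstract class ChunkingSinkTask extends SinkTask {
> >
> >         private static final int MAX_BATCH = 500; // illustrative limit
> >
> >         @Override
> >         public void put(Collection<SinkRecord> records) {
> >             List<SinkRecord> batch = new ArrayList<>(MAX_BATCH);
> >             for (SinkRecord record : records) {
> >                 batch.add(record);
> >                 if (batch.size() >= MAX_BATCH) {
> >                     writeBatch(batch); // flush one bounded chunk
> >                     batch.clear();
> >                 }
> >             }
> >             if (!batch.isEmpty()) {
> >                 writeBatch(batch);
> >             }
> >         }
> >
> >         // Hypothetical helper; a real task would issue the
> >         // Elasticsearch bulk request here.
> >         protected abstract void writeBatch(List<SinkRecord> batch);
> >     }
> >
> > With that shape the worst case per bulk request is MAX_BATCH records, 
> > however large a collection the framework hands in, but I'd still 
> > prefer to limit the batch size at the source.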
> >
> > Thanks for any help you can provide, David
> >
>
