Yeah, looks like it's an issue with the plugin. I don't have any experience
with it, sorry.
On Tue, 6 Apr 2021, 12:32 am Himanshu Shukla wrote:
> bootstrap.servers=b-1:9092,b-2:9092
> group.id=connect-cluster
> key.converter=org.apache.kafka.connect.json.JsonConverter
> value.converter=org.apache.kafka.connect.json.JsonConverter
bootstrap.servers=b-1:9092,b-2:9092
group.id=connect-cluster
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.topic=connect-offsets-2
offset.stor
Hi Himanshu,
Have you adjusted your consumer properties as the error message suggested?
Alternatively, reduce your consumer.max.poll.records in the worker
config.
Basically, the sink you're using is spending too much time processing in
the poll loop, so either tweak the properties as mentioned in the error, or
lower the number of records fetched per poll.
Did anyone face this before? The connector's REST URL is returning a 500 request timeout.
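For context, this is the sort of call that times out (the host, port and
connector name below are placeholders for my setup):

curl http://connect-host:8083/connectors/filepulse-connector/status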
On Thu, Apr 1, 2021 at 9:55 AM Himanshu Shukla wrote:
> Hi,
> I am using the kafka-connect-file-pulse connector and scanning around 20K
> files. After the scan step, the whole connect cluster is becoming
> unresponsive. I c