First, a record size of ~150 MB is very large, and I am fairly sure your timeout
exceptions come from records that size. There are settings in the producer config
(max.request.size) and the broker config (message.max.bytes, or max.message.bytes
per topic) that cap the maximum message size in bytes, but even with those raised,
150 MB records are likely to cause problems as volume grows. You should look at
how you can reduce your message size.
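For reference, here is a minimal sketch of where the producer-side limit lives. This is not your exact setup: the broker address, serializers, and the numbers are placeholders to adjust.

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig}
import org.apache.kafka.common.serialization.ByteArraySerializer

val props = new Properties()
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")            // assumed local broker
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[ByteArraySerializer].getName)
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[ByteArraySerializer].getName)
props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, (200 * 1024 * 1024).toString) // illustrative max.request.size
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, (256 * 1024 * 1024).toString)    // default 32 MB is smaller than one 150 MB record
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "120000")                   // give large requests more time

val producer = new KafkaProducer[Array[Byte], Array[Byte]](props)

The broker has corresponding limits (message.max.bytes broker-wide, max.message.bytes per topic, and replica.fetch.max.bytes for replication) that all need to allow the record through.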

The Kafka producer is thread-safe, and according to the documentation you will
generally get the best performance by sharing one producer across multiple
threads. Don't instantiate a new Kafka producer for each of your threads.
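A rough sketch of what sharing looks like, with a single producer behind a small thread pool (the topic name, payloads, pool size, and props are placeholders, same idea as above):

import java.util.Properties
import java.util.concurrent.Executors
import org.apache.kafka.clients.producer.{Callback, KafkaProducer, ProducerRecord, RecordMetadata}
import org.apache.kafka.common.serialization.ByteArraySerializer

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")
props.put("key.serializer", classOf[ByteArraySerializer].getName)
props.put("value.serializer", classOf[ByteArraySerializer].getName)

// One producer instance, reused by every thread.
val producer = new KafkaProducer[Array[Byte], Array[Byte]](props)
val pool = Executors.newFixedThreadPool(4)

def sendAsync(payload: Array[Byte]): Unit = {
  pool.submit(new Runnable {
    override def run(): Unit = {
      val record = new ProducerRecord[Array[Byte], Array[Byte]]("my-topic", payload)
      // Callback instead of blocking on send(...).get() for every record.
      producer.send(record, new Callback {
        override def onCompletion(metadata: RecordMetadata, exception: Exception): Unit =
          if (exception != null) exception.printStackTrace()
      })
    }
  })
}

Blocking on .get() after every send also serializes your requests, so the callback form usually helps throughput as well.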

On Fri, Aug 17, 2018 at 9:26 PM Pulkit Manchanda <pulkit....@gmail.com>
wrote:

> Hi All,
>
> I am sending multiple records to the same topic.
> I have two approaches:
> 1) Sharing the producer with all the threads
> 2) Creating a new producer for every thread.
>
> I am sending records of size ~150 MB across multiple requests.
> I am running the cluster and the app on my local machine with 3 brokers and
> max.request.size set to 1 GB.
>
> While sending the records using the following code with approach 2),
> creating a new producer, I am getting the NetworkException,
> and when I use approach 1), sharing the producer, I get the same network
> exception and sometimes a Timeout too.
> I looked on Google and StackOverflow but didn't find any solution to the
> NetworkException.
>
> val metadata = producer.send(record).get()
>
>
> java.util.concurrent.ExecutionException:
> org.apache.kafka.common.errors.NetworkException: The server disconnected
> before a response was received.
> at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94)
> at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:64)
> at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)
> at service.KafkaService.sendRecordToKafka(KafkaService.scala:65)
>
>
> Any help will be appreciated.
>
> Thanks
> Pulkit
>
