Thanks Shantanu for your response.
The size is a business requirement, which might also increase later, and I am
using max.request.size of 1 GB.
I will try compressing the data and see how it performs.
Also, sharing the producer blocks the other threads because the data is big,
and it leads to resource contention.
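To show what I mean by trying compression, here is a minimal sketch of the producer properties using plain JDK `Properties` (the broker address and the `acks` value are illustrative assumptions; the keys are the real Kafka producer config names):

```java
import java.util.Properties;

public class ProducerConfigSketch {
    static Properties buildProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker address
        props.put("compression.type", "lz4");             // also valid: gzip, snappy, zstd
        props.put("max.request.size", String.valueOf(1_073_741_824L)); // 1 GB, as above
        props.put("acks", "all");                         // illustrative durability choice
        return props;
    }

    public static void main(String[] args) {
        System.out.println(buildProps().getProperty("compression.type"));
    }
}
```

These properties would then be passed to the `KafkaProducer` constructor as usual.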
Firstly, a record size of 150 MB is too big. I am quite sure your timeout
exceptions are due to such large records. There are settings in the producer
and broker configs (max.request.size on the producer, message.max.bytes on
the broker) which allow you to specify the max message size in bytes.
But records of 150 MB each might still lead to problems as volume increases.
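As a sketch, the limits that all have to admit a record look roughly like this (the constant values here are illustrative; the config names in the comments are the actual Kafka settings that need to be raised together):

```java
public class SizeLimits {
    static final long PRODUCER_MAX_REQUEST_SIZE = 1_073_741_824L; // producer: max.request.size
    static final long BROKER_MESSAGE_MAX_BYTES  = 1_073_741_824L; // broker: message.max.bytes
    static final long TOPIC_MAX_MESSAGE_BYTES   = 1_073_741_824L; // topic override: max.message.bytes

    // A record is only accepted if every limit on the path admits it.
    static boolean fits(long recordBytes) {
        return recordBytes <= PRODUCER_MAX_REQUEST_SIZE
            && recordBytes <= BROKER_MESSAGE_MAX_BYTES
            && recordBytes <= TOPIC_MAX_MESSAGE_BYTES;
    }

    public static void main(String[] args) {
        System.out.println(fits(150L * 1024 * 1024)); // ~150 MB record
    }
}
```

Note that raising these only moves the ceiling; it does not address the memory and timeout pressure that such large records put on the producer and broker.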
Hi All,
I am sending multiple records to the same topic.
I have two approaches:
1) Sharing the producer with all the threads.
2) Creating a new producer for every thread.
I am sending records of size ~150 MB over multiple requests.
I am running the cluster and the app on my local machine with 3 brokers.
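To make approach 1 concrete, this is roughly what I mean by sharing one producer across threads. The producer here is a stand-in interface (a hypothetical name, since the real client needs a running broker); the actual KafkaProducer is documented as thread-safe, so a single instance can be shared this way:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SharedProducerSketch {
    // Stand-in for KafkaProducer.send(); in real code this would be
    // kafkaProducer.send(new ProducerRecord<>(topic, value)).
    interface Producer { void send(String topic, byte[] value); }

    static int sendFromThreads(Producer shared, int threads, int recordsPerThread)
            throws InterruptedException {
        AtomicInteger sent = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < recordsPerThread; i++) {
                    shared.send("my-topic", new byte[16]); // placeholder payload
                    sent.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return sent.get();
    }

    public static void main(String[] args) throws InterruptedException {
        Producer shared = (topic, value) -> { /* no-op stand-in */ };
        System.out.println(sendFromThreads(shared, 4, 25)); // prints 100
    }
}
```

Approach 2 would instead construct a new producer inside each task, which multiplies connections and buffer memory per thread.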