Hi All,

I'm new to Kafka and in the process of writing a producer. Just to give you
some context: my producer reads a binary file, decodes it according to a
predefined structure (message length followed by the message body), and
publishes each decoded message to a topic based on its type. For instance,
suppose there are 7 different message types in the file, say message1,
message2, ..., message7; I have created one topic for each message type.
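
Roughly, the read loop looks like the sketch below (simplified: I'm assuming
a 4-byte big-endian length prefix here, and messageType()/publish() are just
placeholders for my decoding and send logic):

/******Read Loop Sketch*******/

// uses java.io.DataInputStream / BufferedInputStream / FileInputStream / EOFException
try (DataInputStream in = new DataInputStream(
        new BufferedInputStream(new FileInputStream(binaryFilePath)))) {
    while (true) {
        int length;
        try {
            length = in.readInt();      // length prefix (4 bytes, big-endian)
        } catch (EOFException eof) {
            break;                      // clean end of file
        }
        byte[] payload = new byte[length];
        in.readFully(payload);          // the message body

        // messageType() stands in for my decoding logic; it maps the
        // payload to one of "message1" ... "message7".
        String topic = messageType(payload);
        publish(topic, payload);        // hands the message to the producer (see below)
    }
}

/********ends**************/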

I'm using only one producer object to send messages to all of these topics.
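
For illustration, the producer setup is along these lines (0.8-style producer
API, trimmed down; the broker addresses are placeholders, and publish() is the
helper called from the read loop above):

/******Single Producer Sketch*******/

// uses kafka.javaapi.producer.Producer, kafka.producer.ProducerConfig,
// kafka.producer.KeyedMessage, java.util.Properties

Properties props = new Properties();
props.put("metadata.broker.list", "broker1:9092,broker2:9092,broker3:9092"); // placeholder hosts
props.put("serializer.class", "kafka.serializer.DefaultEncoder");            // payloads stay byte[]

// One producer instance shared by all seven topics.
Producer<byte[], byte[]> producer = new Producer<byte[], byte[]>(new ProducerConfig(props));

// Called from the read loop; the topic name ("message1" ... "message7")
// is what routes the message.
void publish(String topic, byte[] payload) {
    producer.send(new KeyedMessage<byte[], byte[]>(topic, payload));
}

/********ends**************/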

I also created a custom partitioner class, which uses a round-robin technique
to select the partition based on the number of partitions available. Below is
the partitioner code:

/******Partition Code*******/

// requires: import java.util.concurrent.atomic.AtomicInteger;
private final AtomicInteger counter = new AtomicInteger(0);

public int partition(Object key, int a_numPartitions) {
    // Round-robin: pick the next partition in sequence.
    int partitionId = counter.incrementAndGet() % a_numPartitions;
    // Periodically reset so the counter never overflows into negative values
    // (the reset isn't atomic with the increment, but the occasional repeated
    // partition is harmless).
    if (counter.get() > 65536) {
        counter.set(0);
    }
    return partitionId;
}

/********ends**************/
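
For completeness, the partitioner is plugged in through the producer config
via the partitioner.class property (the class name below is just a placeholder
for wherever the class actually lives):

/******Partitioner Registration*******/

props.put("partitioner.class", "com.example.RoundRobinPartitioner"); // placeholder package/name

/********ends**************/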

Right now I'm working with three brokers and a replication factor of 2.
I'm getting a throughput of about 4k messages per second, and each message is
around 86 bytes.

My question to you is: how can I increase throughput on the producer end? Do
I need to create multiple producer objects? Will multithreading help? If so,
what's the best way to do it? What else would you suggest to improve
performance?

Any help would be highly appreciated.


Thanks,
Gaurav Sharma
