The metrics are exposed as MBeans in the JVM the producer is running
within. The long string I gave you is the relevant MBean object name
pattern; record-send-rate at the end is the attribute to read. You can
connect to the JVM using JConsole to view the MBeans, and there are also
multiple libraries that will scrape a JVM via JMX to extract values from
MBeans.
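If it helps to see the same lookup in code, here's a minimal Java sketch of
querying an MBean server. It runs against the current JVM's own platform
MBeanServer, so the producer pattern matches nothing in this standalone demo,
but inside a producer's JVM the same query would return the real metric MBeans.
The object-name pattern is an assumption based on the string discussed in this
thread:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MBeanQuery {
    public static void main(String[] args) throws Exception {
        // The platform MBeanServer of the current JVM. Inside a producer's
        // JVM, Kafka registers its metrics here; JConsole reads the same
        // server remotely.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // Wildcard form of the per-topic producer metric name (assumed
        // pattern). In this standalone demo no producer is running, so the
        // match count is 0; in a producer JVM you would iterate the matches
        // and read the "record-send-rate" attribute from each.
        ObjectName pattern = new ObjectName(
            "kafka.producer:type=producer-topic-metrics,client-id=*,topic=*");
        System.out.println("producerMBeans=" + server.queryNames(pattern, null).size());

        // Built-in MBeans are always present, e.g. the JVM's memory MBean:
        ObjectName mem = new ObjectName("java.lang:type=Memory");
        System.out.println("heapAttr=" + (server.getAttribute(mem, "HeapMemoryUsage") != null));
    }
}
```

To let JConsole (or a programmatic JMXConnectorFactory client) attach from
another machine, the producer JVM typically needs the standard remote flags,
e.g. -Dcom.sun.management.jmxremote.port=9999 plus
-Dcom.sun.management.jmxremote.authenticate=false and
-Dcom.sun.management.jmxremote.ssl=false (test settings only; don't disable
auth in production).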

If you're not familiar with JMX or JConsole, there's plenty of great
documentation on the Internet; have a Google :)
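Once you have measured values, the partition formula quoted further down the
thread is simple arithmetic. A hedged sketch, where t comes from this thread
(30k events/s) but p and c are made-up placeholder measurements you would
replace with your own numbers:

```java
public class PartitionSizing {
    public static void main(String[] args) {
        double t = 30_000; // target throughput from this thread, events/s
        double p = 5_000;  // HYPOTHETICAL measured single-partition produce rate
        double c = 3_000;  // HYPOTHETICAL measured single-partition consume rate

        // Jun Rao's rule of thumb: at least max(t/p, t/c) partitions.
        long partitions = (long) Math.ceil(Math.max(t / p, t / c));
        System.out.println("partitions=" + partitions);
    }
}
```

With these placeholder numbers, max(30000/5000, 30000/3000) = max(6, 10),
so the consumer side dominates and you would need at least 10 partitions.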

On Thu, 20 Feb. 2020, 11:51 pm Sunil CHAUDHARI,
<sunilchaudh...@dbs.com.invalid> wrote:

> Hi Liam Clarke,
> Sorry, but this is a bit unclear for me.
> Can you please elaborate on your answer? I am a beginner to Kafka.
> " Producers emit metrics via JMX ":
>         - How to enable this? I have kafka-Manager. Can I make use of
> kafka-manager? How?
> “kafka.producer:type=producer-topic-metrics,client-id=(.+),topic=(.+) record-send-rate”
> please help to explain this.
>
> Regards,
> Sunil.
>
> -----Original Message-----
> From: Liam Clarke <liam.cla...@adscale.co.nz>
> Sent: Thursday, February 20, 2020 11:16 AM
> To: users@kafka.apache.org
> Subject: [External] Re: Urgent help please! How to measure producer and
> consumer throughput for a single partition?
>
>
> Hi Sunil,
>
> Producers emit metrics via JMX that will help you. Assuming that your
> producers are using a round-robin partitioning strategy, you could
> divide this metric by your number of partitions:
>
>
> kafka.producer:type=producer-topic-metrics,client-id=(.+),topic=(.+)
> (the attribute to read is record-send-rate)
>
> Kind regards,
>
> Liam Clarke
>
> On Thu, 20 Feb. 2020, 5:57 pm Sunil CHAUDHARI,
> <sunilchaudh...@dbs.com.invalid> wrote:
>
> > Hi
> > I was referring to the article by Mr. Jun Rao about partitions in a
> > Kafka cluster:
> > https://www.confluent.io/blog/how-choose-number-topics-partitions-kafka-cluster/
> >
> > "A rough formula for picking the number of partitions is based on
> > throughput. You measure the throughput that you can achieve on a
> > single partition for production (call it p) and consumption (call it
> > c). Let's say your target throughput is t. Then you need to have at
> > least max(t/p, t/c) partitions."
> >
> > I have the data pipeline as below.
> >
> > Filebeat-->Kafka-->Logstash-->Elasticsearch
> > There are many Filebeat agents sending data to Kafka. I want to
> > understand how I can measure the events per second getting written
> > to Kafka. This will give me 'p' in the above formula.
> > I can measure the consumer throughput by monitoring Logstash
> > pipelines in Kibana, so that gives me 'c' in the above formula.
> >
> > I know the target throughput in my cluster, that is 't': 30k events/s.
> >
> > Please let me know if I am going wrong somewhere.
> >
> > Regards,
> > Sunil.
> > CONFIDENTIAL NOTE:
> > The information contained in this email is intended only for the use
> > of the individual or entity named above and may contain information
> > that is privileged, confidential and exempt from disclosure under
> applicable law.
> > If the reader of this message is not the intended recipient, you are
> > hereby notified that any dissemination, distribution or copying of
> > this communication is strictly prohibited. If you have received this
> > message in error, please immediately notify the sender and delete the
> mail. Thank you.
> >
>
>
>
