Hi all,
I wonder whether the limitations mentioned in [1] regarding Kafka's scalability
in the number of topics are still valid. For example, did the recent design
changes around the use of ZooKeeper versus an internal membership
protocol affect scalability, one way or the other?
Also, it seems
The SimpleBenchmark included in Apache Kafka [1] is, as Eno mentioned,
quite rudimentary. It does some simple benchmark runs of Kafka's
standard/native producer and consumer clients, and then some Kafka Streams-specific
ones. So you can compare somewhat between the native clients and
Kafka Streams.
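The measurement idea underlying such benchmarks is simple: stamp each record with its send time, recover that stamp on the consuming side, and derive average latency and throughput from the difference. A minimal sketch of that pattern follows; note it uses an in-memory queue as a stand-in for the broker so it runs without a cluster, so the numbers it prints say nothing about a real Kafka deployment.

```java
import java.util.ArrayDeque;
import java.util.Queue;

/**
 * Sketch of the timestamp-in-payload measurement pattern:
 * stamp each record at "produce" time, subtract at "consume" time.
 * An in-memory queue stands in for the broker; real latency and
 * throughput figures require a real Kafka setup.
 */
public class LatencyThroughputSketch {

    /** Returns {recordsPerSecond, avgLatencyMicros} for one produce/consume pass. */
    public static double[] measure(int records) {
        Queue<long[]> broker = new ArrayDeque<>();
        long start = System.nanoTime();
        for (long i = 0; i < records; i++) {
            // "produce": the payload carries its own send timestamp
            broker.add(new long[] {System.nanoTime(), i});
        }
        long totalLatencyNanos = 0;
        long[] record;
        while ((record = broker.poll()) != null) {
            // "consume": per-record latency = now - send timestamp
            totalLatencyNanos += System.nanoTime() - record[0];
        }
        double elapsedSec = (System.nanoTime() - start) / 1e9;
        return new double[] {records / elapsedSec, totalLatencyNanos / 1e3 / records};
    }

    public static void main(String[] args) {
        double[] r = measure(100_000);
        System.out.printf("throughput=%.0f records/s, avg latency=%.2f us%n", r[0], r[1]);
    }
}
```

The same stamp-and-subtract idea applies with a real broker, e.g. by putting the send time in the record value or headers and computing the difference in the consumer.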
Hm, cool. Thanks Gwen and Guozhang.
Loose coupling (especially with regard to the number of instances running),
batch inserts, and exactly-once are very convincing. Dynamic schema is
interesting / scary, but I'd need a dynamic app on the other side, which I
don't have. :-)
I'll plod along with
Hi Adrienne,
Also, you might want to have a look at the SimpleBenchmark.java file included
with Kafka Streams (org.apache.kafka.streams.perf.SimpleBenchmark). It does
some simple measurements of consumer, producer, and Kafka Streams throughput.
Thanks
Eno
> On 22 Jul 2016, at 07:21, David Garci
You should probably just put reporting in your app: Dropwizard, logs, etc. You
can also look at Kafka's JMX consumer metrics (assuming you don't have too many
consumers).
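The JMX route David mentions can be sketched as a small poller against an MBean server. The demo below queries the local platform MBean server; pointed at a JVM running a Kafka consumer, the same query pattern would match the client's metric MBeans (e.g. names under the `kafka.consumer` domain; the exact MBean names vary by client version, so verify them with jconsole first).

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

/**
 * Sketch of polling metrics over JMX. Demonstrated against the local
 * platform MBean server; against a JVM running a Kafka consumer, a
 * pattern like "kafka.consumer:*" would match the client's metric
 * MBeans (names assumed -- check with jconsole for your client version).
 */
public class JmxMetricsPoller {

    /** Returns all MBean names matching the given ObjectName pattern. */
    public static Set<ObjectName> query(String pattern) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        return server.queryNames(new ObjectName(pattern), null);
    }

    public static void main(String[] args) throws Exception {
        // Empty unless a Kafka client is running in this JVM:
        System.out.println(query("kafka.consumer:*"));
        // Sanity check that JMX querying works at all:
        System.out.println(query("java.lang:type=Runtime"));
    }
}
```

For a remote process you would attach via a JMXConnector to the process's JMX port instead of the platform server; the query itself is the same.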
-David
On 7/22/16, 9:13 AM, "Adrienne Kole" wrote:
Hi,
How can I measure the latency and throughput in Kafka Streams?
Hi,
How can I measure the latency and throughput in Kafka Streams?
Cheers
Adrienne
Shekar,
you mentioned:
> The API should give different status at each part of the pipeline.
> At the ingestion, the API responds with "submitted"
> During the progression, the API returns "in progress"
> After successful completion, the API returns "Success"
May I ask what your motivation is to
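For reference, the per-stage status reporting quoted above could be sketched as a small tracker; everything here (class and method names, the enum values taken from the quoted states) is hypothetical and only illustrates the shape of such an API, not Shekar's actual design.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical sketch of the quoted status API: each job moves through
 * submitted -> in progress -> success, and callers can query the
 * current state at any point in the pipeline.
 */
public class PipelineStatusTracker {

    public enum Status { SUBMITTED, IN_PROGRESS, SUCCESS }

    private final Map<String, Status> statuses = new ConcurrentHashMap<>();

    /** At ingestion, the API responds with "submitted". */
    public void onIngest(String jobId)   { statuses.put(jobId, Status.SUBMITTED); }

    /** During progression, the API returns "in progress". */
    public void onProgress(String jobId) { statuses.put(jobId, Status.IN_PROGRESS); }

    /** After successful completion, the API returns "Success". */
    public void onComplete(String jobId) { statuses.put(jobId, Status.SUCCESS); }

    /** Current state of a job, or null if unknown. */
    public Status status(String jobId)   { return statuses.get(jobId); }
}
```

In a distributed pipeline the map would typically be replaced by a durable store (or a compacted Kafka topic keyed by job ID), since an in-memory map is lost on restart.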