I think these would definitely be useful statistics to have and I've tried
to do similar tests! The biggest difference is probably going to be the
hardware specs on whatever cluster you decide to run it on. Maybe
benchmarks performed on different AWS servers would be helpful too, but I'd
like to se
I would be using the servers available at my place of work. I don't have
access to AWS servers. I would be starting off with a small number of nodes
in the cluster and then plot a graph with the x-axis as the number of
servers in the cluster and the y-axis as the number of topics with
partitions, before the
cl
@Jiefu Gong,
Are the results of your tests available publicly?
Regards,
Prabhjot
On Tue, Jul 28, 2015 at 10:35 PM, Prabhjot Bharaj
wrote:
> I would be using the servers available at my place of work. I don't have
> access to AWS servers. I would be starting off with a small number of nodes
> th
Sure. I would be doing that.
I have seen that if I have 5-7 topics with 256 partitions each on a machine
with 4 CPUs and 8 GB RAM, the JVM crashes with an OutOfMemoryError.
This happens on many machines in the cluster. (I'll update the exact
number as well.)
I was wondering how I could tune the JVM to
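In case it helps, one knob to try before the run is the broker heap: the Kafka startup scripts read `KAFKA_HEAP_OPTS` from the environment. The sketch below is only a starting point for an 8 GB machine (the 4g figure and the dump path are my assumptions, not recommendations), plus a flag to capture a heap dump so the OOM can be analysed afterwards:

```shell
# Sketch: start the broker with a larger heap than the default.
# 4g on an 8 GB box is only a guess -- leave headroom for the OS page cache.
export KAFKA_HEAP_OPTS="-Xmx4g -Xms4g"

# Dump the heap on OutOfMemoryError so the crash can be inspected later
# (the /tmp path is just an example location).
export KAFKA_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp"

bin/kafka-server-start.sh config/server.properties
```

With a dump in hand, a tool such as jmap/Eclipse MAT can show whether the memory is going to per-partition buffers or elsewhere.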
Kafka stores its metadata in a ZooKeeper cluster, so evaluating "how many
total topics and partitions can be created in a cluster" may be much the
same as testing ZooKeeper's scalability and disk I/O performance.
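One rough way to watch that metadata load during the test is to inspect the znodes Kafka registers in ZooKeeper. The paths below are the standard `/brokers` layout; the host/port and the topic name `my-topic` are placeholders for a local setup:

```shell
# List the topics currently registered in ZooKeeper.
bin/zookeeper-shell.sh localhost:2181 ls /brokers/topics

# Partition assignments for one topic live under its znode
# ("my-topic" is a hypothetical name).
bin/zookeeper-shell.sh localhost:2181 get /brokers/topics/my-topic
```

Counting the entries under `/brokers/topics` as topics are added gives a direct view of how the ZooKeeper side grows with the benchmark.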
2015-07-28 13:51 GMT+08:00 Prabhjot Bharaj :
> Hi,
>
> I'm looking forward to a bench
Hi,
I'm looking forward to a benchmark which can explain how many topics and
partitions in total can be created in a cluster of n nodes, given that the
message size varies between x and y bytes, how this changes with different
heap sizes, and how it affects system performance.
e.g. the resu
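For what it's worth, a crude version of such a benchmark could simply keep creating topics until creation starts failing, recording how many succeeded. The topic name prefix, partition count, and ZooKeeper address below are all placeholders:

```shell
#!/bin/sh
# Sketch: create 256-partition topics until kafka-topics.sh reports failure,
# counting how many were accepted. All names and addresses are placeholders.
ZK=localhost:2181
i=0
while bin/kafka-topics.sh --zookeeper "$ZK" --create \
      --topic "bench-$i" --partitions 256 --replication-factor 1
do
  i=$((i + 1))
  echo "created $i topics so far"
done
echo "topic creation failed after $i topics"
```

Repeating this for each cluster size would give one point per n on the graph described earlier (number of servers vs. number of topics/partitions sustained).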