On Sun, Jul 18, 2010 at 8:45 PM, Oren Benjamin <o...@clearspring.com> wrote:

> Thanks for the info.  Very helpful in validating what I've been seeing.  As
> for the scaling limit...
>
> >> The above was single node testing.  I'd expect to be able to add nodes
> and scale throughput.  Unfortunately, I seem to be running into a cap of
> 21,000 reads/s regardless of the number of nodes in the cluster.
> >
> > This is what I would expect if a single machine is handling all the
> > Thrift requests.  Are you spreading the client connections to all the
> > machines?
>
> Yes - in all tests I add all nodes in the cluster to the --nodes list.  The
> client requests are in fact being dispersed among all the nodes, as evidenced
> by the intermittent TimedOutExceptions in the log, which show up against the
> various nodes in the input list.  Could it be a result of all the virtual
> nodes being hosted on the same physical hardware?  Am I running into some
> connection limit?  I don't see anything pegged in the JMX stats.
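
For reference, dispersing requests client-side just means rotating new
connections across the --nodes list.  A minimal sketch of that pattern
(addresses are hypothetical, and the Thrift handshake is omitted):

    import itertools
    import socket

    NODES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical cluster nodes
    PORT = 9160  # Cassandra's default Thrift RPC port

    node_cycle = itertools.cycle(NODES)

    def open_connection():
        # Round-robin: each new connection goes to the next node in the
        # list, so client load spreads evenly across the cluster.
        host = next(node_cycle)
        return host, socket.create_connection((host, PORT), timeout=10)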


It's unclear whether you're running stress.py from one client machine or
several.  A cap in the 21-24k reads/s range from a single quad-proc client
machine is normal in my experience.
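
If that's the bottleneck, the usual fix is to run stress.py from several
client machines at once and sum the per-client rates.  A rough sketch
(client hostnames, the stress.py path, and the -o read flag are assumptions
here; --nodes is the flag mentioned above):

    import subprocess

    CLIENTS = ["load1", "load2", "load3"]  # hypothetical stress machines
    NODES = "10.0.0.1,10.0.0.2,10.0.0.3"   # hypothetical cluster nodes
    CMD = "python contrib/py_stress/stress.py -o read --nodes=" + NODES

    # Launch one stress.py per client machine over ssh and wait for all;
    # summing the reads/s each client reports gives aggregate throughput.
    procs = [subprocess.Popen(["ssh", host, CMD]) for host in CLIENTS]
    for p in procs:
        p.wait()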

-Brandon
