Does that mean you are doing 600 rows/sec per process or 600/sec total
across all processes?

On Mon, Sep 27, 2010 at 3:14 PM, Alaa Zubaidi <alaa.zuba...@pdf.com> wrote:
>  It's actually split across 8 different processes that are doing the insertion.
>
> Thanks
>
> On 9/27/2010 2:03 PM, Peter Schuller wrote:
>>
>> [note: I put user@ back on CC but I'm not quoting the source code]
>>
>>> Here is the code I am using (this is only for testing Cassandra; it is not
>>> going to be used in production). I am new to Java, but I tested this and it
>>> seems to work fine when running for a short amount of time:
>>
>> If you mean to ask about how to distribute writes - the general
>> recommendation is to use a high-level Cassandra client (such as Hector
>> at http://github.com/rantav/hector or Pelops at
>> http://github.com/s7/scale7-pelops) rather than using the Thrift API
>> directly. This is probably an especially good idea if you're new to
>> Java, as you say.
>>
>> But in any case, if you're having performance issues w.r.t. the write
>> speed - are you in fact doing writes concurrently, or is it a single
>> sequential client doing the insertions? If you are maxing out without
>> being disk bound, make sure that in addition to spreading writes
>> across all nodes in the cluster, you are submitting writes with
>> sufficient concurrency to allow Cassandra to use available CPU across
>> all cores (see the sketch after the quoted thread below).
>>
>
> --
> Alaa Zubaidi
> PDF Solutions, Inc.
> 333 West San Carlos Street, Suite 700
> San Jose, CA 95110  USA
> Tel: 408-283-5639 (or 408-280-7900 x5639)
> fax: 408-938-6479
> email: alaa.zuba...@pdf.com
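
For reference, here is a minimal sketch of the kind of concurrent insertion
Peter describes, using the Hector client he links above. The cluster name,
host list, keyspace, and column family (TestCluster, node1:9160, Keyspace1,
Standard1) are placeholders you would adjust to your own schema; this is one
way to do it under those assumptions, not the only one:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;

public class ConcurrentInsertSketch {
    public static void main(String[] args) throws InterruptedException {
        // Placeholder cluster/keyspace names -- adjust to your setup.
        // Listing several nodes lets Hector spread connections over them.
        final Cluster cluster =
                HFactory.getOrCreateCluster("TestCluster", "node1:9160,node2:9160");
        final Keyspace keyspace = HFactory.createKeyspace("Keyspace1", cluster);

        final int threads = 8;           // e.g. one writer thread per core
        final int rowsPerThread = 10000; // arbitrary test volume

        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            final int id = t;
            pool.submit(new Runnable() {
                public void run() {
                    // Mutator is not thread-safe; give each worker its own.
                    Mutator<String> mutator =
                            HFactory.createMutator(keyspace, StringSerializer.get());
                    for (int i = 0; i < rowsPerThread; i++) {
                        mutator.insert("row-" + id + "-" + i, "Standard1",
                                HFactory.createStringColumn("col", "value"));
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
        HFactory.shutdownCluster(cluster);
    }
}

The comma-separated host string also speaks to Peter's other point: connecting
to several nodes spreads the coordinator load across the cluster instead of
funneling every write through a single node.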
