Is 1.2 JBOD an April Fools' joke?  Heh, seriously though, I have no idea what 
you are talking about there.  I am trying to get raw disk performance with no 
Cassandra involved before involving Cassandra, which is the next step.

Thanks,
Dean

From: aaron morton <aa...@thelastpickle.com>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Monday, April 1, 2013 11:01 PM
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: Re: how to test our transfer speeds

If not, maybe I just generate the same 1,000,000 files on each machine, then 
randomly delete half of the files and stream them back from the other machine; 
writing those files would land in random locations again, forcing a much worse 
MB/sec measurement, I would think.
Not sure I understand the question, but you could just wipe the data off a 
node and rebuild it.

Note that streaming is throttled (stream_throughput_outbound_megabits_per_sec 
in cassandra.yaml, 200 megabits/sec by default), and it will also trigger 
compaction on the receiving node.

He has twenty 1T drives on each machine, and I think he also tried with one 1T 
drive, seeing the same performance, which makes sense if writing sequentially.
Are you using the 1.2 JBOD configuration?

Cheers

-----------------
Aaron Morton
Freelance Cassandra Consultant
New Zealand

@aaronmorton
http://www.thelastpickle.com

On 1/04/2013, at 11:01 PM, "Hiller, Dean" <dean.hil...@nrel.gov> wrote:

(We plan on running similar performance tests on Cassandra, but wanted to 
understand the raw disk footprint first.)

Someone in ops was doing a test transferring 1T of data from one node to 
another.  I emailed him my concern that this could end up being a completely 
sequential write, not testing random-access speeds.  He has twenty 1T drives 
on each machine, and I think he also tried with one 1T drive, seeing the same 
performance, which makes sense if writing sequentially.  Does anyone know of 
something that could generate a random access pattern so that we could time 
it?  Right now he measured 253 MB/second, derived from the elapsed time and 
the 1T of data.  I would like to find the much worse random-access case, of 
course.
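
A purpose-built tool like fio can generate exactly this kind of load (for 
example: fio --name=rand --rw=randwrite --bs=4k --direct=1 --size=10g).  As a 
rough illustration of the idea only, here is a minimal Python sketch, with a 
made-up path and sizes, that writes 4 KB blocks at random offsets in a 
pre-allocated file and reports MB/sec:

import os
import random
import time

PATH = "/data1/randwrite.test"   # hypothetical mount point; aim at the drive under test
FILE_SIZE = 10 * 1024 ** 3       # 10 GB scratch file
BLOCK = 4096                     # 4 KB per write
WRITES = 50000                   # ~200 MB of random writes total

# Pre-allocate the file so every random offset is valid.
with open(PATH, "wb") as f:
    f.truncate(FILE_SIZE)

buf = os.urandom(BLOCK)
fd = os.open(PATH, os.O_WRONLY)
start = time.time()
for _ in range(WRITES):
    # Seek to a random block-aligned-ish offset, then write one block.
    os.lseek(fd, random.randrange(FILE_SIZE - BLOCK), os.SEEK_SET)
    os.write(fd, buf)
os.fsync(fd)   # flush once at the end; the page cache still softens the numbers
os.close(fd)
elapsed = time.time() - start
print("%.1f MB/sec" % (WRITES * BLOCK / elapsed / 1024.0 ** 2))

Since this sketch does not use O_DIRECT, the page cache absorbs much of the 
work and the result will be optimistic; fio's --direct=1 bypasses the cache, 
which makes it the more honest measurement for raw drives.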

If not, maybe I just generate the same 1,000,000 files on each machine, then 
randomly delete half of the files and stream them back from the other machine; 
writing those files would land in random locations again, forcing a much worse 
MB/sec measurement, I would think.
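
The delete-half step is easy to script; a minimal sketch, assuming the 
generated files all sit in one flat directory (the path is hypothetical):

import os
import random

DATA_DIR = "/data1/testfiles"   # hypothetical directory holding the generated files

# Shuffle the file list and remove a random half of it.
files = [os.path.join(DATA_DIR, name) for name in os.listdir(DATA_DIR)]
random.shuffle(files)
for path in files[:len(files) // 2]:
    os.remove(path)

The freed extents should be scattered across the disk, so re-streaming the 
missing files from the other machine has to fill those holes rather than 
append sequentially.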

Thoughts?

Thanks,
Dean
