I managed to run a few benchmarks. Results so far:

Servers   r/s
1         64.5k
2         59.5k
The configuration:

Client: one machine with four Quad Core Intel Xeon E5520 CPUs @ 2.27 GHz (16 cores total), 4530 bogomips per core, 12 GB ECC memory, Supermicro mainboard (not sure about the exact type).

Cassandra servers:
- CPU: Intel Xeon X3450 Quad Core (4 cores total).
- Memory: 16 GB DDR SDRAM DIMM 1333 MHz ECC (4 x 4 GB DIMMs).
- Disks: three 1 TB Samsung Spinpoint F3 HD103SJ 7200 rpm disks with 32 MB internal cache, in the following configuration: a single disk for the root partition, which hosts the commitlog, and two disks in Linux software RAID-0, which host the data.
- All servers are connected to one switch, which has a dual 1 Gbps trunk to another switch hosting the client. Network utilization was monitored during testing and came nowhere near saturating any network component.
- All machines run Sun JDK 1.6.0 Update 21 under Red Hat Enterprise Linux Server release 5.3 with the latest updates. Python version is 2.6.3.

First test, with one client and one server:
- The test took 1163 seconds, resulting in 64,488 reads per second.
- The Cassandra replication factor was set to one (RF=1).
- The Cassandra Java process used about 400% CPU, resulting in an average load of 8.0: 24% user, 20% system, 50% idle and ~6.3% software interrupts.
- I used David's modified stress.py from http://gist.github.com/481966 with the args: ./stress.py -o read -t 50 -d $NODELIST -n 75000000 -k -i 2
- Java was started with the following parameters: -XX:+UseCompressedOops -Xmx2046m -XX:TargetSurvivorRatio=90 -XX:+AggressiveOpts -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled

For the second test I added another server to the cluster and ran the client against just the first server. This way I made sure that the first server forwards requests to the other server whenever the key hashes to it, confirming that Cassandra works as it should. Performance decreased to about 52.6k r/s, which was expected, as some requests make one extra hop before they can be answered.

The third test was stress.py hammering both nodes. While it was running I started thinking. With two nodes in a perfectly balanced ring and RF=1, an incoming read request has a 50% chance of hitting the server that holds the key and a 50% chance of needing the other server, causing an extra network hop. So two servers in the ring with the client hitting only one should be just as fast as two servers with the client hitting both of them.

What happens when we add a third server to the ring and rebalance? The same, as far as I understand. Each incoming read request from the client has only a 33% chance of hitting the local server and a ~67% chance of needing an extra hop (still assuming RF=1), so this doesn't make the cluster any faster. If my thinking is correct, we can keep adding servers and it will only make the cluster slightly slower, because each request has less chance of being served by the server it arrived at. So in this case the answer for more speed is to increase RF. But what, then, is the point of adding nodes to the ring? Disk speed! This test reads keys that don't exist, simulating the case where all requests hit the memory cache (Cassandra's row cache or the OS disk cache). Increasing the node count adds more disks, so requests are distributed over a larger array of disks, increasing the overall available IOPS throughput. So if all this is correct, we should stop making benchmarks that don't hit the disks, because they don't represent a real-world scenario.
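To make the hop arithmetic concrete, here is a minimal sketch (my own toy model, not something measured in the benchmark). It assumes a perfectly balanced ring of N nodes, RF replicas placed on consecutive nodes, and a client that picks a coordinator uniformly at random, so the coordinator holds a replica with probability RF/N:

    # Toy model of the extra-hop reasoning above; assumptions as stated.
    import random

    def extra_hop_fraction(nodes, rf):
        """Closed form: fraction of reads that need an extra hop."""
        return 1.0 - min(rf, nodes) / float(nodes)

    def simulate(nodes, rf, trials=100000):
        """Monte Carlo check of the same quantity."""
        hops = 0
        for _ in range(trials):
            owner = random.randrange(nodes)        # node holding the key's primary replica
            replicas = set((owner + i) % nodes for i in range(min(rf, nodes)))
            coordinator = random.randrange(nodes)  # node the client happened to hit
            hops += coordinator not in replicas
        return hops / float(trials)

    for n in (2, 3, 5, 10):
        for rf in (1, 2):
            print("N=%2d RF=%d: formula %.0f%%, simulated %.0f%%"
                  % (n, rf, 100 * extra_hop_fraction(n, rf), 100 * simulate(n, rf)))

With RF=1 this gives 50% at N=2 and ~67% at N=3, matching the figures above, and it shows why raising RF pushes the local-hit chance back up.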
I didn't have time to run the test with three servers, but I'll do it later anyway to see what kind of results it produces. Running the test with RF=2 should also confirm that we can increase cluster throughput by increasing the RF, even when the requests don't hit the disks.

 - Juho Mäkinen

On Tue, Jul 20, 2010 at 12:58 AM, Dave Viner <davevi...@pobox.com> wrote:
> I've put up a bunch of steps to get Cassandra installed on an EC2 instance:
> http://wiki.apache.org/cassandra/CloudConfig
> Look at the "step-by-step guide".
> I haven't AMI-ed the result, since the steps are fairly quick and it would
> be just one more thing to update with a new release of Cassandra...
> Dave Viner
>
> On Mon, Jul 19, 2010 at 2:35 PM, Peter Schuller
> <peter.schul...@infidyne.com> wrote:
>>
>> > CPU was approximately equal across the cluster; it was around 50%.
>> >
>> > stress.py generates keys randomly or using a gaussian distribution, both
>> > methods showed the same results.
>> >
>> > Finally, we're using a random partitioner, so Cassandra will hash the
>> > keys using md5 to map them to a position on the ring.
>>
>> Ok, weird. FWIW I'll try to reproduce too on EC2 (I don't really have
>> an opportunity to test this on real hardware atm), but no promises on
>> when since I haven't prepared Cassandra bootstrapping for EC2.
>>
>> --
>> / Peter Schuller