Hey Adrian -
> Why did you choose four big instances rather than more smaller ones?
Mostly to see the impact of additional CPUs on a write-only load. The
portion of the application we're migrating from MySQL is very write
intensive. The other 8 core option was c1.xl with 7GB of RAM. I will ve…
Hi Alex,
This has been a useful thread; we've been comparing your numbers with
our own tests.
Why did you choose four big instances rather than more smaller ones?
For $8/hr you get four m2.4xl with a total of 8 disks.
For $8.16/hr you could have twelve m1.xl with a total of 48 disks, 3x
disk spa…
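Spelled out, the per-disk arithmetic behind those two options (hourly
rates as quoted above):

    # $/hour per ephemeral disk for the two configurations above
    echo "scale=3; 8.00 / 8" | bc    # 4x m2.4xl,  8 disks  -> ~1.000 $/disk-hr
    echo "scale=3; 8.16 / 48" | bc   # 12x m1.xl, 48 disks  -> ~0.170 $/disk-hr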
On 5/9/11 9:49 PM, Jonathan Ellis wrote:
On Mon, May 9, 2011 at 5:58 PM, Alex Araujo wrote:
> How many replicas are you writing?
>
> Replication factor is 3.
So you're actually spot on the predicted numbers: you're pushing
20k*3=60k "raw" rows/s across your 4 machines.
You might get another 10% or so fro…
On Mon, May 9, 2011 at 5:58 PM, Alex Araujo wrote:
> How many replicas are you writing?
>
> Replication factor is 3.
So you're actually spot on the predicted numbers: you're pushing
20k*3=60k "raw" rows/s across your 4 machines.
You might get another 10% or so from increasing memtable thresholds,
but bo…
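For anyone checking the arithmetic, with the thread's numbers (20k
client-visible rows/s, replication factor 3, 4 nodes):

    # raw row throughput implied by the numbers above
    echo $(( 20000 * 3 ))    # 60000 raw rows/s across the cluster
    echo $(( 60000 / 4 ))    # 15000 raw rows/s per node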
On 5/6/11 9:47 PM, Jonathan Ellis wrote:
On Fri, May 6, 2011 at 5:13 PM, Alex Araujo
wrote:
> I raised the default MAX_HEAP setting from the AMI to 12GB (~80% of
> available memory).
This is going to make GC pauses larger for no good reason.
Good point - only doing writes at the moment. I will r…
On Fri, May 6, 2011 at 5:13 PM, Alex Araujo
wrote:
> I raised the default MAX_HEAP setting from the AMI to 12GB (~80% of
> available memory).
This is going to make GC pauses larger for no good reason.
> raised
> concurrent_writes to 300 based on a (perhaps arbitrary?) recommendation in
> 'Cassan…
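For reference, the stock cassandra.yaml of that era derives
concurrent_writes from core count rather than a large fixed number; a
quick comparison against the 300 above, assuming the 8-core instances
discussed earlier in the thread:

    # rule of thumb from the default cassandra.yaml: 8 * number_of_cores
    echo $(( 8 * 8 ))    # 64 for an 8-core node, vs. the 300 configured above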
Pardon the long delay - went on holiday and got sidetracked before I
could return to this project.
@Joaquin - The DataStax AMI sets up RAID0 across the instance-store
(ephemeral) drives.
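For anyone rebuilding that layout by hand, a minimal sketch (device
names and the mount point are assumptions; they vary by AMI and
instance type):

    # stripe four instance-store volumes into one md device, then format and mount
    mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde
    mkfs.xfs /dev/md0
    mount /dev/md0 /var/lib/cassandra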
@Jonathan - you were correct about the client node being the
bottleneck. I set up 3 XL client ins…
Did the images have EBS storage or Instance Store storage?
Typically EBS volumes aren't the best to benchmark against:
http://www.mail-archive.com/user@cassandra.apache.org/msg11022.html
Joaquin Casares
DataStax
Software Engineer/Support
On Wed, Apr 20, 2011 at 5:12 PM, Jonathan Ellis wrote: …
A few months ago I was seeing 12k writes/s on a single EC2 XL. So
something is wrong.
My first suspicion is that your client node may be the bottleneck.
On Wed, Apr 20, 2011 at 2:56 PM, Alex Araujo
wrote:
> Does anyone have any EC2 benchmarks/experiences they can share? I am trying
> to get a s…
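One quick way to confirm or rule that out is to watch the client box
while the load runs; if its CPUs are saturated, the cluster-side
numbers mean little. For example:

    # on the stress client during a run: CPU and network, sampled every 5 seconds
    dstat -c -n 5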
Does anyone have any EC2 benchmarks/experiences they can share? I am
trying to get a sense for what to expect from a production cluster on
EC2 so that I can compare my application's performance against a sane
baseline. What I have done so far is:
1. Launched a 4 node cluster of m1.xlarge inst…
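For context, a write-only run with the py_stress tool that shipped in
the Cassandra contrib/ tree at the time looked roughly like this
(flags and counts here are from memory and should be treated as
assumptions):

    # write-only load against the 4-node cluster; -d takes a comma-separated node list
    stress.py -o insert -n 10000000 -t 200 -d node1,node2,node3,node4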