The new SSDs we have (as well as Fusion-io cards) can, in theory, saturate a
gigabit Ethernet port.

The 4KB random read and write IOs they can do now add up quickly, and
they're faster than a gigabit link and even two gigabit links.
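
For a rough sense of the numbers, here's a back-of-the-envelope sketch in
Python (assuming ~125 MB/s of usable bandwidth per gigabit link and 4KB per
IO; the drive figures in the comments are ballpark assumptions, not
measurements from our boxes):

    # How many 4 KB random IOs does it take to fill one or two gigabit links?
    IO_SIZE = 4 * 1024                     # bytes per random IO
    GBE_BYTES_PER_SEC = 1_000_000_000 / 8  # ~125 MB/s per gigabit link

    for links in (1, 2):
        iops_to_saturate = links * GBE_BYTES_PER_SEC / IO_SIZE
        print(f"{links} x GbE saturated at ~{iops_to_saturate:,.0f} IOPS")

    # Prints ~30,518 IOPS for one link and ~61,035 for two.  A decent SATA
    # SSD does tens of thousands of 4 KB random read IOPS, and a Fusion-io
    # card considerably more, so the drive can plausibly outrun the NIC.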

However, not all of each 4KB read is actually used.  I suspect that on
average about half of it is wasted.

But the question is how much.  Of course YMMV.
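
To put that guess in numbers: if only ~2KB of each 4KB read is useful, then
filling one gigabit link with useful data takes roughly 125 MB/s ÷ 2KB ≈
61,000 IOPS instead of ~30,500, so the waste fraction directly sets how much
IOPS headroom the drive needs.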

I'm thinking of speccing our servers with a moderate amount of RAM, say
24GB: 8GB for the Cassandra heap, another 8GB for the various daemons we
run, and the remaining 8GB left to the page cache.
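
As a trivial sanity check of that split (just the plan above restated; the
heap would be set via MAX_HEAP_SIZE in cassandra-env.sh, and the page-cache
share isn't allocated explicitly, it's whatever the kernel has left over):

    # Rough memory budget for a 24 GB node.
    TOTAL_GB = 24
    cassandra_heap_gb = 8   # e.g. MAX_HEAP_SIZE="8G" in cassandra-env.sh
    other_daemons_gb = 8    # rough total for everything else on the box

    # The page cache gets the remainder; in practice the JVM's total
    # footprint is a bit larger than its heap, so the real figure will be
    # somewhat less than this.
    page_cache_gb = TOTAL_GB - cassandra_heap_gb - other_daemons_gb
    print(f"approx. RAM left for page cache: {page_cache_gb} GB")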

Curious what other people have seen here in practice.  Are you getting
performance comparable to RAM?  Latencies would be higher, of course, but
we're fine with that.

-- 

Founder/CEO Spinn3r.com
Location: *San Francisco, CA*
blog: http://burtonator.wordpress.com
… or check out my Google+ profile
<https://plus.google.com/102718274791889610666/posts>
<http://spinn3r.com>