Hi,

I agree with Alain; we have the same kind of problem here (4 DCs, ~1 TB /
node), and we are replacing our big servers full of spinning drives with a
larger number of smaller servers with SSDs (microservers are quite
efficient in terms of rack space and cost).
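
Before buying hardware, it may be worth double-checking that the disks are
the actual bottleneck and that compaction is what saturates them. A quick,
minimal check with standard tools (the throughput value below is just the
old default, not a recommendation for your workload):

    # Extended device stats every 5s; %util near 100 means a saturated disk
    iostat -x 5

    # Per-node data size ("Load" column) and ring layout
    nodetool status

    # Pending and active compactions competing with your reads
    nodetool compactionstats

    # As a stopgap, throttle compaction I/O (value in MB/s)
    nodetool setcompactionthroughput 16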

Kévin

On Tue, Sep 1, 2015 at 1:11 PM, Alain RODRIGUEZ <arodr...@gmail.com> wrote:

> Hi,
>
> Our migration to SSD (from m1.xlarge to i2.2xlarge on AWS) has been a big
> win: we went from 80-90% disk utilisation to 20% max. Disk performance is
> no longer the bottleneck in our case, and we got rid of one of our major
> issues, which was disk contention.
>
> I highly recommend going ahead with this, all the more so with such a big
> data set, even though it will probably be more expensive per node.
>
> Another solution might be adding nodes, so that each node has less data
> to handle and maintenance operations like repair, bootstrap,
> decommission, ... run faster.
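>
> For instance, repair time scales roughly with the amount of data each
> node owns, so a per-node primary-range repair:
>
>     nodetool repair -pr
>
> gets noticeably faster once the same data set is spread over more nodes.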
>
> C*heers,
>
> Alain
>
>
>
>
> 2015-09-01 10:17 GMT+02:00 Sachin Nikam <skni...@gmail.com>:
>
>> We currently have a Cassandra cluster spread over 2 DCs. The data size on
>> each node of the cluster is 1.2TB on spinning disks. Minor and major
>> compactions are slowing down our read queries. It has been suggested that
>> replacing spinning disks with SSDs might help. Has anybody done something
>> similar? If so, what were the results?
>> Also, if we go with SSDs, how big can each node get with commercially
>> available SSDs?
>> Regards
>> Sachin
>>
>
>
