I am curious about the side comment: "Depending on your use case you may not
want to have a data density over 1.5 TB per node."

Why is that? I am planning for much more data than that, and now you give me
pause...


Cheers
Niclas

On Wed, Mar 7, 2018 at 6:59 PM, Rahul Singh <rahul.xavier.si...@gmail.com>
wrote:

> Are you putting both the commitlogs and the SSTables on the SSDs? Consider
> moving your snapshots off the nodes regularly if they are also taking up
> space. You may be able to free some space before you add drives.
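For reference, checking and clearing snapshot space can be done with the standard nodetool commands, roughly like this (run per node; the keyspace name is a placeholder, and clearing snapshots is irreversible, so copy them elsewhere first if you need them):

```shell
# Show the snapshots on this node and how much space they occupy
nodetool listsnapshots

# Remove all snapshots for one keyspace ("my_keyspace" is a placeholder)
nodetool clearsnapshot -- my_keyspace

# Or remove every snapshot on the node
nodetool clearsnapshot
```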
>
> You should be able to add the new drives and mount them without an
> issue. Try to avoid having a different number of data directories across
> nodes; it makes automating operational processes a little harder.
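As a sketch, preparing and mounting an additional SSD might look like the following (device name, filesystem, and mount point are all placeholders; adjust for your environment, and add an /etc/fstab entry so the mount survives reboots):

```shell
# Format the new disk (assumes the device is /dev/sdb1)
sudo mkfs.ext4 /dev/sdb1

# Create the mount point and mount the disk
sudo mkdir -p /var/lib/cassandra/data2
sudo mount /dev/sdb1 /var/lib/cassandra/data2

# Make sure the Cassandra service user owns the new directory
sudo chown -R cassandra:cassandra /var/lib/cassandra/data2
```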
>
> As an aside, depending on your use case you may not want to have a data
> density over 1.5 TB per node.
>
> --
> Rahul Singh
> rahul.si...@anant.us
>
> Anant Corporation
>
> On Mar 7, 2018, 1:26 AM -0500, Eunsu Kim <eunsu.bil...@gmail.com>, wrote:
>
> Hello,
>
> I run a five-node Cassandra cluster (1 TB SSD per node).
>
> I'm trying to mount an additional disk (1 TB SSD) on each node because
> disk usage is growing faster than I expected. I will then add the new
> directory to data_file_directories in cassandra.yaml.
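For illustration, the cassandra.yaml change might look like this (paths are placeholders; the node must be restarted for the change to take effect):

```yaml
data_file_directories:
    - /var/lib/cassandra/data     # existing 1 TB SSD
    - /var/lib/cassandra/data2    # newly mounted 1 TB SSD
```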
>
> Can I get advice from anyone who has experienced this situation?
> If I go through the above steps one node at a time, will I be able to
> complete the upgrade without losing data?
> The replication strategy is SimpleStrategy with RF 2.
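For context, a keyspace with that replication setting would have been created with something like the following CQL (the keyspace name is a placeholder):

```sql
CREATE KEYSPACE my_keyspace
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2};
```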
>
> Thank you in advance
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>


-- 
Niclas Hedhman, Software Developer
http://zest.apache.org - New Energy for Java
