Hi Brian,
What compaction strategy are you running? Have you tried using leveled
compaction? AFAIK it should generally require less free disk space during
compaction.
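For reference, switching a table from the default size-tiered strategy to leveled compaction is a one-line schema change. This is only a sketch: the keyspace and table names are placeholders, and the `sstable_size_in_mb` value should be checked against your Cassandra version's defaults.

```shell
# Hypothetical keyspace/table names; adapt to your own schema.
# LCS compacts fixed-size SSTables, so a single compaction needs
# headroom on the order of a few SSTables rather than the ~50%
# free-disk headroom size-tiered compaction can demand.
cqlsh -e "ALTER TABLE my_keyspace.my_table
          WITH compaction = {'class': 'LeveledCompactionStrategy',
                             'sstable_size_in_mb': 160};"
```

Note that existing SSTables will be recompacted into the leveled layout in the background after the change, which itself costs I/O for a while.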
Cheers,
Jens
—
Sent from Mailbox
On Wed, Jun 18, 2014 at 6:02 PM, Brian Tarbox
wrote:
> I'm running on AWS m2.2xlarge instances using t
On Wed, Jun 18, 2014 at 9:10 AM, Brian Tarbox
wrote:
> We do a repair -pr on each node once a week on a rolling basis.
>
https://issues.apache.org/jira/browse/CASSANDRA-5850?focusedCommentId=14036057&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14036057
> Shoul
repair only creates snapshots if you use the “-snapshot” option.
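A minimal sketch of the two invocations being contrasted here (flags as in Cassandra 2.0-era nodetool; the keyspace name is a placeholder, and you should confirm the flags with `nodetool help repair` on your version):

```shell
# Plain primary-range repair: no snapshot is taken.
nodetool repair -pr my_keyspace

# Repair with the -snapshot option: replicas snapshot the data being
# repaired, which temporarily consumes extra disk until cleared.
nodetool repair -pr -snapshot my_keyspace
```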
On June 18, 2014 at 12:28:58 PM, Marcelo Elias Del Valle
(marc...@s1mbi0se.com.br) wrote:
AFAIK, when you run a repair a snapshot is created.
After the repair, I run "nodetool clearsnapshot" to save disk space.
Not sure if that's your case or not.
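The weekly routine described above, sketched as a shell fragment (keyspace name is a placeholder; `clearsnapshot` with no arguments drops snapshots for all keyspaces on the node):

```shell
# Repair only this node's primary ranges (-pr), then drop any
# snapshots left behind to reclaim disk space.
nodetool repair -pr my_keyspace
nodetool clearsnapshot
```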
[]s
2014-06-18 13:10 GMT-03:00 Brian Tarbox :
> We do a repair -pr on each node once a week on a rolling basis.
> Should we be running cleanup a
We do a repair -pr on each node once a week on a rolling basis.
Should we be running cleanup as well? My understanding is that it's only
needed after adding/removing nodes?
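For context, cleanup is a per-node operation that discards data the node no longer owns after a topology change; if the ring hasn't changed, it mostly just rewrites SSTables. A sketch (keyspace name is a placeholder):

```shell
# Run after adding/removing nodes so this node drops token ranges it
# no longer owns. It rewrites SSTables, so expect temporary extra
# I/O and disk usage while it runs.
nodetool cleanup my_keyspace
```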
We'd like to avoid adding nodes if possible (which might not be possible).
Still curious if we can get C* to do the maintenance task on a
One option is to add new nodes, and do a node repair/cleanup on everything.
That will at least reduce your per-node data size.
On Wed, Jun 18, 2014 at 11:01 AM, Brian Tarbox
wrote:
> I'm running on AWS m2.2xlarge instances using the ~800 gig
> ephemeral/attached disk for my data directory. My