Thanks, everyone, for the pointers. I've found an opportunity to simplify
the setup. It's still a 2-DC, 3-rack setup (RF = 1 for the DC with 1 rack, and
RF = 2 for the DC with 2 racks), but now each rack contains 9 nodes with even
token distribution.
Once I got the new topology in place, I ran multiple repairs.
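For reference, a keyspace matching the topology described above could be declared roughly like this with cassandra-cli (the tool of the 1.x era). The keyspace and data-center names (my_ks, DC1, DC2) are placeholders I've invented, and the exact strategy_options syntax should be double-checked with `help create keyspace;` on your version:

```shell
# Sketch only: keyspace and DC names are invented; RF is specified per data center.
cassandra-cli -h localhost <<'EOF'
create keyspace my_ks
  with placement_strategy = 'NetworkTopologyStrategy'
  and strategy_options = [{DC1:2, DC2:1}];
EOF
```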
> How come a node would consume 5x its normal data size during the repair
> process?
https://issues.apache.org/jira/browse/CASSANDRA-2699
It's likely a variation based on how out of sync you happen to be,
and whether you have a neighbor that's also been repaired and bloated
up already.
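One rough way to tell repair bloat from real data growth is to compare the load Cassandra reports with the raw on-disk footprint, and then watch compaction catch up. This is a sketch; the host and data-directory path are assumptions, not from the thread:

```shell
# Compare Cassandra's view of load with the raw on-disk footprint
nodetool -h localhost info            # reported "Load" for this node
du -sh /var/lib/cassandra/data        # actual disk usage (default path; adjust for your install)
# Repair-induced bloat is normally reclaimed by compaction over time
nodetool -h localhost compactionstats
```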
> My set
From: ...@thelastpickle.com
Reply-To: user@cassandra.apache.org
Date: Fri, 17 Aug 2012 20:40:54 +1200
To: user@cassandra.apache.org
Subject: Re: nodetool repair uses insane amount of disk space
I would take a look at the replication: what's the RF per DC, and what does
nodetool ring say? It's hard (as in not recommended) to get NTS with rack
allocation working correctly. Without knowing much more, I would try to understand
what the topology is and whether it can be simplified.
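A minimal way to gather the information Aaron asks for, assuming a placeholder keyspace name (my_ks):

```shell
# Token ownership and load per node
nodetool -h localhost ring
# Replication strategy and per-DC RF for the keyspace
echo "describe my_ks;" | cassandra-cli -h localhost
```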
>> Additionally, t
Upgraded to 1.1.3 from 1.0.8 about 2 weeks ago.
On Thu, Aug 16, 2012 at 5:57 PM, aaron morton wrote:
> What version are you using? There were issues with repair using lots-o-space
> in 0.8.X; it's fixed in 1.X
>
> Cheers
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
What version are you using? There were issues with repair using lots-o-space in
0.8.X; it's fixed in 1.X
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 17/08/2012, at 2:56 AM, Michael Morris wrote:
Occasionally as I'm doing my regular anti-entropy repair I end up with a
node that uses an exceptional amount of disk space (node should have about
5-6 GB of data on it, but ends up with 25+GB, and consumes the limited
amount of disk space I have available)
How come a node would consume 5x its normal data size during the repair
process?