Still waiting for the compactions to complete, though.
I'll check again once compaction is done.
>
> -
> Aaron Morton
> Freelance Cassandra Consultant
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 10/05/2013, at 8:33 PM, Nicolai Gylling wrote:
> On Wed, May 8, 2013 at 10:43 PM, Nicolai Gylling wrote:
>> At the time of normal operation there was 800 GB of free space on each node.
>> After the crash, C* started using a lot more, resulting in an
>> out-of-disk-space situation on 2 nodes, e.g. C* used up the 800 GB in just
ae87f06356  rack1
DN  10.146.146.4  1.11 TB  256  100.0%  85d4cd28-93f4-4b96-8140-3605302e90a9  rack1
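A quick way to keep an eye on the compactions and disk usage mentioned above is the standard nodetool commands below (a minimal sketch; run them on each affected node):

    nodetool compactionstats                 # pending/active compactions
    nodetool status                          # ring state (DN = Down/Normal)
    nodetool cfstats | grep "Space used"     # per-table disk usage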
--
Sincerely,
*Nicolai Gylling*
On Jan 17, 2013, at 11:54 AM, Sylvain Lebresne wrote:
> Now, one of the nodes dies, and when I bring it back up, it doesn't join the
> cluster again, but becomes its own node/cluster. I can't get it to join the
> cluster again, even after doing 'removenode' and clearing all data.
>
> That obvi
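A node coming back up as its own single-node cluster is often a seed-list problem: if the restarted node's only reachable seed is itself, it can start up in isolation. A minimal cassandra.yaml sketch, assuming the 10.146.146.x subnet from the status output above (the exact seed IPs are placeholders):

    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              # every node should point at the same shared seeds,
              # not just at itself
              - seeds: "10.146.146.1,10.146.146.2"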
My questions are:
1. Is this the correct way to boot/maintain a cluster?
2. Isn't the old node supposed to be removed from system.peers when I do a
'removenode'?
3. Shouldn't the system.peers table be consistent across the 3 nodes, e.g. A has
B & C, B has A & C, and C has A & B?
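For questions 2 and 3, system.peers can be inspected directly from cqlsh on each node. A minimal sketch (the IP below is the downed node from the status output above; hand-editing system tables should be a last resort):

    -- run in cqlsh on every node and compare the results
    SELECT peer, host_id FROM system.peers;

    -- if the removed node still lingers in some node's peers table,
    -- deleting the stale row (then restarting that node) is a commonly
    -- suggested cleanup:
    DELETE FROM system.peers WHERE peer = '10.146.146.4';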