In the past, in scenarios like this, it has helped us to check the partition where 
Cassandra stores its data and allocate more space to it. It may well be a plain 
disk space issue, so it is worth checking whether the space allocated to that 
partition is the problem. My 2 cents.
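
As a concrete first check, something like the following would show whether the 
data partition is the bottleneck (paths assume a default package install; adjust 
them to whatever data_file_directories points at in your cassandra.yaml):

    df -h /var/lib/cassandra          # free space left on the data partition
    du -sh /var/lib/cassandra/data/*  # approximate per-keyspace disk usage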


> On 01-Oct-2014, at 11:53 am, Dominic Letz <dominicl...@exosite.com> wrote:
> 
> This is a shot in the dark, but you could check whether you have too many 
> snapshots lying around that you don't actually need. You can get rid of 
> those with a quick "nodetool clearsnapshot".
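> 
> For instance, something along these lines would show how much space snapshots 
> are taking before you clear them (assuming the default data layout, where each 
> table keeps its snapshots in a snapshots/ subdirectory):
> 
>     du -sh /var/lib/cassandra/data/*/*/snapshots
>     nodetool clearsnapshot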
> 
>> On Wed, Oct 1, 2014 at 5:49 AM, cem <cayiro...@gmail.com> wrote:
>> Hi All,
>> 
>> I have a 7-node cluster. One node ran out of disk space and the others are 
>> at around 80% disk utilization. 
>> The data has a 10-day TTL, but I think compaction wasn't fast enough to clean 
>> up the expired data. gc_grace is set to the default. I have a replication 
>> factor of 3. Do you think it may help if I delete all data on that node and 
>> run a repair? Does nodetool repair check the TTL value before retrieving 
>> data from other nodes? Do you have any other suggestions?
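>> 
>> (For reference, the effective gc_grace_seconds shows up in the table 
>> definition, e.g. via "DESCRIBE TABLE <keyspace>.<table>;" in cqlsh, where 
>> <keyspace>.<table> is a placeholder for the actual table.)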
>> 
>> Best Regards,
>> Cem.
> 
> 
> 
> -- 
> Dominic Letz
> Director of R&D
> Exosite
> 
