Thanks a lot, Chris.

Will try it today/tomorrow and update here.
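For reference, here's roughly what I'm planning to run on the dead node - a minimal sketch assuming the default data directory (/var/lib/cassandra/data) and the stock init script, so the exact paths and service name on our boxes may differ:

  # Stop the node first - the files should only be touched while Cassandra is down.
  sudo service cassandra stop

  # size_estimates lives under the system keyspace; the directory name carries a
  # generated table-id suffix, hence the glob. Sanity-check the size first.
  du -sh /var/lib/cassandra/data/system/size_estimates-*

  # Drop the sstables for that table only.
  rm -rf /var/lib/cassandra/data/system/size_estimates-*/*

  sudo service cassandra start

My understanding is that size_estimates holds only locally recomputed estimates (used for things like Hadoop/Spark splits), so it should get repopulated by the periodic estimation task once the node is back up - please correct me if that's wrong.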

Thanks,
Kunal

On 7 March 2018 at 00:25, Chris Lohfink <clohf...@apple.com> wrote:

> While it's off you can delete the files in the directory, yeah.
>
> Chris
>
>
> On Mar 6, 2018, at 2:35 AM, Kunal Gangakhedkar <kgangakhed...@gmail.com> wrote:
>
> Hi Chris,
>
> I checked for snapshots and backups - none found.
> Also, we're not using OpsCenter, Hadoop, Spark, or any such tool.
>
> So, do you think we can just remove the cf and restart the service?
>
> Thanks,
> Kunal
>
> On 5 March 2018 at 21:52, Chris Lohfink <clohf...@apple.com> wrote:
>
>> Any chance the space is used by snapshots? What files exist there that are
>> taking up the space?
>>
>> > On Mar 5, 2018, at 1:02 AM, Kunal Gangakhedkar <kgangakhed...@gmail.com> wrote:
>> >
>> > Hi all,
>> >
>> > I have a 2-node cluster running Cassandra 2.1.18.
>> > One of the nodes has run out of disk space and died - almost all of the space shows up as occupied by the size_estimates CF.
>> > Out of 296GiB, 288GiB shows up as consumed by size_estimates in 'du -sh' output.
>> >
>> > Meanwhile, the other node is chugging along - it shows only 25MiB consumed by size_estimates (du -sh output).
>> >
>> > Any idea why there is such a discrepancy?
>> > Is it safe to remove the size_estimates sstables from the affected node and restart the service?
>> >
>> > Thanks,
>> > Kunal
>>
>>
