Hi Victor,

As Andrey said, running cleanup doesn't work as you expect.

> The reason I need to clean things is that I won't need most of my inserted data on the next day.

Deleted objects (columns/rows) only become eligible for removal from sstable files once their tombstones expire (i.e., after gc_grace_seconds).
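If most of your data really is disposable after a day, one knob worth looking at is gc_grace_seconds itself, which can be lowered per column family. A minimal sketch, assuming a keyspace "myks" and a column family "daily_data" (both placeholder names; adjust to your schema):

    # shrink the tombstone grace period to 1 hour for this column family
    echo "ALTER TABLE myks.daily_data WITH gc_grace_seconds = 3600;" | cqlsh

Be careful, though: if a node misses a delete and isn't repaired within that window, the deleted data can come back.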

Those expired objects are actually removed by compaction.

The tricky part is that an expired object lingers unless all of the older fragments of the same row key are contained in the set of sstable files involved in that compaction.
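The one compaction guaranteed to satisfy that condition is a major compaction, since it merges every sstable of the column family into one. A rough sketch (keyspace and column family names are placeholders again):

    # force a major compaction of a single column family
    nodetool -h localhost compact myks daily_data

It works, but it leaves you with one huge sstable that ordinary minor compactions will rarely touch again, so most people treat it as a last resort.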

- Takenori

(2013/05/29 3:01), Andrey Ilinykh wrote:
cleanup removes data which doesn't belong to the current node. You only have to run it when you move nodes (or add new ones). In your case there is no reason to run it at all.
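For reference, the only situation where it earns its cost is right after a topology change, something like this sketch (host names are made up):

    # after bootstrapping a new node, run on each pre-existing node:
    nodetool -h node1 cleanup
    nodetool -h node2 cleanup
    nodetool -h node3 cleanup

Cleanup rewrites every sstable on the node just to drop rows whose tokens no longer belong to it, which is why it is both expensive and pointless when the ring hasn't changed.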


On Tue, May 28, 2013 at 7:39 AM, Víctor Hugo Oliveira Molinar <vhmoli...@gmail.com> wrote:

    Hello everyone.
    I have a daily maintenance task on C* which does the following
    (sketched as a script after the list):

    -truncate cfs
    -clearsnapshots
    -repair
    -cleanup
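
    Roughly, the job runs something like this (keyspace and column
    family names here are just placeholders for mine):

        # daily maintenance sketch
        echo "TRUNCATE myks.daily_data;" | cqlsh
        nodetool clearsnapshot myks
        nodetool repair myks
        nodetool cleanup myks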

    The reason I need to clean things is that I won't need most of my
    inserted data on the next day. It's kind of a business requirement.

    Well, the problem I'm running into is a misunderstanding of the
    cleanup operation.
    I have 2 nodes, each using less than half of its disk, which is
    more or less 13GB;

    But over the last few days, each node has arbitrarily reported a
    cleanup error indicating that the disk was full. Which is not true.

    Error occured during cleanup
    java.util.concurrent.ExecutionException: java.io.IOException:
    disk full


    So I'd like to know more about what happens during a cleanup
    operation.
    Appreciate any help.


