Understood. Thanks Edward!
On Sat, Dec 3, 2011 at 6:35 AM, Edward Capriolo wrote:
There is no way to set a max size on an sstable file. If your Cassandra
data directory is not on your / filesystem, you could reformat it as ext4
(or at least ext3 with better options).
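For example, here is a minimal Python sketch (Linux-only, and it assumes the
default /var/lib/cassandra/data path, so adjust DATA_DIR for your install)
that reports which mount point and filesystem the data directory actually
sits on, which tells you whether reformatting is even an option:

# Report the mount point and filesystem type backing the Cassandra data
# directory by parsing /proc/mounts and taking the longest matching prefix.
import os

DATA_DIR = "/var/lib/cassandra/data"  # assumption: default package location

def filesystem_for(path):
    path = os.path.realpath(path)
    best = ("/", "unknown")
    with open("/proc/mounts") as mounts:
        for line in mounts:
            device, mount_point, fs_type = line.split()[:3]
            prefix = mount_point.rstrip("/") + "/"
            if path == mount_point or path.startswith(prefix):
                if len(mount_point) >= len(best[0]):
                    best = (mount_point, fs_type)
    return best

mount_point, fs_type = filesystem_for(DATA_DIR)
print("%s lives on %s (%s)" % (DATA_DIR, mount_point, fs_type))
if mount_point == "/":
    print("Data directory shares the root filesystem; reformatting it in place is not an option.")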
On Fri, Dec 2, 2011 at 8:35 AM, Alexandru Dan Sicoe <sicoe.alexan...@googlemail.com> wrote:
Ok, so my problem persisted. The node that is filling up its hard disk has a
230 GB disk. Right after I restart the node it deletes the tmp files and is
back down to 55 GB of data on disk. Then it starts to quickly fill up the disk
again - I see gigs added fast - and it's not real data, because the other
nodes don't have it.
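A quick way to confirm that the growth is temporary compaction output rather
than live data is to split the on-disk usage by filename. This is a rough
Python sketch: the data directory path is an assumption, and it keys off the
"tmp" marker that in-progress compaction sstables carry in their file names
(e.g. ...-tmp-...-Data.db):

# Walk the Cassandra data directory and total temporary compaction files
# separately from everything else.
import os

DATA_DIR = "/var/lib/cassandra/data"  # assumption: adjust to data_file_directories

tmp_bytes, live_bytes = 0, 0
for dirpath, _dirnames, filenames in os.walk(DATA_DIR):
    for name in filenames:
        try:
            size = os.path.getsize(os.path.join(dirpath, name))
        except OSError:
            continue  # a file can disappear mid-walk, e.g. when a compaction finishes
        if "tmp" in name:
            tmp_bytes += size
        else:
            live_bytes += size

gib = 1024.0 ** 3
print("temporary compaction files: %.1f GiB" % (tmp_bytes / gib))
print("live sstable data:          %.1f GiB" % (live_bytes / gib))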
Yes, mostly sounds like it. In our case failed repairs were causing
accumulation of the tmp files.
Thanks,
Jahangir Mohammed.
On Thu, Dec 1, 2011 at 2:43 PM, Alexandru Dan Sicoe <sicoe.alexan...@googlemail.com> wrote:
Hi Jeremiah,
My commitlog was indeed on another disk. I did what you said and yes, the
node restart brings the disk usage back to around the 50 GB I was expecting.
Still, I do not understand how the node managed to get itself into the
situation of having these tmp files. Could you clarify what these are?
If you are writing data with QUORUM or ALL, you should be safe to restart
Cassandra on that node. If the extra space is all from *tmp* files left over
from compaction, they will get deleted at startup. You will then need to run
repair on that node to get back any data that was missed while it was
full.
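For reference, here is a minimal sketch of what writing at QUORUM looks like
with the DataStax Python driver (cassandra-driver). The contact point,
keyspace, and table are made up for illustration, and a 2011-era cluster
would use pycassa or Thrift rather than this driver:

# Write with ConsistencyLevel.QUORUM so each write is acknowledged by a
# majority of replicas before it is considered successful.
from cassandra.cluster import Cluster
from cassandra import ConsistencyLevel
from cassandra.query import SimpleStatement

cluster = Cluster(["127.0.0.1"])           # assumption: local contact point
session = cluster.connect("my_keyspace")   # hypothetical keyspace

insert = SimpleStatement(
    "INSERT INTO events (id, payload) VALUES (%s, %s)",   # hypothetical table
    consistency_level=ConsistencyLevel.QUORUM,
)
session.execute(insert, ("event-1", "hello"))
cluster.shutdown()

Writing at QUORUM (or ALL) is what makes the restart safe here: a quorum of
the other replicas already holds each write, and the repair you run afterwards
streams back whatever the restarted node missed while its disk was full.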