On Mon, Mar 14, 2011 at 8:17 PM, Karl Hiramoto wrote:
> On 03/14/11 15:33, Sylvain Lebresne wrote:
>>
>> CASSANDRA-1537 is probably also a partial but possibly sufficient
>> solution. That's also probably easier than CASSANDRA-1610 and I'll try
>> to give it a shot asap, that had been on my todo list way too long.
On 03/14/11 15:33, Sylvain Lebresne wrote:
>
> CASSANDRA-1537 is probably also a partial but possibly sufficient
> solution. That's also probably easier than CASSANDRA-1610 and I'll try
> to give it a shot asap, that had been on my todo list way too long.
>
Thanks, eager to see CASSANDRA-1610 someday.
On Sun, Mar 13, 2011 at 7:10 PM, Karl Hiramoto wrote:
>
> Hi,
>
> I'm looking for advice on reducing disk usage. I've run out of disk space
> two days in a row while running a nightly scheduled nodetool repair &&
> nodetool compact cronjob.
>
> I have 6 nodes RF=3 each with 300 GB drives at a hosting company.
On 3/13/2011 9:27 PM, aaron morton wrote:
The CF Stats are reporting you have 70GB total space taken up by
SSTables, of which 55GB is live. The rest is available for deletion,
AFAIK this happens when cassandra detects free space is running low.
I've never dug into how/when this happens though.
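The live-versus-total distinction above can be checked with simple arithmetic; this sketch uses the figures quoted in the thread (70 GB total, 55 GB live) to show how much space is awaiting deletion:

```shell
# Difference between total and live SSTable space is reclaimable.
# Figures (70 GB total, 55 GB live) are the ones reported in this thread.
total_gb=70
live_gb=55
echo "reclaimable: $(( total_gb - live_gb )) GB"   # prints "reclaimable: 15 GB"
```

In practice the per-CF numbers come from `nodetool cfstats` on each node.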
With that amount of data it seems odd to
Hi,
I'm looking for advice on reducing disk usage. I've run out of disk
space two days in a row while running a nightly scheduled nodetool
repair && nodetool compact cronjob.
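A cron entry matching the nightly job described above might look like the following; the schedule and the nodetool path are assumptions for illustration, not taken from the thread:

```shell
# Hypothetical crontab entry: run repair nightly at 02:00, and only
# trigger a major compaction if the repair completed successfully.
0 2 * * * /usr/bin/nodetool -h localhost repair && /usr/bin/nodetool -h localhost compact
```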
I have 6 nodes RF=3 each with 300 GB drives at a hosting company.
GCGraceSeconds= 26 (3.1 days)
Every colu
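For reference, GCGraceSeconds is specified in seconds, so a value expressed in days converts at 86400 seconds per day; a quick check of the 3.1 days quoted above:

```shell
# 3.1 days, converted at 86400 seconds per day.
awk 'BEGIN { printf "%d\n", 3.1 * 86400 }'   # prints 267840
```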