Your cluster is probably having issues with compactions (with STCS you
should never have this many SSTables). I would punt on
OpsCenter/rollups60: turn the node off, move all of its sstables to a
different directory as a backup (or just rm them if you really don't
care about 1-minute metrics), then turn the server back on.
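
Roughly, assuming the stock package layout with data under
/var/lib/cassandra/data (adjust for your install; the exact table
directory name varies by version, hence the glob):

    # stop the node before touching its sstables
    sudo service cassandra stop
    # move the rollups60 sstables aside as a backup
    mkdir -p /var/backups/rollups60
    mv /var/lib/cassandra/data/OpsCenter/rollups60*/* /var/backups/rollups60/
    # bring the node back up
    sudo service cassandra start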

Once you get your cluster running again, go back and investigate why
compactions stopped. My guess is that at some point you hit an exception
that killed your CompactionExecutor, and sstables just built up slowly
until you got to this point.
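
If you're not sure where to look, grepping system.log for errors on the
compaction threads is a reasonable start (log path assumes a packaged
install):

    # show the first error raised on a compaction thread, with context
    grep -m1 -A20 'ERROR.*CompactionExecutor' /var/log/cassandra/system.log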

Chris

On Tue, Feb 10, 2015 at 2:15 PM, Paul Nickerson <pgn...@gmail.com> wrote:

> Thank you, Rob. I tried a 12 GiB heap size, and it still crashed. There
> are 1,617,289 files under OpsCenter/rollups60.
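>
> That count came from something like the following (path assumes the
> default data directory):
>
>     find /var/lib/cassandra/data/OpsCenter/rollups60* -type f | wc -l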
>
> Once I downgraded Cassandra to 2.1.1 (apt-get install cassandra=2.1.1), I
> was able to start up Cassandra OK with the default heap size formula.
>
> Now my cluster is running multiple versions of Cassandra. I think I will
> downgrade the rest to 2.1.1.
>
>  ~ Paul Nickerson
>
> On Tue, Feb 10, 2015 at 2:05 PM, Robert Coli <rc...@eventbrite.com> wrote:
>
>> On Tue, Feb 10, 2015 at 11:02 AM, Paul Nickerson <pgn...@gmail.com>
>> wrote:
>>
>>> I am getting an out-of-memory error when I try to start Cassandra on one
>>> of my nodes. Cassandra will run for a minute, and then exit without
>>> writing any error to the log file. It is happening while SSTableReader
>>> is opening a couple hundred thousand things.
>>>
>> ...
>>
>>> Does anyone know how I might get Cassandra on this node running again?
>>> I'm not very familiar with correctly tuning Java memory parameters, and I'm
>>> not sure if that's the right solution in this case anyway.
>>>
>>
>> Try running 2.1.1, and/or increasing the heap size beyond 8 GB.
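>>
>> On 2.1, the heap is set in /etc/cassandra/cassandra-env.sh (path assumes
>> the Debian package); uncomment and set both values, e.g.:
>>
>>     MAX_HEAP_SIZE="12G"
>>     HEAP_NEWSIZE="2400M"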
>>
>> Are there actually that many SSTables on disk?
>>
>> =Rob
>>
>>
>
>
