What's the 'ulimit -a' output of the user cassandra runs as? From this and
your previous OOM thread, it sounds like you skipped the requisite OS
configuration.
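For reference, a quick way to gather that output (a sketch; it assumes the service account is named `cassandra`, which may differ on your install):

```shell
# Print all limits for the current shell; run it as the user Cassandra
# runs as, e.g.: sudo -u cassandra bash -c 'ulimit -a'
ulimit -a

# The individual values that usually matter for Cassandra:
ulimit -n   # max open files
ulimit -u   # max user processes (nproc)
ulimit -l   # max locked memory
```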
Check that the limits here are set correctly:
http://www.datastax.com/documentation/cassandra/2.0/cassandra/install/installRecommendSettings.html
particularly:
* the mmap limit, which is really what this looks like...
* the nproc limit, which on some distros defaults to 1024 and can be your issue
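As a sketch of what the linked page covers (the exact values and file paths below are assumptions; use the ones from the page for your distro and Cassandra version):

```shell
# Raise the kernel mmap limit: with hundreds of thousands of sstables,
# each mapped segment counts against vm.max_map_count, and the common
# default of 65530 is far too low.
sudo sysctl -w vm.max_map_count=131072
echo 'vm.max_map_count = 131072' | sudo tee -a /etc/sysctl.conf

# Raise per-user limits for the cassandra user (values illustrative;
# some distros read /etc/security/limits.d/ instead):
cat <<'EOF' | sudo tee /etc/security/limits.d/cassandra.conf
cassandra - memlock unlimited
cassandra - nofile  100000
cassandra - nproc   32768
EOF
```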
My sstable size is 192MB. I removed some data directories to reduce the
data that needed to be loaded, and this time it worked, so I was sure this
was because the data was too large.
I tried to tune the JVM parameters, like heap size and stack size, but it
didn't help. I finally got it resolved by adding some
What is your sstable size set to, using LCS? Are you at the default of
5 MB?
Rahul Neelakantan
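One way to check (a sketch; the keyspace and table names are placeholders, and it assumes `cqlsh` is on the path):

```shell
# Show the compaction settings for a table; for LCS, look for
# 'sstable_size_in_mb' in the output.
cqlsh -e "DESCRIBE TABLE mykeyspace.mytable;" | grep -i compaction
```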
sorry, about 300k+
no, I am running a 64-bit JVM. But I have many sstable files, about 30k+
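For what it's worth, a quick way to count sstables on disk (the data directory path is an assumption; /var/lib/cassandra/data is a common default):

```shell
# Each sstable has exactly one -Data.db component, so counting those
# files gives the sstable count across all keyspaces.
find /var/lib/cassandra/data -name '*-Data.db' | wc -l
```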
Are you running on a 32 bit JVM?
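A quick way to check (a sketch; it assumes the `java` on the path is the one Cassandra actually starts with):

```shell
# A 64-bit HotSpot JVM prints "64-Bit Server VM" in its version banner;
# the absence of that marker usually means a 32-bit JVM.
java -version 2>&1 | grep -i '64-bit'
```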
Hi there,
I am using leveled compaction strategy and have many sstable files. The
error occurred during startup, so any ideas about this?
ERROR [FlushWriter:4] 2014-09-17 22:36:59,383 CassandraDaemon.java (line
199) Exception in thread Thread[FlushWriter:4,5,main]
java.lang.OutOfMemoryError: