On Tue, Feb 10, 2015 at 2:05 PM, Robert Coli wrote:

> On Tue, Feb 10, 2015 at 11:02 AM, Paul Nickerson wrote:
>
>> I am getting an out of memory error when I try to start Cassandra on one of
>> my nodes. Cassandra will run for a minute, and then exit without outputting
>> any error in the log file. It is happening while SSTableReader is opening a
>> couple hundred thousand things.
Thank you Rob. I tried a 12 GiB heap size, and still crashed out.
There are 1,617,289 files under OpsCenter/rollups60.

Once I downgraded Cassandra to 2.1.1 (apt-get install cassandra=2.1.1), I was
able to start up Cassandra OK with the default heap size formula.

Now my cluster is running multiple versions of Cassandra. I think I will
downgrade the rest to 2.1.1.

~ Paul Nickerson
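For context, the "default heap size formula" mentioned above is the one cassandra-env.sh applies when MAX_HEAP_SIZE is left unset; in the 2.x line it works out to roughly max(min(1/2 RAM, 1024 MB), min(1/4 RAM, 8192 MB)). A minimal sketch (the function name is ours, and the constants are the 2.x defaults as we understand them):

```python
def default_max_heap_mb(system_memory_mb: int) -> int:
    """Approximate MAX_HEAP_SIZE as computed by cassandra-env.sh in
    Cassandra 2.x: max(min(1/2 RAM, 1024 MB), min(1/4 RAM, 8192 MB))."""
    half = min(system_memory_mb // 2, 1024)
    quarter = min(system_memory_mb // 4, 8192)
    return max(half, quarter)

# A 16 GiB node therefore defaults to a 4 GiB heap,
# well under the 12 GiB tried by hand above.
print(default_max_heap_mb(16 * 1024))
```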
On Tue, Feb 10, 2015 at 11:02 AM, Paul Nickerson wrote:

I am getting an out of memory error when I try to start Cassandra on one of
my nodes. Cassandra will run for a minute, and then exit without outputting
any error in the log file. It is happening while SSTableReader is opening a
couple hundred thousand things.
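Counting how many files a table's directory actually holds (the reply above reports 1,617,289 under OpsCenter/rollups60) is a quick way to confirm this kind of SSTable explosion; a sketch, with a hypothetical data-directory path:

```python
import os

def count_files(root: str) -> int:
    """Count regular files under root, recursing into subdirectories."""
    return sum(len(files) for _path, _dirs, files in os.walk(root))

# Hypothetical default package layout; adjust to your data_file_directories.
# print(count_files("/var/lib/cassandra/data/OpsCenter/rollups60"))
```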
I am running a 6 node cluster. The version is 1.1.0.

Prakrati Agrawal | Developer - Big Data (I&D) | 9731648376 | www.mu-sigma.com
From: Dave Brosius [mailto:dbros...@mebigfatguy.com]
Sent: Monday, June 11, 2012 10:07 AM
To: user@cassandra.apache.org
Subject: Re: Out of memory error
What version of Cassandra?
Sorry. I ran list columnFamilyName; and it threw this error.

Thanks and Regards
Prakrati
From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Saturday, June 09, 2012 12:18 AM
To: user@cassandra.apache.org
Subject: Re: Out of memory error

When you ask a question please include the query or function call you have
made, and any other information that would help someone understand what you
are trying to do.

Also, please list things you have already tried to work around the problem.

Cheers

-
Aaron Morton
Freelance
Check this slide,
http://www.slideshare.net/cloudera/hadoop-troubleshooting-101-kate-ting-cloudera
Regards
∞
Shashwat Shriparv
On Fri, Jun 8, 2012 at 2:34 PM, Prakrati Agrawal <prakrati.agra...@mu-sigma.com> wrote:
Dear all,

When I try to list the entire data in my column family I get the following
error:

Using default limit of 100
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
        at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:140)
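One low-effort workaround, on the assumption that the default of 100 rows was simply too much data for the client heap: cassandra-cli's list command accepts an explicit row limit, so paging through the column family in small batches avoids materializing everything at once (the column family name is a placeholder):

```
list columnFamilyName limit 10;
```

If a single very wide row is what overflows the frame, a smaller row limit alone may not be enough.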
        at org.apache.cassandra.db.columniterator.SSTableNamesIterator.<init>(SSTableNamesIterator.java:72)
        at org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:59)
        at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:80)
        at org.ap...java:1311)
        at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1203)
        at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1131)

Can someone please help debug this? The maximum heap size is 28G.

I am not sure why cassandra is giving Out of memory error here.

Thanks
Anurag
--
It's always darkest just before you are eaten by a grue.