live? (keep in mind in this scenario the data has been purged
to the data directory. It's only been added to the commit log).
Parag
From: DuyHai Doan [mailto:doanduy...@gmail.com]
Sent: Thursday, April 10, 2014 3:35 PM
To: user@cassandra.apache.org
Subject: Re: Cassandra memory consumption
Data structures that are stored off heap:
1) Row cache (if JNA enabled, otherwise on heap)
2) Bloom filter
3) Compression offset
4) Key Index sample
On heap:
1) Memtables
2) Partition Key cache
Hope that I did not forget anything
Regards
Duy Hai DOAN
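Of the off-heap structures listed above, the Bloom filter is the easiest to reason about numerically. As a rough sketch only (the exact per-key cost depends on the Cassandra version and the configured false-positive rate; the function names here are illustrative, not Cassandra APIs), the textbook Bloom filter sizing formula gives:

```python
import math

def bloom_filter_bits_per_key(fp_rate):
    """Bits per key for a Bloom filter at a given false-positive rate
    (standard formula: m/n = -ln(p) / (ln 2)^2)."""
    return -math.log(fp_rate) / (math.log(2) ** 2)

def bloom_filter_mb(num_keys, fp_rate=0.01):
    """Approximate off-heap size in MB of a Bloom filter over num_keys."""
    bits = num_keys * bloom_filter_bits_per_key(fp_rate)
    return bits / 8 / 1024 / 1024

# e.g. 100 million keys at a 1% false-positive rate is roughly 114 MB
print(round(bloom_filter_mb(100_000_000), 1))
```

At ~10 bits per key, Bloom filters stay small relative to the data, but they scale with row count, not data size, which is why they are worth keeping off heap.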
On Thu, Apr 10, 2014 at 9:13 PM, Parag wrote:
On Wed, Feb 16, 2011 at 11:22 AM, Victor Kabdebon
wrote:
> Thanks robert, and do you know if there is a way to control the maximum
> likely number of memtables ? (I'd like to cap it at 2)
That "likely number of memtables" is the number of memtables which:
a) have been created
b) may or may not
On Wed, Feb 16, 2011 at 7:12 AM, Victor Kabdebon
wrote:
> Someone please correct me if I am wrong, but I think the overhead you can
> expect is something like :
>
MemTableThroughputInMB * JavaOverHeadFudgeFactor * MaximumLikelyNumberOfMemtables
JavaOverHeadFudgeFactor is "at least 2".
The maximum likely number of such memtables is usually roughl
Someone please correct me if I am wrong, but I think the overhead you can
expect is something like:
16 * MemTableThroughputInMB
but I don't know when BinaryMemTableThroughputInMb comes into account...
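Putting the two rules of thumb from this thread together (Victor's "16 *" figure and the "fudge factor of at least 2"), a back-of-the-envelope heap estimate might look like the sketch below. The function name and default factors are illustrative assumptions for this thread's arithmetic, not anything Cassandra itself exposes:

```python
def memtable_heap_estimate_mb(throughput_mb, num_column_families=1,
                              java_overhead_fudge_factor=2,
                              max_likely_memtables=8):
    """Rough worst-case heap held by memtables, per the rule of thumb
    discussed in this thread: flush threshold * Java object overhead *
    number of memtables that may be alive at once, summed over column
    families. All factors are illustrative defaults, not measured values."""
    per_cf = throughput_mb * java_overhead_fudge_factor * max_likely_memtables
    return per_cf * num_column_families

# 21 column families at 64 MB each quickly dwarfs a 256 MB heap
print(memtable_heap_estimate_mb(64, num_column_families=21))
```

The point of the sketch is only that the estimate multiplies, per column family, so a node with many column families needs the per-memtable threshold tuned down aggressively.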
2011/2/16 ruslan usifov:
> 2011/2/16 Victor Kabdebon
> > Ruslan I have seen your question in the other mail and I have the same
> > problem. How many CF do you have ?
>
> 16
Yes, I didn't see there were 2 different parameters. I was personally setting
(in cassandra 0.6.6) MemTableThroughputInMB, but I don't know what
BinaryMemtableThroughputInMB is.
And I take this opportunity to ask a question:
If you have a small amount of data per key so that your memtable is mayb
> Each of your 21 column families will have its own memtable; if you have
> the default memtable settings, your memory usage will grow quite large
> over time. Have you tuned down your memtable size?
>
Which config parameter controls this? binary_memtable_throughput_in_mb?
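For the 0.6-era Cassandra discussed in this part of the thread, these knobs lived in storage-conf.xml rather than cassandra.yaml. A trimmed fragment might look like the following; the values are illustrative, and BinaryMemtableThroughputInMB applies only to the separate binary (bulk-load) write path, not to normal writes:

```xml
<Storage>
  <!-- Normal write path: flush a memtable once it holds this much data.
       Lower this when running many column families on a small heap. -->
  <MemtableThroughputInMB>64</MemtableThroughputInMB>
  <!-- Binary/bulk-load write path only; irrelevant for regular writes. -->
  <BinaryMemtableThroughputInMB>256</BinaryMemtableThroughputInMB>
</Storage>
```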
Yes I have, but I have to add that this is a server where there is so little
data (2.0 MB of text, roughly a book) that even if there were an overhead due
to those things it would be minimal.
I don't understand what's eating up all that memory. Is it because Linux
has difficulty getting rid
I will do that in the future and I will post my results here ( I upgraded
the server to debian 6 to see if there is any change, so memory is back to
normal). I will report in a few days.
In the meantime I am open to any suggestion...
2011/2/8 Aaron Morton
> When you attach to the JVM with JConsole, how much non-heap memory and how
> much heap memory is reported on the Memory tab? Xmx controls the total size
> of the heap memory, which excludes the permanent generation. See
> http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html#generation_sizin
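Aaron's point is why the ps numbers can legitimately exceed -Xmx: resident memory is the heap plus everything outside it. A hedged back-of-the-envelope budget (all component figures here are illustrative assumptions, not measurements from this node) for a process like the one in this thread:

```python
def jvm_resident_estimate_mb(heap_max_mb, permgen_mb=64,
                             thread_count=50, stack_mb=0.5,
                             native_and_code_mb=40):
    """Rough resident-set estimate for a HotSpot JVM: the -Xmx heap is only
    one component. PermGen, per-thread stacks, the JIT code cache and native
    allocations (e.g. NIO buffers, mmap'd files) all sit outside it.
    All defaults are illustrative."""
    return heap_max_mb + permgen_mb + thread_count * stack_mb + native_and_code_mb

# With -Xmx256M, well over 256 MB resident is unsurprising,
# even before any mmap'd SSTables are counted
print(jvm_resident_estimate_mb(256))
```

Comparing JConsole's heap and non-heap panes against this kind of budget is usually enough to tell whether the "extra" memory is the JVM's own bookkeeping or a genuine native leak.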
Information on the system :
*Debian 5*
*Jvm :*
victor@testhost:~/database/apache-cassandra-0.6.6$ java -version
java version "1.6.0_22"
Java(TM) SE Runtime Environment (build 1.6.0_22-b04)
Java HotSpot(TM) 64-Bit Server VM (build 17.1-b03, mixed mode)
*RAM:* 2 GB
2011/2/8 Victor Kabdebon
> So
Sorry Jonathan:
Most of this information was taken using the command:
sudo ps aux | grep cassandra
For the nodetool information it is:
/bin/nodetool --host localhost --port 8081 info
Regards,
Victor K.
2011/2/8 Jonathan Ellis
> I missed the part where you explained where you're getting your numbers from.
Which jvm and version are you using?
-ryan
On Tue, Feb 8, 2011 at 7:32 AM, Victor Kabdebon
wrote:
> It is really weird that I am the only one to have this issue.
> I restarted Cassandra today and already the memory consumption is over the
> limit [...]
I missed the part where you explained where you're getting your numbers from.
On Tue, Feb 8, 2011 at 9:32 AM, Victor Kabdebon
wrote:
> It is really weird that I am the only one to have this issue.
> I restarted Cassandra today and already the memory consumption is over the
> limit [...]
It is really weird that I am the only one to have this issue.
I restarted Cassandra today and already the memory consumption is over the
limit:
root 1739 4.0 24.5 664968 *494996* pts/4 SLl 15:51 0:12
/usr/bin/java -ea -Xms128M -Xmx256M -XX:+UseParNewGC -XX:+UseConcMarkSweepGC
-XX:+CMSPar
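To read those ps numbers mechanically: in `ps aux` output, column 6 is the resident set size (RSS) in KB, and the -Xmx cap can be pulled from the java command line. A small helper sketch, using the (trimmed) line from this thread as the sample:

```python
import re

def rss_vs_xmx(ps_line):
    """Return (rss_mb, xmx_mb) parsed from one `ps aux` output line.
    RSS is the 6th whitespace-separated field, in KB; -Xmx is read
    from the java command-line flags."""
    fields = ps_line.split()
    rss_mb = int(fields[5]) / 1024
    m = re.search(r"-Xmx(\d+)([MmGg])", ps_line)
    xmx_mb = int(m.group(1)) * (1024 if m.group(2).lower() == "g" else 1)
    return rss_mb, xmx_mb

line = ("root 1739 4.0 24.5 664968 494996 pts/4 SLl 15:51 0:12 "
        "/usr/bin/java -ea -Xms128M -Xmx256M -XX:+UseParNewGC")
rss, xmx = rss_vs_xmx(line)
print(round(rss), xmx)  # roughly 483 MB resident against a 256 MB heap cap
```

So the 494996 highlighted above is ~483 MB resident for a 256 MB heap, which is the gap the JConsole heap/non-heap comparison earlier in the thread is meant to explain.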