If this were caused by Linux swapping, wouldn't I only see swap usage
rise? Because the problem is (apart from swap becoming bigger and bigger)
that Cassandra's RAM consumption is going through the roof.
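
One way to check whether the growth is really swap or resident memory (a
quick sketch, assuming a kernel recent enough to expose VmSwap in /proc and
a single Cassandra process):

  PID=$(pgrep -f CassandraDaemon)            # the Cassandra JVM
  grep -E 'VmRSS|VmSwap' /proc/$PID/status   # resident vs swapped-out, in kB

If VmRSS keeps climbing while VmSwap stays small, it is not just swapping.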

However, I want to give the proposed method a try.

Thank you very much,
Best Regards,
Victor Kabdebon

PS: memory consumption:

root     19093  0.1 35.8 *1362108 722312* ?      Sl   Jan11  14:01
/usr/bin/java -ea -Xms128M -Xmx512M -XX:+UseParNewGC -XX:+UseConcMarkSweepGC
-XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1
-XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly
-XX:+HeapDumpOnOutOfMemoryError -Dcom.sun.management.jmxremote.port=8081
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dstorage-config=bin/../conf -Dcassandra-foreground=yes -cp
bin/../conf:bin/../build/classes:bin/../lib/antlr-3.1.3.jar:bin/../lib/apache-cassandra-0.6.6.jar:bin/../lib/avro-1.2.0-dev.jar:bin/../lib/cassandra-javautils.jar:bin/../lib/clhm-production.jar:bin/../lib/commons-cli-1.1.jar:bin/../lib/commons-codec-1.2.jar:bin/../lib/commons-collections-3.2.1.jar:bin/../lib/commons-io-1.4.jar:bin/../lib/commons-lang-2.4.jar:bin/../lib/commons-pool-1.5.4.jar:bin/../lib/google-collections-1.0.jar:bin/../lib/hadoop-core-0.20.1.jar:bin/../lib/hector-0.6.0-14.jar:bin/../lib/high-scale-lib.jar:bin/../lib/ivy-2.1.0.jar:bin/../lib/jackson-core-asl-1.4.0.jar:bin/../lib/jackson-mapper-asl-1.4.0.jar:bin/../lib/jline-0.9.94.jar:bin/../lib/json-simple-1.1.jar:bin/../lib/libthrift-r917130.jar:bin/../lib/log4j-1.2.14.jar:bin/../lib/perf4j-0.9.12.jar:bin/../lib/slf4j-api-1.5.8.jar:bin/../lib/slf4j-log4j12-1.5.8.jar:bin/../lib/uuid-3.1.jar
org.apache.cassandra.thrift.CassandraDaemon


2011/1/16 Aaron Morton <aa...@thelastpickle.com>

> The OS will make its best guess as to how much memory it can give over to
> mmapped files. Unfortunately it will not always make the best decision; see
> the information on adding JNA and mlockall() support in Cassandra 0.6.5:
> http://www.datastax.com/blog/whats-new-cassandra-065
>
> As Jonathan says, try setting the disk access mode to standard to see the
> difference.
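>
> A rough sketch of that change, assuming the stock 0.6-era storage-conf.xml
> layout (double-check the element name in your own config):
>
>   <!-- conf/storage-conf.xml: plain buffered I/O instead of mmap -->
>   <DiskAccessMode>standard</DiskAccessMode>
>
> mmap_index_only keeps mmap for the index files only, and auto lets
> Cassandra choose based on the JVM it detects.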
>
> WRT the resident memory for the process, not all memory allocation is done
> on the heap. To see the non-heap usage, connect to the process using
> JConsole and take a look at the Memory tab. For example, on my box right
> now Cassandra has 110M of heap memory and 20M of non-heap. AFAIK memory
> such as class definitions is not included in the heap memory usage.
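>
> If the box has no GUI, a rough command-line alternative (a sketch, assuming
> a Sun JDK with jstat on the PATH; <pid> is the Cassandra process id):
>
>   # sample heap and perm-gen usage every 5 seconds
>   jstat -gc <pid> 5000
>
> S0U + S1U + EU + OU is the live heap in kB, and PU is the perm-gen part of
> the non-heap usage; other native allocations (code cache, thread stacks,
> direct buffers) are not visible here.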
>
> Hope that helps.
> Aaron
>
>
> On 15 Jan, 2011, at 08:03 PM, Victor Kabdebon <victor.kabde...@gmail.com>
> wrote:
>
> Hi Jonathan, hi Edward,
>
> Jonathan: but it looks like mmapping wants to consume the entire memory of
> my server. It goes up to 1.7 GB for a ridiculously small amount of data.
> Am I doing something wrong, or is there something I should change to
> prevent this never-ending increase in memory consumption?
> Edward: I am not sure, I will check that tomorrow, but my disk access
> mode is standard, not mmap.
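>
> One way to see how much of the resident size comes from mmapped SSTables
> rather than the JVM itself (a sketch, assuming procps' pmap is available
> and a single Cassandra process is running):
>
>   # list the largest resident mappings of the Cassandra process
>   pmap -x $(pgrep -f CassandraDaemon) | sort -n -k3 | tail -20
>
> Mappings backed by files in the Cassandra data directory are mmapped
> SSTables; anonymous mappings belong to the heap and other JVM allocations.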
>
> Anyway, thank you very much,
> Victor K.
>
> PS: here is, some hours later, the result of ps aux | grep cassandra:
> root     19093  0.1 30.0 1243940 *605060* ?      Sl   Jan11  10:15
> /usr/bin/java -ea -Xms128M *-Xmx512M* -XX:+UseParNewGC
> -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8
> -XX:MaxTenuringThreshold=1 -XX:CMSInitiatingOccupancyFraction=75
> -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError
> -Dcom.sun.management.jmxremote.port=8081
> -Dcom.sun.management.jmxremote.ssl=false
> -Dcom.sun.management.jmxremote.authenticate=false
> -Dstorage-config=bin/../conf -Dcassandra-foreground=yes -cp
> bin/../conf:bin/../build/classes:bin/../lib/antlr-3.1.3.jar:bin/../lib/apache-cassandra-0.6.6.jar:bin/../lib/avro-1.2.0-dev.jar:bin/../lib/cassandra-javautils.jar:bin/../lib/clhm-production.jar:bin/../lib/commons-cli-1.1.jar:bin/../lib/commons-codec-1.2.jar:bin/../lib/commons-collections-3.2.1.jar:bin/../lib/commons-io-1.4.jar:bin/../lib/commons-lang-2.4.jar:bin/../lib/commons-pool-1.5.4.jar:bin/../lib/google-collections-1.0.jar:bin/../lib/hadoop-core-0.20.1.jar:bin/../lib/hector-0.6.0-14.jar:bin/../lib/high-scale-lib.jar:bin/../lib/ivy-2.1.0.jar:bin/../lib/jackson-core-asl-1.4.0.jar:bin/../lib/jackson-mapper-asl-1.4.0.jar:bin/../lib/jline-0.9.94.jar:bin/../lib/json-simple-1.1.jar:bin/../lib/libthrift-r917130.jar:bin/../lib/log4j-1.2.14.jar:bin/../lib/perf4j-0.9.12.jar:bin/../lib/slf4j-api-1.5.8.jar:bin/../lib/slf4j-log4j12-1.5.8.jar:bin/../lib/uuid-3.1.jar
> org.apache.cassandra.thrift.CassandraDaemon
>
>
> 2011/1/15 Jonathan Ellis <jbel...@gmail.com>
>
>> mmapping only consumes memory that the OS can afford to feed it.
>>
>>
>> On Fri, Jan 14, 2011 at 7:29 PM, Edward Capriolo <edlinuxg...@gmail.com>
>> wrote:
>> > On Fri, Jan 14, 2011 at 2:13 PM, Victor Kabdebon
>> > <victor.kabde...@gmail.com> wrote:
>> >> Dear Rajat,
>> >>
>> >> Yes, it is possible; I have the same constraints. However, I must warn
>> >> you: from what I see, Cassandra memory consumption is not bounded in
>> >> 0.6.X on Debian 64-bit.
>> >>
>> >> Here is an example of an instance launched on a node:
>> >>
>> >> root     19093  0.1 28.3 1210696 570052 ?      Sl   Jan11   9:08
>> >> /usr/bin/java -ea -Xms128M -Xmx512M -XX:+UseParNewGC
>> -XX:+UseConcMarkSweepGC
>> >> -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8
>> -XX:MaxTenuringThreshold=1
>> >> -XX:CMSInitiatingOccupancyFraction=75
>> -XX:+UseCMSInitiatingOccupancyOnly
>> >> -XX:+HeapDumpOnOutOfMemoryError
>> -Dcom.sun.management.jmxremote.port=8081
>> >> -Dcom.sun.management.jmxremote.ssl=false
>> >> -Dcom.sun.management.jmxremote.authenticate=false
>> >> -Dstorage-config=bin/../conf -Dcassandra-foreground=yes -cp
>> >>
>> bin/../conf:bin/../build/classes:bin/../lib/antlr-3.1.3.jar:bin/../lib/apache-cassandra-0.6.6.jar:bin/../lib/avro-1.2.0-dev.jar:bin/../lib/cassandra-javautils.jar:bin/../lib/clhm-production.jar:bin/../lib/commons-cli-1.1.jar:bin/../lib/commons-codec-1.2.jar:bin/../lib/commons-collections-3.2.1.jar:bin/../lib/commons-io-1.4.jar:bin/../lib/commons-lang-2.4.jar:bin/../lib/commons-pool-1.5.4.jar:bin/../lib/google-collections-1.0.jar:bin/../lib/hadoop-core-0.20.1.jar:bin/../lib/hector-0.6.0-14.jar:bin/../lib/high-scale-lib.jar:bin/../lib/ivy-2.1.0.jar:bin/../lib/jackson-core-asl-1.4.0.jar:bin/../lib/jackson-mapper-asl-1.4.0.jar:bin/../lib/jline-0.9.94.jar:bin/../lib/json-simple-1.1.jar:bin/../lib/libthrift-r917130.jar:bin/../lib/log4j-1.2.14.jar:bin/../lib/perf4j-0.9.12.jar:bin/../lib/slf4j-api-1.5.8.jar:bin/../lib/slf4j-log4j12-1.5.8.jar:bin/./lib/uuid-3.1.jar
>> >> org.apache.cassandra.thrift.CassandraDaemon
>> >>
>> >> Look at the second highlighted value above (-Xmx512M): Xmx is the
>> >> maximum heap memory that Cassandra can use. It is set to 512 MB, so it
>> >> should easily fit into 1 GB. Now look at the first one (the RSS column,
>> >> 570052 kB): 570 MB > 512 MB. Moreover, if I come back in one day the
>> >> first value will be even higher, probably around 610 MB. It keeps
>> >> increasing to the point where I need to restart Cassandra, otherwise
>> >> other programs are shut down by Linux so that Cassandra can further
>> >> expand its memory usage...
>> >>
>> >> By the way, this is a call to other Cassandra users: am I the only one
>> >> encountering this problem?
>> >>
>> >> Best regards,
>> >>
>> >> Victor K.
>> >>
>> >> 2011/1/14 Rajat Chopra <rcho...@makara.com>
>> >>>
>> >>> Hello.
>> >>>
>> >>>
>> >>>
>> >>> According to the JVM heap size topic at
>> >>> http://wiki.apache.org/cassandra/MemtableThresholds , Cassandra would
>> >>> need at least 1 GB of memory to run. Is it possible to have a running
>> >>> Cassandra cluster with machines that have less than that memory… say
>> >>> 512 MB?
>> >>>
>> >>> I can live with slow transactions, no compactions, etc., but I do not
>> >>> want an OutOfMemory error. The reason for a smaller bound for Cassandra
>> >>> is that I want to leave room for other processes to run.
>> >>>
>> >>>
>> >>>
>> >>> Please help with specific parameters to tune.
>> >>>
>> >>>
>> >>>
>> >>> Thanks,
>> >>>
>> >>> Rajat
>> >>>
>> >>>
>> >>
>> >
>> > -Xmx512M is not an overall memory limit. MMAP'ed files also consume
>> > memory. Try setting the disk access mode to standard rather than MMAP or
>> > MMAP_INDEX_ONLY.
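>> >
>> > For squeezing into ~512 MB, the memtable thresholds on that wiki page are
>> > the main knobs. A rough sketch, assuming the 0.6-era storage-conf.xml
>> > element names (check your own config for the exact spelling):
>> >
>> >   <!-- flush memtables much earlier than the defaults -->
>> >   <MemtableThroughputInMB>16</MemtableThroughputInMB>
>> >   <MemtableOperationsInMillions>0.1</MemtableOperationsInMillions>
>> >   <MemtableFlushAfterMinutes>60</MemtableFlushAfterMinutes>
>> >
>> > combined with a smaller heap (e.g. -Xms128M -Xmx256M in cassandra.in.sh)
>> > and standard disk access mode, at the cost of more frequent flushes and
>> > compactions.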
>> >
>>
>>
>>
>> --
>> Jonathan Ellis
>> Project Chair, Apache Cassandra
>> co-founder of Riptano, the source for professional Cassandra support
>> http://riptano.com
>>
>
>
