Hello Peter,

So, more information on that problem: yes, I am using this node with very little data; it is only used to design queries, so I don't need a very large dataset. I am running Apache Cassandra 0.6.6 on Debian Stable, with java version "1.6.0_22".
I recently restarted Cassandra, hence the low memory use at the moment, but if I keep it running for 2 or 3 weeks Cassandra will take about 1.5 GB. Here is the result of the command, one day after the previous one:

vic...@***:~$ sudo ps aux | grep "cassandra"
root 11034 0.2 26.8 1167304* 540176* ? Sl Dec17 8:09 /usr/bin/java -ea -Xms128M -Xmx512M -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1 -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Dcom.sun.management.jmxremote.port=8081 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dstorage-config=bin/../conf -Dcassandra-foreground=yes -cp bin/../conf:bin/../build/classes:bin/../lib/antlr-3.1.3.jar:bin/../lib/apache-cassandra-0.6.6.jar:bin/../lib/clhm-production.jar:bin/../lib/commons-cli-1.1.jar:bin/../lib/commons-codec-1.2.jar:bin/../lib/commons-collections-3.2.1.jar:bin/../lib/commons-lang-2.4.jar:bin/../lib/google-collections-1.0.jar:bin/../lib/hadoop-core-0.20.1.jar:bin/../lib/high-scale-lib.jar:bin/../lib/ivy-2.1.0.jar:bin/../lib/jackson-core-asl-1.4.0.jar:bin/../lib/jackson-mapper-asl-1.4.0.jar:bin/../lib/jline-0.9.94.jar:bin/../lib/json-simple-1.1.jar:bin/../lib/libthrift-r917130.jar:bin/../lib/log4j-1.2.14.jar:bin/../lib/slf4j-api-1.5.8.jar:bin/../lib/slf4j-log4j12-1.5.8.jar org.apache.cassandra.thrift.CassandraDaemon

I have done very little work on it (a few inserts and reads).

Thank you,
Victor

2010/12/19 Peter Schuller <peter.schul...@infidyne.com>
> > vic...@****:~$ sudo ps aux | grep "cassandra"
> > cassandra 11034 0.2 22.9 1107772 462764 ? Sl Dec17 6:13
> > /usr/bin/java -ea -Xms128M -Xmx512M -XX:+UseParNewGC -XX:+UseConcMarkSweepGC
> > -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1
> > -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly
> > -XX:+HeapDumpOnOutOfMemoryError -Dcom.sun.management.jmxremote.port=8081
> > -Dcom.sun.management.jmxremote.ssl=false
> > -Dcom.sun.management.jmxremote.authenticate=false
> > -Dstorage-config=bin/../conf -Dcassandra-foreground=yes -cp
> > bin/../conf:bin/../build/classes:bin/../lib/antlr-3.1.3.jar:bin/../lib/apache-cassandra-0.6.6.jar:bin/../lib/clhm-production.jar:bin/../lib/commons-cli-1.1.jar:bin/../lib/commons-codec-1.2.jar:bin/../lib/commons-collections-3.2.1.jar:bin/../lib/commons-lang-2.4.jar:bin/../lib/google-collections-1.0.jar:bin/../lib/hadoop-core-0.20.1.jar:bin/../lib/high-scale-lib.jar:bin/../lib/ivy-2.1.0.jar:bin/../lib/jackson-core-asl-1.4.0.jar:bin/../lib/jackson-mapper-asl-1.4.0.jar:bin/../lib/jline-0.9.94.jar:bin/../lib/json-simple-1.1.jar:bin/../lib/libthrift-r917130.jar:bin/../lib/log4j-1.2.14.jar:bin/../lib/slf4j-api-1.5.8.jar:bin/../lib/slf4j-log4j12-1.5.8.jar
> > org.apache.cassandra.thrift.CassandraDaemon
> >
> > Cassandra uses 462764 KB, roughly 460 MB, for 2 MB of data... and it keeps
> > getting bigger.
> > It is important to know that I have just a few inserts, quite a lot of
> > reads though. Also Cassandra seems to completely ignore the JVM limitations
> > such as Xmx.
> > If I don't stop and relaunch Cassandra every 15 or 20 days, it simply
> > crashes due to OOM errors.
>
> The resident size is not unexpected given that your Xmx is 512 MB. The
> virtual may or may not be expected depending; for example thread
> stacks, as previously discussed in this thread. [A quick way to compare
> heap occupancy against the resident set is sketched below the quote.]
>
> If you're not seeing the *resident* set size go above the maximum heap
> size, you're unlikely to be seeing the same problem.
>
> With respect to OOM, see
> http://www.riptano.com/docs/0.6/operations/tuning - but without more
> information it's difficult to know what specifically it is that you're
> hitting. [One stack-size knob is sketched at the bottom of this mail.]
> Are you seriously saying you're running for 15-20 days with
> only 2 MB of live data?
>
> --
> / Peter Schuller
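For anyone comparing the same numbers: here is a quick way to check the JVM's actual heap occupancy against the process's resident set, along the lines Peter suggests above. This is only a sketch, assuming the Sun JDK 6 tools (jstat, jmap) and procps (pmap) are installed on the box; PID 11034 is the one from the ps output above.

# Heap occupancy vs. capacity, sampled every second, 5 samples
# (EU/OU = eden/old gen used, EC/OC = capacities, values in KB):
sudo jstat -gc 11034 1000 5

# Heap configuration and current usage in one report:
sudo jmap -heap 11034

# Total virtual vs. resident memory for the whole process; these totals
# include thread stacks and any mmap'ed data files, which ps counts
# against the process but which live outside the Xmx-limited Java heap:
sudo pmap -x 11034 | tail -n 1

If jstat shows the heap staying well under the 512 MB cap while the pmap total keeps climbing, the growth is off-heap (thread stacks, mmap'ed data files) rather than the heap "ignoring" Xmx.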
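And if thread stacks turn out to be the culprit, the per-thread stack size can be capped explicitly. A minimal sketch, assuming the stock 0.6 layout where the startup script assembles JVM_OPTS in bin/cassandra.in.sh; the 128k value is illustrative, not a tested recommendation:

# In bin/cassandra.in.sh, alongside the -Xms/-Xmx flags already there:
# cap each thread's stack at 128 KB instead of the platform default
# (often 512 KB to 1 MB on Linux), so a few hundred connection threads
# stop accounting for hundreds of MB outside the heap.
JVM_OPTS="$JVM_OPTS -Xss128k"

Too small a value will show up as StackOverflowError at startup, so it is worth trying on a test node before leaving it on one you care about.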