Yes, I'm running with default settings otherwise. For cache sizes, I've tried '0' (no caching), '1' (fully cached), and a fixed value of 500000 for KeysCached; RowsCached was left at the default every time. So I don't think the problem is the cache. Concurrent reads were set to 32 and writes to 64; I also tried 320 and 640.
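To be concrete, these are the settings I mean: the relevant fragments of my storage-conf.xml, written out from memory, so treat the exact layout as approximate. The column family name is the source_page one from the logs below, and anything not shown is at its default:

  <ColumnFamily Name="source_page"
                KeysCached="500000"/>
  <!-- also tried KeysCached="0" and "1" as above; RowsCached never set -->

  <ConcurrentReads>32</ConcurrentReads>    <!-- also tried 320 -->
  <ConcurrentWrites>64</ConcurrentWrites>  <!-- also tried 640 -->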
The read/write ratio is about 2:1. How much memory does a compaction need? Another two nodes went down last night. They were doing a compaction before they died, judging from the timestamps of the *tmp* files in the data folder.

Stack trace for node 1:

INFO [GC inspection] 2010-07-23 04:13:24,517 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 31275 ms, 29578704 reclaimed leaving 10713006792 used; max is 10873667584
ERROR [MESSAGE-DESERIALIZER-POOL:1] 2010-07-23 04:14:30,656 DebuggableThreadPoolExecutor.java (line 94) Error in executor futuretask
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
        at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
        at java.util.concurrent.FutureTask.get(FutureTask.java:83)
        at org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:888)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
Caused by: java.lang.OutOfMemoryError: Java heap space
        at org.apache.cassandra.net.MessageSerializer.deserialize(Message.java:138)
        at org.apache.cassandra.net.MessageDeserializationTask.run(MessageDeserializationTask.java:45)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        ... 2 more

Stack trace for node 2:

INFO [COMMIT-LOG-WRITER] 2010-07-23 01:41:06,550 CommitLogSegment.java (line 50) Creating new commitlog segment /opt/crawler/cassandra/sysdata/commitlog/CommitLog-1279820466550.log
INFO [Timer-1] 2010-07-23 01:41:09,027 Gossiper.java (line 179) InetAddress /183.62.134.31 is now dead.
INFO [ROW-MUTATION-STAGE:45] 2010-07-23 01:41:09,279 ColumnFamilyStore.java (line 357) source_page has reached its threshold; switching in a fresh Memtable at CommitLogContext(file='/opt/crawler/cassandra/sysdata/commitlog/CommitLog-1279820466550.log', position=9413)
INFO [ROW-MUTATION-STAGE:45] 2010-07-23 01:41:09,322 ColumnFamilyStore.java (line 609) Enqueuing flush of Memtable(source_page)@1343553539
INFO [FLUSH-WRITER-POOL:1] 2010-07-23 01:41:09,323 Memtable.java (line 148) Writing Memtable(source_page)@1343553539
INFO [GMFD:1] 2010-07-23 01:41:09,349 Gossiper.java (line 568) InetAddress /183.62.134.30 is now UP
INFO [GMFD:1] 2010-07-23 01:41:09,349 Gossiper.java (line 568) InetAddress /183.62.134.31 is now UP
INFO [GMFD:1] 2010-07-23 01:41:09,350 Gossiper.java (line 568) InetAddress /183.62.134.28 is now UP
INFO [GMFD:1] 2010-07-23 01:41:09,350 Gossiper.java (line 568) InetAddress /183.62.134.26 is now UP
INFO [GMFD:1] 2010-07-23 01:41:09,350 Gossiper.java (line 568) InetAddress /183.62.134.27 is now UP
INFO [GMFD:1] 2010-07-23 01:41:09,350 Gossiper.java (line 568) InetAddress /183.62.134.24 is now UP
INFO [GMFD:1] 2010-07-23 01:41:09,350 Gossiper.java (line 568) InetAddress /183.62.134.25 is now UP
INFO [GMFD:1] 2010-07-23 01:41:09,350 Gossiper.java (line 568) InetAddress /183.62.134.22 is now UP
INFO [GMFD:1] 2010-07-23 01:41:09,350 Gossiper.java (line 568) InetAddress /183.62.134.23 is now UP
INFO [GMFD:1] 2010-07-23 01:41:09,351 Gossiper.java (line 568) InetAddress /183.62.134.33 is now UP
INFO [GMFD:1] 2010-07-23 01:41:09,351 Gossiper.java (line 568) InetAddress /183.62.134.32 is now UP
INFO [GMFD:1] 2010-07-23 01:41:09,351 Gossiper.java (line 568) InetAddress /183.62.134.34 is now UP
INFO [GC inspection] 2010-07-23 01:41:24,192 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 12908 ms, 413977296 reclaimed leaving 9524655928 used; max is 10873667584
INFO [Timer-1] 2010-07-23 01:41:50,867 Gossiper.java (line 179) InetAddress /183.62.134.34 is now dead.
INFO [Timer-1] 2010-07-23 01:41:50,871 Gossiper.java (line 179) InetAddress /183.62.134.33 is now dead.
INFO [Timer-1] 2010-07-23 01:41:50,871 Gossiper.java (line 179) InetAddress /183.62.134.32 is now dead.
INFO [Timer-1] 2010-07-23 01:41:50,871 Gossiper.java (line 179) InetAddress /183.62.134.31 is now dead.
INFO [Timer-1] 2010-07-23 01:41:50,872 Gossiper.java (line 179) InetAddress /183.62.134.30 is now dead.
INFO [Timer-1] 2010-07-23 01:41:50,872 Gossiper.java (line 179) InetAddress /183.62.134.28 is now dead.
INFO [Timer-1] 2010-07-23 01:41:50,872 Gossiper.java (line 179) InetAddress /183.62.134.27 is now dead.
INFO [Timer-1] 2010-07-23 01:41:50,872 Gossiper.java (line 179) InetAddress /183.62.134.26 is now dead.
INFO [Timer-1] 2010-07-23 01:41:50,873 Gossiper.java (line 179) InetAddress /183.62.134.25 is now dead.
INFO [Timer-1] 2010-07-23 01:41:50,873 Gossiper.java (line 179) InetAddress /183.62.134.24 is now dead.
INFO [Timer-1] 2010-07-23 01:41:50,873 Gossiper.java (line 179) InetAddress /183.62.134.23 is now dead.
INFO [Timer-1] 2010-07-23 01:41:50,873 Gossiper.java (line 179) InetAddress /183.62.134.22 is now dead.
INFO [GC inspection] 2010-07-23 01:41:50,875 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 11964 ms, 226808 reclaimed leaving 10303521344 used; max is 10873667584
ERROR [Thread-21] 2010-07-23 01:41:50,890 CassandraDaemon.java (line 78) Fatal exception in thread Thread[Thread-21,5,main]
java.lang.OutOfMemoryError: Java heap space
        at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:71)

2010-07-23

From: Peter Schuller
Sent: 2010-07-21 14:35:36
To: user
Cc:
Subject: Re: Re: What is consuming the heap?

> So the bloom filters reside in memory completely?

Yes. The point of bloom filters in cassandra is to act as a fast way to determine whether sstables need to be consulted. This check involves random access into the bloom filter. It needs to be in memory for this to be effective. But due to the nature of bloom filters you don't need a lot of memory per key in the database, so it scales pretty well.

> I count the total size of *-Filter.db files in my keyspace, it's
> 436,747,815 bytes.
>
> I guess this means it won't consume a major part of the 10g heap space.

Right, it doesn't sound like bloom filters are the cause. Are you running with default settings otherwise - cache sizes, flush thresholds, etc?

--
/ Peter Schuller