https://issues.apache.org/jira/browse/CASSANDRA-856
On Tue, May 11, 2010 at 3:44 PM, Tobias Jungen <tobias.jun...@gmail.com> wrote:
> Yet another BMT question, though this may apply to regular memtables as
> well...
>
> After doing a batch insert, I accidentally submitted the flush command
> twice. To my surprise, the target node's log indicates that it wrote a new
> *-Data.db file, and the disk usage went up accordingly. I tested and issued
> the flush command a few more times, and after a few more data files I
> eventually triggered a compaction, bringing the disk usage back down. The
> data appears to continue to stick around in memory, however, as further
> flush commands continue to result in new data files.
>
> Shouldn't flushing a memtable remove it from memory, or is it expected
> behavior that it sticks around until the node needs to reclaim the memory?
> Should I worry about getting out-of-memory errors if I'm doing lots of
> inserts in this manner?

-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com
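
For context, the flush being issued here is the one nodeprobe forwards over JMX to the node's StorageService MBean. Below is a minimal sketch of invoking it programmatically; the MBean object name ("org.apache.cassandra.service:type=StorageService"), the operation name and signature ("forceTableFlush"), the keyspace name "Keyspace1", and the host/port are assumptions for the 0.6-era setup and should be checked against the MBeans your node actually exposes.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Sketch: trigger a memtable flush over JMX, roughly what nodeprobe does.
public class FlushSketch {
    public static void main(String[] args) throws Exception {
        // Host and JMX port are assumptions; adjust for your node.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:8080/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Assumed MBean name for the storage service.
            ObjectName storageService =
                    new ObjectName("org.apache.cassandra.service:type=StorageService");
            // Assumed operation: flush all column families in keyspace "Keyspace1".
            mbs.invoke(storageService,
                       "forceTableFlush",
                       new Object[] { "Keyspace1", new String[0] },
                       new String[] { String.class.getName(), String[].class.getName() });
        } finally {
            connector.close();
        }
    }
}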