When importing, all of the data in the JSON file is loaded into memory, so you cannot import very large data sets in one go. You need to export a large SSTable file into many small JSON files and run the import on each of them.
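For example, a rough sketch like the one below could split one export into smaller pieces (this assumes the sstable2json export is a single JSON object keyed by row key and fits in memory on the machine doing the splitting; the file names and chunk size are only placeholders, not from this thread):

#!/usr/bin/env python
# Rough sketch: split a large sstable2json export into smaller JSON files
# so each piece can be fed to json2sstable separately.
# Assumption: the export is one JSON object mapping row key -> columns.
import json

ROWS_PER_CHUNK = 10000          # tune to keep each import comfortably small

with open('big-export.json') as f:
    export = json.load(f)       # row key -> columns

chunk, chunk_no = {}, 0
for key, columns in export.items():
    chunk[key] = columns
    if len(chunk) >= ROWS_PER_CHUNK:
        with open('chunk-%04d.json' % chunk_no, 'w') as out:
            json.dump(chunk, out)
        chunk, chunk_no = {}, chunk_no + 1

if chunk:                       # write the remainder
    with open('chunk-%04d.json' % chunk_no, 'w') as out:
        json.dump(chunk, out)

Each chunk file can then be imported separately with json2sstable.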
On Mon, Apr 5, 2010 at 5:26 PM, Jonathan Ellis <jbel...@gmail.com> wrote:
> Usually sudden heap jumps involve compacting large rows.
>
> 0.6 (since beta3) includes a warning log when it finishes compacting a
> row over 500MB by default, in the hope that this will give you enough
> time to fix things before whatever is making large rows makes one too
> large to fit in memory.
>
> On Fri, Apr 2, 2010 at 4:57 PM, Weijun Li <weiju...@gmail.com> wrote:
> > I'm running a test to write 30 million columns (700 bytes each) to Cassandra:
> > the process ran smoothly for about 20 million, then the heap usage suddenly
> > jumped from 2GB to 3GB, which is the upper limit of the JVM. From this point
> > Cassandra freezes for a long time (terrible latency, no response to nodetool,
> > so I have to stop the import client) before it comes back to normal. It's a
> > single-node cluster with a JVM maximum heap size of 3GB. So what could cause
> > this spike? What kind of tool can I use to find out which objects are filling
> > the additional 1GB of heap? I did a heap dump but could not get jhat to work
> > to browse the dumped file.
> >
> > Thanks,
> >
> > -Weijun

--
Best regards,
JKnight
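P.S. As a rough way to check for the large rows Jonathan describes, something like the sketch below could report the biggest rows in an export (same assumed sstable2json format as above; the JSON length is only a crude proxy for on-disk row size, and the file name is a placeholder):

#!/usr/bin/env python
# Rough sketch: list the largest rows in an sstable2json export, as a crude
# way to spot rows approaching the 500MB compaction warning mentioned above.
# Assumption: export is one JSON object mapping row key -> columns.
import json

WARN_BYTES = 500 * 1024 * 1024   # the default warning threshold quoted above

with open('big-export.json') as f:
    export = json.load(f)

# Approximate each row's size by the length of its JSON representation.
sizes = [(len(json.dumps(columns)), key) for key, columns in export.items()]
sizes.sort(reverse=True)

for size, key in sizes[:20]:     # twenty biggest rows
    flag = ' <-- near/over warning threshold' if size >= WARN_BYTES else ''
    print('%12d bytes  %s%s' % (size, key, flag))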