Hi,

I tried with a 100MB heap size and got the OutOfMemoryError as well; it runs fine with 120MB.
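The dump was written automatically when the OOM hit, and I looked at it with jhat; roughly like this (the exact command lines may differ, the dump path is the one from the log quoted below):

    java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=c:\auto_heap_intern.prof ...
    jhat c:\auto_heap_intern.prof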
Here is the histogram (application classes marked with ---):

Heap Histogram, All Classes (excluding platform)

Class                                                            Instance Count  Total Size
class [C                                                                 234200    30245722
class [B                                                                1087565    25999145
class [[B                                                                 28430     4890060
class java.lang.String                                                   232351     3717616
class org.apache.lucene.index.FreqProxTermsWriter$PostingList             99584     2788352
class java.util.HashMap$Entry                                            171031     2736496
class [Ljava.util.HashMap$Entry;                                           9563     2371256
class [Ljava.lang.Object;                                                 31820     1820224
class ---                                                                  4474     1753808
class [I                                                                   4337     1567796
class java.lang.reflect.Method                                            19774     1364406
class org.apache.lucene.index.Term                                       117982      943856
class [Lorg.apache.lucene.index.RawPostingList;                              12      770012
class ---                                                                  1837      490479
class org.apache.lucene.index.BufferedDeletes$Num                        117303      469212

The --- entries, as well as the reflect.Method instances, are part of the app's data.

Why is it that creating a new IndexWriter lets the indexing run fine with 80MB, while keeping the same one causes an OutOfMemoryError even with a 100MB heap?

Stefan

-----Original Message-----
From: Michael McCandless [mailto:luc...@mikemccandless.com]
Sent: Wed 24.06.2009 11:52
To: java-user@lucene.apache.org
Subject: Re: OutOfMemoryError using IndexWriter

Hmm -- I think your test env (80 MB heap, 50 MB used by the app + 16 MB IndexWriter RAM buffer) is a bit too tight. The 16 MB buffer for IW is not a hard upper bound on how much RAM it may use. E.g., when merges are running, more RAM will be required; if a large doc brings it over the 16 MB limit, it will consume more; etc. ~3 MB used by PostingList is reasonable.

If, after fixing the problem in your code, you are still running out of RAM with a larger heap size, then please post the full histogram from the resulting heap dump, at which point the offender will be obvious. Or, can you make the problem happen with a smallish test case?

Mike

On Wed, Jun 24, 2009 at 5:37 AM, stefan<ste...@intermediate.de> wrote:
> Hi,
>
> I do not set a RAM buffer size, so I assume the default of 16MB.
> My server runs with an 80MB heap; before starting Lucene, about 50MB is
> used. In a production environment I ran into this problem with the heap
> size set to 750MB and no other activity on the server (nighttime), though
> since then I have diagnosed some problems in my own code as well. I just
> reproduced it with 80MB, but I guess I can reproduce it with a 100MB heap
> as well, it just takes longer.
>
> Here is the stack (I kept the dump):
>
> java.lang.OutOfMemoryError: Java heap space
> Dumping heap to c:\auto_heap_intern.prof ...
> Heap dump file created [97173841 bytes in 3.534 secs]
> ERROR lucene.SearchManager - Failure in index daemon:
> java.lang.OutOfMemoryError: Java heap space
>         at java.util.HashSet.<init>(HashSet.java:86)
>         at org.apache.lucene.index.DocumentsWriter.initFlushState(DocumentsWriter.java:540)
>         at org.apache.lucene.index.DocumentsWriter.closeDocStore(DocumentsWriter.java:367)
>         at org.apache.lucene.index.IndexWriter.flushDocStores(IndexWriter.java:1703)
>         at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3534)
>         at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3450)
>         at org.apache.lucene.index.IndexWriter.closeInternal(IndexWriter.java:1638)
>         at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1602)
>         at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1578)
>
> The heap histogram shows:
>
> class org.apache.lucene.index.FreqProxTermsWriter$PostingList   116736 (instances)   3268608 (size)
>
> Well, is there something I should do differently?
>
> Stefan
>
> -----Original Message-----
> From: Michael McCandless [mailto:luc...@mikemccandless.com]
> Sent: Wed 24.06.2009 10:48
> To: java-user@lucene.apache.org
> Subject: Re: OutOfMemoryError using IndexWriter
>
> How large is the RAM buffer that you're giving IndexWriter? How large
> a heap size do you give to the JVM?
>
> Can you post one of the OOM exceptions you're hitting?
>
> Mike
>
> On Wed, Jun 24, 2009 at 4:08 AM, stefan<ste...@intermediate.de> wrote:
>> Hi,
>>
>> I am using Lucene 2.4.1 to index a database with fewer than a million
>> records. The resulting index is about 50MB in size.
>> I keep getting an OutOfMemoryError if I re-use the same IndexWriter to
>> index the complete database, even though this is what is recommended in
>> the performance hints.
>> What I do now is: every 10000 objects I close the index (and on every
>> 50th close I optimize it) and create a new IndexWriter to continue (see
>> the sketch at the end of this thread). This works fine, but it hardly
>> seems like the recommended way to go.
>> I've been using jhat/jmap as well as the NetBeans profiler, and I am
>> fairly sure this is a problem related to Lucene.
>>
>> Any ideas - or should I post this to Jira? Jira has quite a few
>> OutOfMemory postings, but they all seem to be closed as of version 2.4.1.
>>
>> Thanks,
>>
>> Stefan
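For reference, a minimal sketch of the close-and-reopen workaround described above, against the Lucene 2.4 API. The directory path, the analyzer choice, and the loadDocuments() helper are placeholders, not the actual application code; the batch sizes (10000 documents per close, optimize on every 50th close) are the ones from the thread:

    import java.util.Collections;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    public class BatchedIndexer {

        // Placeholder for the real "iterate over the database records" step.
        static Iterable<Document> loadDocuments() {
            return Collections.<Document>emptyList();
        }

        public static void main(String[] args) throws Exception {
            Directory dir = FSDirectory.getDirectory("/path/to/index"); // placeholder path
            IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(),
                    true, IndexWriter.MaxFieldLength.UNLIMITED);
            writer.setRAMBufferSizeMB(16); // the 2.4 default; a soft target, not a hard cap

            int docsInBatch = 0, closeCount = 0;
            for (Document doc : loadDocuments()) {
                writer.addDocument(doc);
                if (++docsInBatch == 10000) {        // close every 10000 objects ...
                    docsInBatch = 0;
                    if (++closeCount % 50 == 0) {
                        writer.optimize();           // ... optimizing on every 50th close
                    }
                    writer.close();
                    writer = new IndexWriter(dir, new StandardAnalyzer(),
                            false, IndexWriter.MaxFieldLength.UNLIMITED);
                    writer.setRAMBufferSizeMB(16);
                }
            }
            writer.close();
        }
    }

Note that, as Mike points out above, the RAM buffer is only a soft target: running merges and large documents can push actual usage well beyond it, so the heap needs headroom beyond the configured buffer size.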