Hi Mike,
I ran a burn-in test overnight, repeatedly indexing the same db in a loop.
I set the heap size to 120MB and called setMaxBufferedDeleteTerms(1000); I did
not call commit and used the same IndexWriter.
This test passed without any errors.
So to wrap this up - I shall call commit
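A periodic commit along the lines discussed in this thread could look like the sketch below. The interval constant and loop structure are illustrative assumptions, not Stefan's actual code; the real Lucene 2.x calls (`addDocument`, `commit`) are shown as comments so the counting logic stands alone.

```java
// Sketch: commit every COMMIT_INTERVAL documents instead of never committing,
// so buffered docs and delete terms are flushed before they exhaust the heap.
public class CommitSketch {
    static final int COMMIT_INTERVAL = 1000; // assumption: tune to your heap

    // Returns how many commits a run of docCount documents would issue.
    static int commitsFor(int docCount) {
        int commits = 0;
        for (int i = 1; i <= docCount; i++) {
            // writer.addDocument(doc);        // real Lucene call, elided here
            if (i % COMMIT_INTERVAL == 0) {
                // writer.commit();            // flushes buffered docs/deletes
                commits++;
            }
        }
        return commits;
    }

    public static void main(String[] args) {
        System.out.println(commitsFor(5500)); // prints 5
    }
}
```

Committing on a fixed interval bounds how much buffered state the writer holds between flushes, which is exactly the state the heap histograms in this thread point at.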
Hi,
I'm afraid my test setup and code are far too big for this.
What I use Lucene for is fairly simple. I have a database with about 150
tables; I iterate over all tables and create for each row a String
representation, similar to a toString method, containing all the row's data.
This string is then fed tog
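The per-row string described above could be built roughly as follows. The column names and the `key=value` join format are invented for illustration; only the general "flatten a row toString-style, then index it as one field" idea comes from the mail.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

// Sketch: flatten one database row into a single string, toString-style,
// which would then be handed to Lucene as the text of a single field.
public class RowFlattener {
    static String flatten(Map<String, String> row) {
        StringJoiner sj = new StringJoiner(" ");
        for (Map.Entry<String, String> e : row.entrySet()) {
            sj.add(e.getKey() + "=" + e.getValue());
        }
        return sj.toString();
    }

    public static void main(String[] args) {
        Map<String, String> row = new LinkedHashMap<>();
        row.put("id", "42");
        row.put("name", "widget");
        System.out.println(flatten(row)); // prints id=42 name=widget
        // In the real indexing loop this string would become something like
        // new Field("contents", flatten(row), ...) on a Lucene Document.
    }
}
```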
Hi,
Here are the results of CheckIndex. I ran this just after I got the OOM error.
OK [4 fields]
test: terms, freq, prox... OK [509534 terms; 9126904 terms/docs pairs; 4933036 tokens]
test: stored fields... OK [148124 total field count; avg 2 fields per doc]
test: term vectors...
Hi Mike,
I just changed my test code to run in an indefinite loop over the database to
index everything. I set the JVM heap size to 120MB, all other parameters as
before.
I got an OOM error just as before - so I would say there is a leak somewhere.
Here is the histogram.
Heap Histogram
All Classes (excluding platform)
Hi,
>But a "leak" would keep leaking over time, right? Ie even a 1 GB heap
>on your test db should eventually throw OOME if there's really a leak.
No, not necessarily, since I stop indexing once everything is indexed - I shall
try repeated runs with 120MB.
>Are you calling updateDocument (which
to open.
Please post your results/views.
Sincerely,
Sithu
-Original Message-
From: stefan [mailto:ste...@intermediate.de]
Sent: Wednesday, June 24, 2009 10:08 AM
To: java-user@lucene.apache.org
Subject: Re: OutOfMemoryError using IndexWriter
Hi,
I do use Win32.
What do you mean by
Hi,
>OK so this means it's not a leak, and instead it's just that stuff is
>consuming more RAM than expected.
Or that my test db is smaller than the production db, which is indeed the case.
>Hmm -- there are quite a few buffered deletes pending. It could be we
>are under-accounting for RAM used
Hi,
I do use Win32.
What do you mean by "the index file before
optimizations crosses your jvm memory usage settings (if say 512MB)" ?
Could you please explain this further?
Stefan
-----Original Message-----
From: Sudarsan, Sithu D. [mailto:sithu.sudar...@fda.hhs.gov]
Sent: Wed 24.06
Hi,
there seems to be a little misunderstanding. The index will only be optimized
if the IndexWriter is about to be closed, and then only with a probability of
2% (meaning occasionally).
In other words, I only close the IndexWriter (and thus optimize) to avoid the
OOM error.
When I keep the same Index
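The "close with a 2% chance of optimizing" scheme described above can be expressed as a draw against a threshold. The class and method names below are invented for illustration; only `optimize()` and `close()` (shown as comments) are real Lucene 2.x `IndexWriter` calls.

```java
import java.util.Random;

// Sketch: occasionally optimize when closing the IndexWriter,
// with the ~2% probability mentioned in the mail.
public class OptimizeOnClose {
    static final double OPTIMIZE_PROBABILITY = 0.02;

    // Split out so the decision is testable: 'draw' is a value in [0, 1).
    static boolean shouldOptimize(double draw) {
        return draw < OPTIMIZE_PROBABILITY;
    }

    public static void main(String[] args) {
        if (shouldOptimize(new Random().nextDouble())) {
            // writer.optimize(); // merges all segments; expensive and
            //                    // memory-hungry, hence done only rarely
        }
        // writer.close();
        System.out.println(shouldOptimize(0.01)); // prints true
    }
}
```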
Hi,
I tried with a 100MB heap size and got the error as well; it runs fine with 120MB.
Here is the histogram (application classes marked with --)
Heap Histogram
All Classes (excluding platform)
Class       Instance Count   Total Size
class [C    234200           30245722
class [B    1087565          25
Hi,
I do not set a RAM buffer size; I assume the default is 16MB.
My server runs with an 80MB heap size; before starting Lucene, about 50MB is
already used.
In a production environment I ran into this problem with the heap size set to
750MB and no other activity on the server (nighttime), though since then I diagnos
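Given the numbers in the mail above (80MB heap, about 50MB already used, Lucene's default 16MB RAM buffer), a quick headroom calculation makes the risk visible. The figures come from the thread; the helper itself is illustrative.

```java
// Sketch: rough headroom arithmetic from the figures quoted in the mail:
// 80MB heap, ~50MB used by the server, Lucene's default 16MB RAM buffer.
public class HeapHeadroom {
    static int headroomMb(int heapMb, int usedMb, int ramBufferMb) {
        return heapMb - usedMb - ramBufferMb;
    }

    public static void main(String[] args) {
        int left = headroomMb(80, 50, 16);
        System.out.println(left + "MB left"); // prints 14MB left
        // With only ~14MB to spare, segment merges and buffered delete terms
        // can easily push past the limit; writer.setRAMBufferSizeMB(...) is
        // the real Lucene API for shrinking the indexing buffer to fit.
    }
}
```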