Thanks for the reply,

> > The first time my code used the 3.4 libraries with version level set
> > to 3.4 and it tried to optimize() (still using this now deprecated old
> > call), the new code went wild! It took up more memory than the heap was
> > limited to, so I believe it is taking up system resources. We have
> > turned off optimize for now.
> 
> Did it throw OutOfMemoryException? 

As you thought, there was no OOM exception.

> I assume not - but I assume you have seen more virtual memory usage in
> "top", but that's not really related to optimize/forceMerge.

I'm actually on a large Windows server box (so no "top" here).

It is only related to optimize() in that all memory is used _only_ when the 
server runs an optimize().
Searching through our web interface (which reuses its one IndexReader) doesn't 
seem to drive up the memory (beyond use of the heap).

The problem is that when we run the optimize, the machine becomes memory 
limited and everything then runs slow (nearly preventing us from viewing 
what's going on), and the optimize() takes "forever" (an extra 1.5 hrs).  We 
are not trying to share the machine, but it is a bit excessive.
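
For what it's worth, one option we may try is a partial merge instead of a 
full optimize, which I understand is much cheaper.  A rough sketch against 
the 3.x API (the path and analyzer here are placeholders, and optimize(int) 
becomes forceMerge(int) in 3.5+):

    import java.io.File;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.Version;

    // Sketch only: merge down to at most 5 segments rather than 1.
    Directory dir = FSDirectory.open(new File("/path/to/index"));  // placeholder
    IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_34,
        new StandardAnalyzer(Version.LUCENE_34));
    IndexWriter writer = new IndexWriter(dir, conf);
    writer.optimize(5);   // partial merge; forceMerge(5) in 3.5+
    writer.close();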

Further reading suggests that maybe the right approach is to wait for 
IndexWriter to do the cleanup itself as the need arises.  Does that sound right?
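
If that is the recommended route, I assume the knobs live on the merge 
policy.  Here's a sketch of what I'd configure, assuming TieredMergePolicy 
(which I believe is the default in 3.4); the numbers are guesses, not 
recommendations:

    import org.apache.lucene.index.TieredMergePolicy;

    // Sketch: tune the policy and let background merges do the cleanup.
    TieredMergePolicy mp = new TieredMergePolicy();
    mp.setSegmentsPerTier(10.0);         // same-size segments allowed per tier
    mp.setMaxMergedSegmentMB(5 * 1024);  // cap merged segments at ~5 GB
    conf.setMergePolicy(mp);             // set on the config before opening the writer

(conf is the IndexWriterConfig from the snippet above.)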

Is my reading that the system will prevent excessive segments and clean up 
deleted documents on its own a reasonable assessment of the state of Lucene 
in 3.4 or 4.0?  If so, I don't really need to call optimize()/maybeMerge().
If I skip optimize()/maybeMerge(), am I missing anything I should be doing to 
keep the index 'tidy'?  What I don't want is to find that the index runs slow 
only after months of many document updates.
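
If reclaiming deletes is the main concern, my understanding is that there is 
a lighter-weight call that only merges segments carrying deletions; something 
like:

    // Sketch: reclaim space from deleted documents without a full optimize.
    // expungeDeletes() in 3.4; renamed forceMergeDeletes() in 3.5+.
    writer.expungeDeletes();
    writer.close();

Would an occasional call like that be enough, or is even this unnecessary 
with the default merge policy?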

Any discussion by those with similar indexes would be welcome.

-Paul

---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org
