On Oct 18, 2009, at 1:47 PM, GlenAbbeyDrive wrote:
I commit the IndexWriter every 200 documents in a batch as follows, and you
can see that I reopened the reader after the commit.
private void commit(IndexWriter writer) throws CorruptIndexException,
IOException, SQLException {
writer.commit();
if (indexReader != null)
Super! Thanks for bringing closure.
Mike
On Sun, Oct 18, 2009 at 2:31 PM, GlenAbbeyDrive wrote:
>
> Using a 'realtime' reader off the IndexWriter (writer.getReader()) instead of
> indexReader.reopen() seems to have fixed this problem.
>
> Michael McCandless-2 wrote:
>>
>> Hmm, not good. Can you share more details about how your app is using
>> Lucene?
>>
>> Do you also have an IndexReader that's open on this
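For reference, the fix GlenAbbeyDrive describes is the near-real-time pattern: get the reader straight from the IndexWriter rather than reopening a Directory-based reader. A minimal sketch against the Lucene 2.9 API (the class and method names here are illustrative, not from the thread; the old reader must still be closed):

```java
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;

class NrtSearch {
    // After indexing a batch, fetch a near-real-time reader from the
    // writer (sees pending, uncommitted changes) instead of calling
    // reopen() on a reader opened over the Directory.
    static IndexReader refresh(IndexWriter writer, IndexReader old)
            throws IOException {
        IndexReader reader = writer.getReader();
        if (old != null && old != reader) {
            old.close(); // still required, or deleted-file handles pile up
        }
        return reader;
    }
}
```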
Hi Glen:
I think it is in your application code:
The indexReader returned is not closed if the underlying index has changed.
If your update rate is high, you will run into this issue because GC may not
have caught up with the file-handle leak.
The code should instead be:
if (indexReader != null) {
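The snippet above is cut off, but it is presumably heading toward the standard reopen idiom documented for IndexReader.reopen(): close the old reader only when reopen() returns a new instance. A minimal sketch (the surrounding class and variable names are mine, not from the thread):

```java
import java.io.IOException;
import org.apache.lucene.index.IndexReader;

class ReaderRefresher {
    private IndexReader indexReader;

    // Refresh after a commit: reopen() returns the SAME instance when
    // nothing changed, and a NEW instance when the index changed. The
    // old instance must then be closed explicitly, or its file handles
    // leak until GC happens to finalize it.
    void refresh() throws IOException {
        if (indexReader != null) {
            IndexReader newReader = indexReader.reopen();
            if (newReader != indexReader) {
                indexReader.close(); // release the old reader's file handles
                indexReader = newReader;
            }
        }
    }
}
```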
I actually have seen the problem in both, with our app. Not sure if
it's how we are using Lucene or if the problem is internal. We
actually only saw the problem recently because we started using a new
Linux machine that had its open file handle settings too low. And
with our volume it
Are you using 2.9.0 or 2.4?
Aaron McCurry wrote:
>
> I also have seen this problem recently. I had to make a patch to our
> production system to at least relieve the system of the deleted file
> handles. I did it by capturing all of the Descriptors that the
> FileInputIndex object crea
Hmm, not good. Can you share more details about how your app is using Lucene?
Do you also have an IndexReader that's open on this directory? Do you
reopen it after indexing documents? If so, using IndexReader.reopen
or by closing and opening a new IndexReader? Or, by getting a near
real-time r
I also have seen this problem recently. I had to make a patch to our
production system to at least relieve the system of the deleted file
handles. I did it by capturing all of the Descriptors that the
FileInputIndex object creates and monitoring if the file it references
still exists or n
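Aaron's descriptor-monitoring patch is not shown, but the symptom he is tracking can also be watched from outside Lucene. A self-contained, Linux-specific sketch (the class name is mine, not from the thread) that counts this process's open descriptors whose underlying files have been deleted, roughly what `lsof | grep deleted` shows:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class DeletedFdMonitor {

    // Count file descriptors of the current process that still point at
    // deleted files. Linux-only: /proc/self/fd holds one symlink per open
    // fd, and the link target gains a " (deleted)" suffix once the file
    // is unlinked. Returns 0 where /proc is unavailable.
    public static long countDeletedOpenFds() throws IOException {
        Path fdDir = Paths.get("/proc/self/fd");
        if (!Files.isDirectory(fdDir)) {
            return 0; // not Linux (or /proc not mounted)
        }
        long count = 0;
        try (Stream<Path> fds = Files.list(fdDir)) {
            for (Path fd : (Iterable<Path>) fds::iterator) {
                try {
                    if (Files.readSymbolicLink(fd).toString()
                            .endsWith(" (deleted)")) {
                        count++;
                    }
                } catch (IOException ignored) {
                    // fd was closed between listing and readlink; skip it
                }
            }
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("fds on deleted files: " + countDeletedOpenFds());
    }
}
```

A steadily growing count after each commit/reopen cycle is the leak this thread describes.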
I switched from Lucene 2.4.0 to the latest 2.9.0 version and hit "too many
open files" within a few hours in my indexing process. Our indexing Java
process adds about 2000 documents/minute.
The IndexWriter (iw) has the following settings:
iw.setMaxFieldLength(1024*1024*1024); // 1G