already at the maxMergeMb limit and never get merged through the normal
indexing process.
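
Roughly, the knob in question can be adjusted like this (just a sketch,
assuming a Lucene 3.0.x-style setup where the writer's default merge policy
is LogByteSizeMergePolicy; the 4096 value is only illustrative):

import org.apache.lucene.index.LogByteSizeMergePolicy;
import org.apache.lucene.index.MergePolicy;

// "writer" is assumed to be an already-opened IndexWriter.
MergePolicy mp = writer.getMergePolicy();
if (mp instanceof LogByteSizeMergePolicy) {
    // Segments that have grown past maxMergeMB are skipped by normal merging,
    // so deletes buried in them stay on disk until an explicit
    // expungeDeletes()/optimize(). Raising the limit lets them merge again.
    ((LogByteSizeMergePolicy) mp).setMaxMergeMB(4096.0);
}
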
-Original Message-
From: Anshum [mailto:ansh...@gmail.com]
Sent: Tuesday, August 24, 2010 12:11 AM
To: java-user@lucene.apache.org
Subject: Re: Wanting batch update to avoid high disk usage
Hi Justin,
Lucene does not reclaim space on an update; each update translates to a
delete followed by an add.

The potentially large segment files get rewritten every time. So it looks
like our only option is to bail out when there's not enough space to
duplicate the existing index.
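
A minimal sketch of that bail-out check (assuming the index lives in an
ordinary filesystem directory and "writer" is the open IndexWriter on it;
File.getUsableSpace() needs Java 6):

import java.io.File;
import org.apache.lucene.index.IndexWriter;

// Only rewrite the index if the volume can hold a second copy of it.
void optimizeIfEnoughSpace(IndexWriter writer, File indexDir) throws Exception {
    long indexBytes = 0L;
    for (File f : indexDir.listFiles()) {
        indexBytes += f.length();
    }
    if (indexDir.getUsableSpace() < indexBytes) {
        System.err.println("Not enough free space to duplicate the index; bailing out.");
        return;
    }
    writer.optimize();   // rewrites the segments, dropping deleted documents
}
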
- Original Message
From: "Beard, Brian"
To: java-user@lucene.apache.org
Sent: Tue, August 24, 2010 8:19:52 AM
Subject: RE: Wanting batch update to avoid high disk usage
A commit would be required at some point.
- Original Message
From: Anshum
To: java-user@lucene.apache.org
Sent: Mon, August 23, 2010 10:18:36 PM
Subject: Re: Wanting batch update to avoid high disk usage
Don't bother calling expungeDeletes so often; it makes no sense. Instead, call
it once at the end. Since you're calling the optimize method at the end anyway,
that should take care of it by itself; there shouldn't be any difference (other
than a degradation in performance) from adding a call to expungeDeletes().
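
In code, the suggestion boils down to something like this (sketch against the
3.0.x API; in 3.5+ expungeDeletes()/optimize() were renamed to
forceMergeDeletes()/forceMerge(1)):

// ... add/update the whole batch of documents first ...

// Then reclaim the space held by deleted documents once, at the very end.
// If optimize() is already part of the job it makes this call redundant,
// since optimize() rewrites the segments and drops deletions itself.
writer.expungeDeletes();
writer.commit();
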
--
In an attempt to avoid doubling disk usage when adding new fields to all
existing documents, I added a call to IndexWriter::expungeDeletes. Then my
colleague pointed out that Lucene will rewrite the potentially large segment
files each time that method is called.
reader = writer.getReader();
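
Presumably the surrounding loop looks something like this sketch (the "id"
and "newField" names are placeholders, and it assumes every original field
is stored so the document can be rebuilt; getReader() returns a
near-real-time reader that also sees the not-yet-committed updates):

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;

IndexReader reader = writer.getReader();
try {
    for (int i = 0; i < reader.maxDoc(); i++) {
        if (reader.isDeleted(i)) {
            continue;
        }
        Document doc = reader.document(i);          // relies on stored fields
        doc.add(new Field("newField", "value",
                Field.Store.YES, Field.Index.NOT_ANALYZED));
        // updateDocument = delete the old copy + add the new one; the old
        // copy's space is only reclaimed when its segment gets rewritten.
        writer.updateDocument(new Term("id", doc.get("id")), doc);
    }
} finally {
    reader.close();
}
writer.commit();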