Greetings,
(You're receiving this e-mail because you're on a DL or I think you'd
be interested)
It's time for another Hadoop/Lucene/Apache "Cloud" stack meetup! This
month it'll be on Wednesday, the 28th, at 6:45 pm.
A *huge* thanks to everyone who showed up last month, and to Facebook
for send…
Super! Thanks for bringing closure.
Mike
On Sun, Oct 18, 2009 at 2:31 PM, GlenAbbeyDrive wrote:
>
> Using a 'realtime' reader off the IndexWriter (writer.getReader()) instead of
> indexReader.reopen() seems to have fixed this problem.
>
>
> Michael McCandless-2 wrote:
>>
>> Hmm, not good. Can you share more details about how your app is using
>> Lucene? …
Using a 'realtime' reader off the IndexWriter (writer.getReader()) instead of
indexReader.reopen() seems to have fixed this problem.
Michael McCandless-2 wrote:
>
> Hmm, not good. Can you share more details about how your app is using
> Lucene?
>
> Do you also have an IndexReader that's open on this directory? …
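In case it helps others, here is roughly the pattern I use now (a sketch;
even the near-real-time reader from 2.9 needs the close-on-reopen step):

  IndexReader reader = writer.getReader(); // NRT reader, sees uncommitted docs
  // ... later, after more documents have been added:
  IndexReader newReader = reader.reopen();
  if (newReader != reader) {
    reader.close();     // still required, or the old handles accumulate
    reader = newReader;
  }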
Hi Glen:
I think it is in your application code: when the underlying index has
changed, reopen() returns a new reader, and the old one is never closed.
If your update rate is high, you will run into this issue because GC may
not have caught up with the leaked file handles.
The code should instead be:
if (indexReader != null) { …
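That is, close the old reader whenever reopen() hands back a new instance
(a sketch; your surrounding code may differ):

  IndexReader newReader = indexReader.reopen();
  if (newReader != indexReader) {
    indexReader.close(); // releases the old reader's file handles
    indexReader = newReader;
  }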
I actually have seen the problem in both, with our app. Not sure if
it's how we are using Lucene or if the problem is internal. We
only saw the problem recently because we started using a new
Linux machine whose open-file-handle limit was set too low. And
with our volume it …
Are you using 2.9.0 or 2.4?
Aaron McCurry wrote:
>
> I also have seen this problem recently. I had to make a patch to our
> production system to at least relieve it of the deleted-file
> handles. I did it by capturing all of the Descriptors that the
> FileInputIndex object creates …
I commit the IndexWriter every 200 documents in a batch, as follows, and you
can see that I reopen the reader after the commit.
private void commit(IndexWriter writer) throws CorruptIndexException,
    IOException, SQLException {
  writer.commit();
  if (indexReader != null) …
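Should the part after the null check instead be something like this (a
sketch, assuming indexReader is a field as in the snippet above)?

  private void commit(IndexWriter writer) throws CorruptIndexException,
      IOException, SQLException {
    writer.commit();
    if (indexReader != null) {
      IndexReader newReader = indexReader.reopen();
      if (newReader != indexReader) {
        indexReader.close(); // release the old reader's file handles
        indexReader = newReader;
      }
    }
  }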
Hmm, not good. Can you share more details about how your app is using Lucene?
Do you also have an IndexReader that's open on this directory? Do you
reopen it after indexing documents? If so, using IndexReader.reopen,
or by closing and opening a new IndexReader? Or by getting a near
real-time reader …
I also have seen this problem recently. I had to make a patch to our
production system to at least relieve it of the deleted-file
handles. I did it by capturing all of the Descriptors that the
FileInputIndex object creates and monitoring whether the file each one
references still exists or not. …
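The idea, very roughly (the names here are hypothetical, not from our
actual patch):

  import java.io.File;
  import java.io.IOException;
  import java.io.RandomAccessFile;
  import java.util.Iterator;
  import java.util.Map;
  import java.util.concurrent.ConcurrentHashMap;

  // Hypothetical watchdog: track each descriptor with its backing file,
  // and periodically close descriptors whose file has been deleted.
  class DescriptorWatchdog {
    private final Map<RandomAccessFile, File> open =
        new ConcurrentHashMap<RandomAccessFile, File>();

    void register(RandomAccessFile raf, File f) {
      open.put(raf, f);
    }

    void sweep() throws IOException {
      Iterator<Map.Entry<RandomAccessFile, File>> it =
          open.entrySet().iterator();
      while (it.hasNext()) {
        Map.Entry<RandomAccessFile, File> e = it.next();
        if (!e.getValue().exists()) { // file deleted but handle still open
          e.getKey().close();
          it.remove();
        }
      }
    }
  }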
I switched from Lucene 2.4.0 to the latest 2.9.0 version and started hitting
"too many open files" within a few hours of running my indexing process. Our
indexing Java process adds about 2000 documents/minute.
The IndexWriter (iw) has the following settings:
iw.setMaxFieldLength(1024*1024*1024); // 1G
…
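For completeness, the other settings I know of that affect open-file counts
(standard IndexWriter calls in 2.x; I'm not sure these are my problem):

  iw.setMergeFactor(10);        // higher values keep more segments, hence more open files
  iw.setUseCompoundFile(true);  // one .cfs per segment instead of many small files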
Jamie, did you ever get to the bottom of this?
Can you reduce this code down to a smaller example that shows the hang?
Also, can you post a thread stack dump when you hit the hang?
Is it possible you are adding documents from one thread while calling
IndexWriter.close in another? I see you have …
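If that is the case, the usual pattern is to quiesce indexing before
close() — a minimal sketch (the executor name is an assumption about your
setup; java.util.concurrent):

  indexingPool.shutdown();                             // stop accepting new documents
  indexingPool.awaitTermination(10, TimeUnit.MINUTES); // let in-flight adds finish
  writer.close();                                      // now safe to close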
The termBuffer is just a buffer of arbitrary length (it is over-allocated
with some additional chars so that a new buffer does not need to be
allocated whenever a new char is added; it works the same way as
StringBuffer). termLength() returns the number of "valid" chars in the
buffer. If …
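In other words, read only the valid prefix. A minimal sketch, given a Token
instance `token` from your TokenStream:

  char[] buf = token.termBuffer();   // may be longer than the term
  int len = token.termLength();      // number of valid chars
  String term = new String(buf, 0, len);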
Sorry, what does it mean to "respect termLength()"?
On Sun, Oct 18, 2009 at 11:37 AM, Uwe Schindler wrote:
> You must also respect termLength(), which returns the number of "valid"
> chars in the term buffer.
You must also respect termLength(), which returns the number of "valid" chars
in the term buffer.
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de
> -----Original Message-----
> From: David Ginzburg [mailto:davidginzb...@gmail.com]
> Sent: Sun…