using two Writers simultaneously.
>>
>> - Mark
> P.S.
>
> Don't switch back to not sharing! Even your one client must enjoy not
> having to wait for that new Searcher to load up on every search :)
> Especially if you have any sort caches.
>
> - Mark
>
> ---
Paul J. Lucas wrote:
Sorry for the radio silence. I changed my code around so that a
single IndexReader and IndexSearcher are shared. Since doing that,
I've not seen the problem. That being the case, I didn't pursue the
issue.
I still think there's a bug because the code I had previously, IMHO,
should have
That really can't be it. I have *one* client connecting to my
server. And there isn't a descriptor leak.
My mergeFactor is 10.
- Paul
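For anyone skimming this thread: below is a minimal sketch of the single
shared IndexReader/IndexSearcher pattern Paul switched to, written against
the 2.3-era API. The holder class and its names are illustrative, not from
Paul's code; an IndexSearcher is safe to share across concurrent search
threads.

    import java.io.IOException;
    import org.apache.lucene.search.IndexSearcher;

    // Illustrative holder: all request threads reuse one searcher instead
    // of opening a new IndexReader (and its file descriptors) per search.
    public final class SharedSearcher {
        private static IndexSearcher searcher;

        public static synchronized IndexSearcher get(String indexPath)
                throws IOException {
            if (searcher == null) {
                // Opens the single underlying IndexReader once.
                searcher = new IndexSearcher(indexPath);
            }
            return searcher;
        }
    }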
On Jul 1, 2008, at 1:37 AM, Michael McCandless wrote:
Hmmm then it sounds possible you were in fact running out of file
descriptors.
What was your mergeFactor set to?
Mike
On Jun 12, 2008, at 6:39 AM, Michael McCandless wrote:
Hi Grant,
My stress test is unable to reproduce this exception, either. I'm
adding Wikipedia docs to an index, using a high merge factor, then
opening a new writer with low merge factor (5) and calling optimize.
This forces concurrent merges to run during the optimize.
One more questio
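A rough sketch of the stress test Mike describes, under the 2.3-era API;
the path, document count, and document contents here are stand-ins (the
Wikipedia source is elided):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;

    public class MergeStress {
        public static void main(String[] args) throws Exception {
            // Phase 1: pile up many segments with a high merge factor.
            IndexWriter writer =
                new IndexWriter("/tmp/stress", new StandardAnalyzer(), true);
            writer.setMergeFactor(50);
            for (int i = 0; i < 100000; i++) {
                Document doc = new Document();
                doc.add(new Field("body", "text " + i,
                                  Field.Store.NO, Field.Index.TOKENIZED));
                writer.addDocument(doc);
            }
            writer.close();

            // Phase 2: low merge factor + optimize, so ConcurrentMergeScheduler
            // runs merges concurrently while optimize() is in flight.
            writer = new IndexWriter("/tmp/stress", new StandardAnalyzer(), false);
            writer.setMergeFactor(5);
            writer.optimize();
            writer.close();
        }
    }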
On Jun 11, 2008, at 6:00 AM, Michael McCandless wrote:
Grant Ingersoll wrote:
Is more than one thread adding documents to the index?
I don't believe so, but I am trying to reproduce. I've only seen
it once, and don't have a lot of details, other than I noticed it
was on a specific file (.fdt) and was wondering if that was a
factor or not.
On Jun 10, 2008, at 3:35 PM, Michael McCandless wrote:
Grant,
Can you describe any details on how this app is using Lucene?
It's in Solr using the trunk.
EG are you using autoCommit=false or true?
ac=false
Is more than one thread adding documents to the index?
I don't believe so, but I am trying to reproduce.
Grant,
Can you describe any details on how this app is using Lucene? EG are
you using autoCommit=false or true? Is more than one thread adding
documents to the index? Any changes to the defaults in IndexWriter?
After seeing that exception, does IndexReader.open also hit that
exception?
Hi Paul,
Not sure if this was resolved, but I don't think it was. Can you try
reproducing this with setCompoundFile(false)? That is, turn off
compound files. I have an intermittent report of an exception that
looks eerily similar that I am trying to track down and I am not using
CFS and
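If anyone wants to try this, the 2.3-era IndexWriter switch is
setUseCompoundFile (Solr exposes the same setting as useCompoundFile in
solrconfig.xml); a one-line sketch using the INDEX/INDEX_ANALYZER constants
from Paul's snippet elsewhere in the thread:

    IndexWriter writer = new IndexWriter(INDEX, INDEX_ANALYZER, false);
    writer.setUseCompoundFile(false); // per-extension files instead of one .cfs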
Paul,
How often does your process start up? Are you really sure that there
can never be two instances of your process running? If/when you
gather the infoStream logs running up to this exception, can you also
log when IndexReader.unlock is called?
Two writers on the same index can defi
OK.
What is your mergeFactor?
Mike
Paul J. Lucas wrote:
On May 30, 2008, at 5:59 PM, Michael McCandless wrote:
One more question: when you hit that exception, does the offending
file in fact not exist (when you list the directory yourself)?
Yes, the file does not exist.
And, does the exception keep happening consistently (same file
missing) once that happens, or, does the same index work fine the
next time you try it?
Paul,
One more question: when you hit that exception, does the offending
file in fact not exist (when you list the directory yourself)?
And, does the exception keep happening consistently (same file
missing) once that happens, or, does the same index work fine the
next time you try it (i
Paul,
What is your mergeFactor set to?
Can you get the exception to happen with infoStream set on the
writer, and post that back?
Mike
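For reference, turning on the diagnostics Mike is asking for looks roughly
like this against the 2.3-era API (INDEX and INDEX_ANALYZER are the
constants from Paul's snippet elsewhere in the thread):

    IndexWriter writer = new IndexWriter(INDEX, INDEX_ANALYZER, false);
    writer.setInfoStream(System.out); // log segment/merge activity; any PrintStream works
    writer.setMergeFactor(10);        // 10 is the default, and what Paul reports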
Paul J. Lucas wrote:
On May 30, 2008, at 3:05 AM, Michael McCandless wrote:
Are you indexing only one document each time you open IndexWriter?
Or do you open a single IndexWriter, add all documents for that
directory, then close it?
The latter.
When the exception occurs, do you know how many simultaneous threads
Jamie,
The code looks better! You're not forcefully removing the write.lock
nor deleting files from the index yourself, anymore, which is good.
One thing I spotted is your VolumeIndex.deleteIndex method fails to
synchronize on the indexLock. If I understand the code correctly,
that mea
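Mike's point, sketched with the names quoted from Jamie's code (VolumeIndex,
indexLock); the body is a placeholder:

    // Inside the VolumeIndex class (sketch): deleteIndex must hold the
    // same lock as the code that writes to the index, or a delete can
    // race a merge that is still writing files.
    public void deleteIndex() {
        synchronized (indexLock) {
            // close the writer, then remove the index directory
        }
    }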
I guess my test index was corrupted some other way...I cannot duplicate
my results today without breaking things with two lockless Writers
first. Oh well.
I definitely saw it legitimately while playing with
IndexReader.reopen...if I kept enough of the old IndexReaders around
long enough I wo
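For context, the reopen idiom Mark is referring to (IndexReader.reopen()
was on trunk at the time; it shipped in 2.4). The old reader must be closed
once no search is still using it, otherwise its open file handles keep
deleted segment files alive; reader and searcher here are assumed fields:

    IndexReader newReader = reader.reopen(); // same instance if unchanged
    if (newReader != reader) {
        reader.close();                       // release old file handles
        reader = newReader;
        searcher = new IndexSearcher(reader); // swap in a fresh searcher
    }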
Hi Michael / others
The one thing I discovered was that it is quite useful to implement a
JVM shutdown hook in your code to prevent the index from getting
corrupted when an indexing process dies unexpectedly.
For those who don't know about the shutdown hook mechanism, you do this by
implementing
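A minimal sketch of the hook Jamie describes, assuming a single
application-wide IndexWriter held in a field named writer; note that a hook
runs on normal termination and SIGTERM, but not on kill -9 or a JVM crash,
so it narrows the window rather than closing it:

    Runtime.getRuntime().addShutdownHook(new Thread() {
        public void run() {
            try {
                writer.close(); // flush buffered docs, release write.lock
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    });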
Hi Michael
Thank you. Your suggestions were great and they were implemented (see
attached source code), however, unfortunately, I am still getting file
not found errors on the automatic merging of indexes.
Regards,
Jamie
A few more questions, below:
Paul J. Lucas wrote:
I have a thread that handles the unindexing/reindexing. It gets
changes from a BlockingQueue. My unindex code is like:
IndexWriter writer = new IndexWriter( INDEX, INDEX_ANALYZER, false );
final Term t = new Term( DIR_FIELD
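The quoted code is cut off after DIR_FIELD; what follows is a hedged
reconstruction of the usual delete-by-term idiom it appears to be, with the
term value (assumed to be a directory path, named dirPath here) guessed:

    // Hedged reconstruction -- everything after "new Term( DIR_FIELD" is
    // cut off in the archive; the term value is an assumption.
    IndexWriter writer = new IndexWriter(INDEX, INDEX_ANALYZER, false);
    final Term t = new Term(DIR_FIELD, dirPath);
    writer.deleteDocuments(t); // buffered delete, applied on flush/close
    writer.close();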
Paul J. Lucas wrote:
On May 29, 2008, at 6:35 PM, Michael McCandless wrote:
Can you use lsof (or something similar) to see how many files you
have?
FYI: I personally can't reproduce this; only a coworker can and
even then it's sporadic, so it could take a little while.
If possible, cou
Jamie,
I'd love to get to the root cause of your exception.
Last time we talked (a few weeks back) I saw several possible causes
in the source you had posted:
http://markmail.org/message/dqovvcwgwof5f7wl
Did you test any of the ideas there? You are potentially manually
deleting file
Hi Paul,
I just noticed the discussion around this.
Almost all of my customers have experienced or are experiencing the
intermittent FileNotFound problem.
Our software uses Lucene 2.3.1. I have just upgraded to Lucene 2.3.2 in
the hope that this was one of the bugs that was fixed.
I would be very interested
On May 29, 2008, at 6:35 PM, Michael McCandless wrote:
Can you use lsof (or something similar) to see how many files you
have?
FYI: I personally can't reproduce this; only a coworker can and even
then it's sporadic, so it could take a little while.
Merging, especially several running at once
Forgot to mention...keep trying if you get read past file exception...I
get that sometimes too.
On May 29, 2008, at 5:57 PM, Mark Miller wrote:
Paul J. Lucas wrote:
Are you saying that using multiple IndexSearchers will definitely
cause the problem I am experiencing and so the suggestion that
using a single IndexSearcher for optimization only is wrong?
Will it definitely cause your p
Michael Busch wrote:
Of course it can happen that you run out of available file
descriptors when a lot of threads open separate IndexReaders, and
then the SegmentMerger could certainly hit IOExceptions, but I
don't think a FileNotFoundException would be thrown in such a case.
I think I'v
Paul J. Lucas wrote:
if ( IndexReader.isLocked( INDEX ) )
IndexReader.unlock( INDEX );
The isLocked()/unlock() is because sometimes the server process
gets killed and leaves the index locked.
This makes me a bit nervous. Does this only run on startup of your
process?
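The danger Mike is pointing at is unlock() being reachable while another
live writer still holds the lock. A sketch of confining it to process
startup, using the 2.3-era statics (unlock() takes a Directory):

    // Run exactly once at startup, before any IndexWriter is created.
    Directory dir = FSDirectory.getDirectory(INDEX);
    if (IndexReader.isLocked(dir)) {
        // Only safe if no other process can be writing to this index.
        IndexReader.unlock(dir);
    }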
Mark Miller wrote:
Paul J. Lucas wrote:
Also, if you get a ton of concurrent searches, you will have an
IndexReader open for each...not only is this very wasteful in terms of
RAM and time, but as your IndexWriter merges you can have all kinds of
momentary references to normally unneeded index files
On May 29, 2008, at 5:18 PM, Mark Miller wrote:
It looks to me like you are not sharing an IndexSearcher across
threads.
My reading of the documentation says that doing so is an optimization
only and not a requirement.
Are you saying that using multiple IndexSearchers will definitely
cause the problem I am experiencing and so the suggestion that using
a single IndexSearcher for optimization only is wrong?
It looks to me like you are not sharing an IndexSearcher across threads.
You really should, or use a small pool of them (depending on
speed/ram/load).
The only time I usually see this error, I also see too many files open
first. Are you sure you don't have another exception as well?
Paul J. Lucas wrote:
I occasionally get a FileNotFoundException like:
Exception in thread "Thread-44" org.apache.lucene.index.MergePolicy$MergeException:
java.io.FileNotFoundException:
/Stuff/Caches/AuroraSupport/IM_IndexCache/INDEX/_27.cfs (No such file or directory)
at org.apache.lucene.index.ConcurrentMergeScheduler