That would definitely cause problems for any searches still in flight on the old IndexReader (though I'm not certain that the generic "Input/output error" IOException is what would be thrown).

You can use the IndexReader's incRef/decRef methods to ensure the reader stays open for all in-flight queries. When a query starts, call incRef inside a synchronized block that ensures the reader isn't being closed at the same time, and then call decRef (in a finally clause) when the query is completely finished. Then you can call close() on the old reader, and it will in fact remain open until all in-flight queries have decRef'd it.
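
Very roughly, something like this (just an untested sketch -- "readerLock"
and "currentReader" here stand in for whatever your IndexAccessor class
actually holds):

    // query side: pin the reader for the duration of the search
    IndexReader reader;
    synchronized (readerLock) {
        reader = currentReader;
        reader.incRef();
    }
    try {
        IndexSearcher searcher = new IndexSearcher(reader);
        // ... run the search, collect hits ...
    } finally {
        reader.decRef();   // the reader really closes only after the last
                           // in-flight query releases it
    }

    // reopen side: swap in the new reader, then close the old one
    synchronized (readerLock) {
        if (!currentReader.isCurrent()) {
            IndexReader newReader = currentReader.reopen();
            if (newReader != currentReader) {
                IndexReader old = currentReader;
                currentReader = newReader;
                old.close();   // stays open until in-flight queries decRef it
            }
        }
    }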

Mike

JulieSoko wrote:


I have been doing some reading and think something that I am doing could be
the problem.  For each search of an index, I first check to see if the
IndexReader is current and if not reopen it... I think I should not reopen the
IndexReader until I make sure that there are no other queries running against
this searcher/reader. Could this be causing my Input/output error?
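
Roughly the check I do on each search is this (typed from memory, not the
exact code):

      if (!reader.isCurrent()) {
          reader = reader.reopen();
          searcher = new IndexSearcher(reader);
      }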


Michael McCandless-2 wrote:


What OS/filesystem are you using?

The code looks fine to me.  2 or more searches running at the same
time on the same index should be harmless; this happens in Lucene all
the time in normal applications.

Are you sure you're re-using already opened Searchers, and not
accidentally opening a new searcher per user's search?
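
For example, something along these lines (just a sketch -- the map, its key
and getSearcher are made-up names, not anything in Lucene or in your code):

    // one shared IndexSearcher per index directory, opened once and reused
    private final java.util.Map<String,IndexSearcher> searchers =
        new java.util.concurrent.ConcurrentHashMap<String,IndexSearcher>();

    IndexSearcher getSearcher(String indexPath) throws IOException {
        IndexSearcher s = searchers.get(indexPath);
        if (s == null) {
            synchronized (searchers) {
                s = searchers.get(indexPath);
                if (s == null) {
                    s = new IndexSearcher(IndexReader.open(indexPath));
                    searchers.put(indexPath, s);
                }
            }
        }
        return s;
    }

If instead a new searcher (and so a new IndexReader, with its own set of open
files) is created for every incoming search, you can burn through file
descriptors quite quickly.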

Is it possible you are running out of file descriptors? That
IOException (generic Input/output error) means there is a low level IO
issue.

Mike

JulieSoko wrote:


Hello All,
First of all, I'm new to Lucene, and have written code using it to search
over 1 to many indexes, using a user-defined query.
I don't have any code on this system, so I have to type everything in here...
I have the following design but am getting an Input/output error exception,
part of which I have typed in below.
My question is this: do I have a glaring flaw in this design? I am reusing
the IndexSearchers/IndexReaders and not closing them. The Input/output error
arises when 2 or more searches occur at the same time over some of the same
indexes. Can you give me some direction on where I should look for the
solution to the exception?

Here is an explanation of my data:
      Up to 60 different indexes used at a time
      1 directory per 1 day of data
      Millions of documents per day
      Data is received and indexes merged on a continual basis - a whole
        separate process

  The index contains:
      value: content    eventType: type of data    eventTime: time data collected

  1 to many users can create individual queries containing 1 or more of the
  fields and values, searching over 1 to many indexes

  Design:
      Utilize the IndexAccessor classes to cache IndexSearchers/IndexReaders,
      i.e. they are made one per index and never closed.

      Use a ParallelMultiSearcher - create one per request using 1 to many of
      the indexes

       try {
           QueryParser parser = new QueryParser("value", new StandardAnalyzer());
           parser.setDefaultOperator(QueryParser.AND_OPERATOR);
           Query query = parser.parse(queryString);

           TopDocCollector col = new TopDocCollector(MAX_NUMBER_HITS);
           multiSearcher.search(query,
               new RangeFilter("eventTime", startTime, endTime, true, true), col);
           int numHits = col.getTotalHits();
           TopDocs docs = col.topDocs();

           if (numHits > 0) {
               for (int i = 0; i < numHits && i < MAX_NUMBER_HITS; i++) {
                   Document doc = multiSearcher.doc(docs.scoreDocs[i].doc);
                   ....
               }
           }
       } catch (Exception e) {
           e.printStackTrace();
       } finally {
           // IndexSearchers are not closed since they are shared by many users
       }




When the second user's query accesses directories used by the first query, I
get the following error:

java.io.IOException: Input/output error
    at java.io.RandomAccessFile.readBytes(Native Method)
    at java.io.RandomAccessFile.read(RandomAccessFile.java:315)
    at org.apache.lucene.store.FSDirectory$FSIndexInput.readInternal(FSDirectory.java:550)
    at org.apache.lucene.store.BufferedIndexInput.readBytes(BufferedIndexInput.java:131)
    at org.apache.lucene.index.CompoundFileReader$CSIndexInput.readInternal(CompoundFileReader.java:240)
    at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:152)
    at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:152)
    at org.apache.lucene.store.IndexInput.readVInt(IndexInput.java:76)
    at org.apache.lucene.index.TermBuffer.read(TermBuffer.java:63)
    at org.apache.lucene.index.SegmentTermEnum.next(SegmentTermEnum.java:123)
    at org.apache.lucene.index.SegmentTermEnum.scanTo(SegmentTermEnum.java:154)
    at org.apache.lucene.index.TermInfosReader.scanEnum(TermInfosReader.java:223)
    at org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:217)
    at org.apache.lucene.index.SegmentReader.docFreq(SegmentReader.java:678)
    at org.apache.lucene.search.IndexSearcher.docFreq(IndexSearcher.java:87)
    at org.apache.lucene.search.Searcher.docFreqs(Searcher.java:118)
    at org.apache.lucene.search.MultiSearcher.createWeight(MultiSearcher.java:311)
    at org.apache.lucene.search.Searcher.search(Searcher.java:178)

Thanks!
