When you run the optimize, is it with the IndexWriter that's already open on the old directory?

Do you use the default locking implementation in Directory (SimpleFSLockFactory)? Are you changing the lock directory, or ever forcefully removing the lock file (or calling IndexReader/IndexWriter#unlock)?

Most often, an entirely missing file is due to two writers accidentally being opened at the same time on the same index.
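
For reference, the default setup should already guard against that. Here's a minimal sketch (Lucene 2.x-era API; the path is just a placeholder) of opening the directory with an explicit SimpleFSLockFactory kept inside the index directory, so a second writer on the same index fails with LockObtainFailedException instead of silently clobbering files:

import java.io.File;
import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.SimpleFSLockFactory;

public class SingleWriterExample {
  public static void main(String[] args) throws IOException {
    File indexDir = new File("/path/to/index");
    // Keep the write.lock inside the index directory itself, so every
    // process that opens this index contends for the same lock file.
    FSDirectory dir = FSDirectory.getDirectory(indexDir,
        new SimpleFSLockFactory(indexDir));
    IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), false);
    try {
      // ... addDocument / updateDocument calls ...
    } finally {
      writer.close();  // releases write.lock
    }
  }
}

If a second writer does get in (e.g. because the lock file was removed by hand), it can delete files out from under the first one, which is exactly how a .cfs file goes missing.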

If you can use IndexWriter.setInfoStream to turn on logging and capture all output leading up to the missing file, I can dig further.
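
Something along these lines, assuming a file is a convenient target for the output (the paths are just examples):

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.PrintStream;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;

public class InfoStreamExample {
  public static void main(String[] args) throws IOException {
    IndexWriter writer = new IndexWriter(new File("/path/to/index"),
        new StandardAnalyzer(), false);
    // Route IndexWriter's internal diagnostics (flushes, merges, commits,
    // file deletions) to a log file; append + autoflush so nothing is lost.
    PrintStream infoLog = new PrintStream(
        new FileOutputStream("/path/to/lucene-infostream.log", true), true);
    writer.setInfoStream(infoLog);
    // ... index and optimize as usual ...
    writer.close();
    infoLog.close();
  }
}

Then send along everything in that log from the last successful commit up to the exception.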

Mike

On Oct 17, 2008, at 4:09 AM, mahdi yari wrote:

Just one IndexWriter writes to the index. After a specific time (like one day or one week), I create a new IndexWriter on a new Directory, and another thread tries to optimize the old IndexWriter.
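
Roughly like this (a simplified sketch; the real paths, analyzer, and scheduling differ):

import java.io.File;
import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;

public class IndexRollover {
  public static void main(String[] args) throws IOException {
    // Writer that has been receiving documents until now.
    final IndexWriter oldWriter = new IndexWriter(
        new File("/indexes/old"), new StandardAnalyzer(), false);
    // Rollover: from here on, new documents go to a fresh directory.
    IndexWriter newWriter = new IndexWriter(
        new File("/indexes/new"), new StandardAnalyzer(), true);
    // Background thread merges the old index down and then releases it;
    // nothing else writes to the old directory while this runs.
    new Thread() {
      public void run() {
        try {
          oldWriter.optimize();
          oldWriter.close();
        } catch (IOException e) {
          e.printStackTrace();
        }
      }
    }.start();
    // ... newWriter.addDocument(...) for incoming documents ...
  }
}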

On Thu, Oct 16, 2008 at 5:25 PM, Michael McCandless <[EMAIL PROTECTED]> wrote:


Can you post the full stack trace for your exception, and describe your
indexing process as well?

Mike


mahdi yari wrote:

Hi all,
I have the same problem. I'm indexing on an Ubuntu Linux distro with a large index (>30 GB) and mergeFactor = 10.
My Lucene version is 2.2.0; I think this may be a bug in Lucene 2.2.0.

I only get this error sometimes, not always.
Thanks a lot


On Thu, Oct 16, 2008 at 3:01 PM, Chaula Ganatra <[EMAIL PROTECTED]> wrote:

Hi,

I am again getting the following error during optimization.


java.io.FileNotFoundException: \\machine01\indexes\_w5.cfs (The system cannot find the file specified)
        at java.io.RandomAccessFile.open(Native Method)
        at java.io.RandomAccessFile.<init>(Unknown Source)
        at org.apache.lucene.store.FSDirectory$FSIndexInput$Descriptor.<init>(FSDirectory.java:506)
        at org.apache.lucene.store.FSDirectory$FSIndexInput.<init>(FSDirectory.java:536)
        at org.apache.lucene.store.FSDirectory.openInput(FSDirectory.java:445)
        at org.apache.lucene.index.CompoundFileReader.<init>(CompoundFileReader.java:70)
        at org.apache.lucene.index.SegmentReader.initialize(SegmentReader.java:181)
        at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:167)
        at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:139)
        at org.apache.lucene.index.IndexWriter.mergeSegments(IndexWriter.java:1867)
        at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:1231)


If I try to optimize again, I get the same error.

Can anyone please help me out? It is occurring in a live environment.


Regards,
Chaula

-----Original Message-----
From: Michael McCandless [mailto:[EMAIL PROTECTED]
Sent: 26 September, 2008 8:00 PM
To: java-user@lucene.apache.org
Subject: Re: How to restore corrupted index


It's perfectly fine to have a reader open on an index, while an
IndexWriter runs optimize.
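
For example (paths are placeholders), a reader opened before optimize simply keeps searching its point-in-time snapshot while the writer merges:

import java.io.File;
import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;

public class ReaderDuringOptimize {
  public static void main(String[] args) throws IOException {
    File indexDir = new File("/path/to/index");
    IndexReader reader = IndexReader.open(indexDir);
    IndexWriter writer = new IndexWriter(indexDir, new StandardAnalyzer(), false);
    writer.optimize();   // merges down to one segment while the reader stays open
    writer.close();
    // The reader still sees the index as it was when it was opened.
    System.out.println("docs visible to the old reader: " + reader.numDocs());
    reader.close();
  }
}

What you must not do is open a second IndexWriter on the same directory; an open reader is harmless.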

Which version of Lucene are you using?  And which OS & filesystem?

Mike

Chaula Ganatra wrote:

It was a Reader on the same index, which I did not close, so it gave an exception in writer.optimize().

Chaula

-----Original Message-----
From: Michael McCandless [mailto:[EMAIL PROTECTED]
Sent: 26 September, 2008 7:17 PM
To: java-user@lucene.apache.org
Subject: Re: How to restore corrupted index


Can you post the full stack trace in both cases?

Mike

Chaula Ganatra wrote:

I found one case where such extra files remain: when we call writer.optimize() it throws an exception, and multiple files are left behind in the index directory.

After that, when we add a document to the index by calling writer.addDocument, it throws java.lang.NegativeArraySizeException.

Regards,
Chaula

-----Original Message-----
From: Grant Ingersoll [mailto:[EMAIL PROTECTED]
Sent: 26 September, 2008 6:02 PM
To: java-user@lucene.apache.org
Subject: Re: How to restore corrupted index

There is the CheckIndex tool included in the distribution for checking/fixing bad indexes, but it can't solve everything.
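
Roughly like this (the jar name depends on your release, and -fix, where available, drops any segment it can't read, so copy the index somewhere safe first):

java -cp lucene-core.jar org.apache.lucene.index.CheckIndex /path/to/index
java -cp lucene-core.jar org.apache.lucene.index.CheckIndex /path/to/index -fix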

The bigger question is why it is happening to begin with. Can you
describe your indexing process?  How do you know the index is
actually
corrupted?  Are you seeing exceptions when opening it?

-Grant
On Sep 26, 2008, at 6:49 AM, Chaula Ganatra wrote:

We have an application in which the index is updated frequently.

During development we found that the index gets corrupted, i.e. more than one .cfs file, plus files with other extensions (e.g. .frq, .fnm, .nrm), remain in the index directory.

Is there any way to prevent this issue entirely, or, if it happens, to recover the index data?

It would be a great help if someone can.





Regards,

Chaula






--------------------------
Grant Ingersoll
http://www.lucidimagination.com

Lucene Helpful Hints:
http://wiki.apache.org/lucene-java/BasicsOfPerformance
http://wiki.apache.org/lucene-java/LuceneFAQ







