OK thanks for bringing closure!
Accidentally allowing two writers to write to the same index quickly
leads to corruption. They are like Betta fish: put them in the same
cage and they fight to the death, removing each other's files.
Mike
On Wed, Dec 9, 2009 at 1:56 AM, Max Lynch wrote:
Hi Mike,
Missed your response on this,
What I was doing was physically removing index/write.lock if older than 8
hours, allowing another process of my indexer to run. I realize in
hindsight that there is no reason why I should be doing this and it was
really stupid. I think I was under the impre
You can use o.a.l.index.CheckIndex to fix the index. It will remove
references to any segments that are missing or have problems during
testing. First run it without -fix to see what problems there are.
Then take a backup of the index. Then run it with -fix. The index
will lose all docs in thos
Missed your response, thanks Bernd.
I don't think that's it, since I haven't been executing any commands like
that. The only thing I could think of is corruption. I've got the index
backed up in case there is a way to fix it (it won't matter in a week or so
since I cull any documents older than
Hi Max
just a guess: maybe you deleted all *.c source files in that area and
unintentionally deleted this index file, too.
Bernd
On Fri, Oct 2, 2009 at 17:10, Max Lynch wrote:
> I'm getting this error when I try to run my searcher and my indexer:
>
> Traceback (most recent call last):
> self.
Wojtek212 wrote:
You were right, I had 2 IndexWriters. I've checked again and it
turned out I had 2 IndexManagers loaded by 2 different classloaders,
so even if I stored it in a static Map, it didn't help.
Phew! That's tricky (two different classloaders). Good sleuthing.
Anyway thanks for
Hi Mike,
You were right, I had 2 IndexWriters. I've checked again and it turned out I
had 2 IndexManagers loaded by 2 different classloaders, so even if I stored
it in a static Map, it didn't help.
Anyway, thanks for the help. But I have one last question. Is it correct if I use
IndexSearcher during working In
From this log I can see you do in fact have two IndexWriters open at
the same time (see how IW 6 and IW 42 have intermingled log lines
right before the exception).
Are you sure you're not still unlocking the index? Without unlocking
the index, and if you're using either Simple or NativeF
Here is Lucene log with first exceptions that occured (FSDirectory with
NativeFSLockFactory).
IFD [Thread-79]: setInfoStream
[EMAIL PROTECTED]
IW 4 [Thread-79]: setInfoStream:
dir=org.apache.lucene.store.FSDirectory@/tmp/content/3615.0-3618.0
autoCommit=true
[EMAIL PROTECTED]
[EMAIL PROTECTED]
ra
The strange thing is that when I use only FSDirectory with
SimpleFSLockFactory I don't see any exception (or I couldn't reproduce the
problem). Neither FSDirectory with NativeFSLockFactory nor my own
implementation of Directory and Lock (based on java.nio) works.
Hmmm, I don't see the reason of
Hmmm OK. I would stick with the NativeFSLockFactory, and never call
IndexReader.unlock.
Can you call IndexWriter.setInfoStream, and then post the resulting
log? It may provide clues of what's happening.
Also, if you can narrow this to a small test case that shows the
exception, that'd
I've checked unlock and it is not called until the exception occurs.
BTW, I've tried to use FSDirectory with NativeFSLockFactory and I
didn't get LockObtainFailedException. I also removed the part that
does the unlocking (IndexReader.unlock).
The exception is:
Exception in thread "Thread-95"
org.apa
Another option is to switch to native locks (dir.setLockFactory(new
NativeFSLockFactory())), at which point you will never have to call
IndexReader.unlock, because native locks are always properly released
by the OS when the JVM exits/crashes.
If on switching to native locks, and removing t
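The native-lock behaviour described above comes from java.nio file locking. Here is a minimal pure-JDK sketch of the mechanism (this is an illustration only, not Lucene's actual NativeFSLockFactory source): the OS hands out the lock and releases it when the process dies, so no stale write.lock file is left behind.

```java
import java.io.File;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.StandardOpenOption;

public class NativeLockDemo {
    public static void main(String[] args) throws Exception {
        File lockFile = File.createTempFile("write", ".lock");
        lockFile.deleteOnExit();
        try (FileChannel channel = FileChannel.open(lockFile.toPath(),
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // First acquisition succeeds: we now hold the OS-level lock.
            FileLock lock = channel.tryLock();
            System.out.println("acquired=" + (lock != null));
            // A second overlapping attempt from the same JVM is rejected
            // immediately rather than silently granted.
            boolean overlapDetected = false;
            try {
                channel.tryLock();
            } catch (OverlappingFileLockException e) {
                overlapDetected = true;
            }
            System.out.println("overlap-detected=" + overlapDetected);
            lock.release();
            // If this process had crashed instead of releasing the lock,
            // the OS would have released it anyway on process exit.
        }
    }
}
```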
Wojtek212 wrote:
Hi Mike,
I'm sharing one instance of IndexManager across all threads and as
I've
noticed only this one is used during indexing.
OK, maybe triple check this -- because that's the only way in your
code I can see 2 IWs being live at once.
I'm unlocking before every inde
Hi Mike,
I'm sharing one instance of IndexManager across all threads and as I've
noticed only this one is used during indexing.
I'm unlocking before every indexing operation to make sure indexing
will be possible.
When IndexWriter is closed I assume it releases the lock and finishes its
work.
Does Ind
Are you only creating one instance of IndexManager and then sharing
that instance across all threads?
Can you put some logging/printing where you call IndexReader.unLock,
to see how often that's happening? That method is dangerous because
if you unlock a still-active IndexWriter it leads
I saw the same problem for a while. Here is how I use my Lucene index:
1) I don't use the compound file format.
2) I have a single process and a single thread updating the index.
The index is really small; the mergeFactor is 10.
After the index is updated, the same thread copies the index to a tmp
Mark Miller wrote:
Paul J. Lucas wrote:
Sorry for the radio silence. I changed my code around so that a
single IndexReader and IndexSearcher are shared. Since doing that,
I've not seen the problem. That being the case, I didn't pursue the
issue.
I still think there's a bug because the cod
Paul J. Lucas wrote:
Sorry for the radio silence. I changed my code around so that a
single IndexReader and IndexSearcher are shared. Since doing that,
I've not seen the problem. That being the case, I didn't pursue the
issue.
I still think there's a bug because the code I had previously,
That really can't be it. I have *one* client connecting to my
server. And there isn't a descriptor leak.
My mergeFactor is 10.
- Paul
On Jul 1, 2008, at 1:37 AM, Michael McCandless wrote:
Hmmm then it sounds possible you were in fact running out of file
descriptors.
What was your merg
Hmmm then it sounds possible you were in fact running out of file
descriptors.
What was your mergeFactor set to?
Mike
Paul J. Lucas wrote:
Sorry for the radio silence. I changed my code around so that a
single IndexReader and IndexSearcher are shared. Since doing that,
I've not seen
Sorry for the radio silence. I changed my code around so that a
single IndexReader and IndexSearcher are shared. Since doing that,
I've not seen the problem. That being the case, I didn't pursue the
issue.
I still think there's a bug because the code I had previously, IMHO,
should have
On Jun 12, 2008, at 6:39 AM, Michael McCandless wrote:
Hi Grant,
My stress test is unable to reproduce this exception, either. I'm
adding Wikipedia docs to an index, using a high merge factor, then
opening a new writer with low merge factor (5) and calling
optimize. This forces concur
Hi Grant,
My stress test is unable to reproduce this exception, either. I'm
adding Wikipedia docs to an index, using a high merge factor, then
opening a new writer with low merge factor (5) and calling optimize.
This forces concurrent merges to run during the optimize.
One more questio
On Jun 11, 2008, at 6:00 AM, Michael McCandless wrote:
Grant Ingersoll wrote:
Is more than one thread adding documents to the index?
I don't believe so, but I am trying to reproduce. I've only seen
it once, and don't have a lot of details, other than I noticed it
was on a specific fil
Grant Ingersoll wrote:
Is more than one thread adding documents to the index?
I don't believe so, but I am trying to reproduce. I've only seen
it once, and don't have a lot of details, other than I noticed it
was on a specific file (.fdt) and was wondering if that was a
factor or not.
On Jun 10, 2008, at 3:35 PM, Michael McCandless wrote:
Grant,
Can you describe any details on how this app is using Lucene?
It's in Solr using the trunk.
EG are you using autoCommit=false or true?
ac=false
Is more than one thread adding documents to the index?
I don't believe so, b
Grant,
Can you describe any details on how this app is using Lucene? EG are
you using autoCommit=false or true? Is more than one thread adding
documents to the index? Any changes to the defaults in IndexWriter?
After seeing that exception, does IndexReader.open also hit that
exception
Hi Paul,
Not sure if this was resolved, but I don't think it was. Can you try
reproducing this with setCompoundFile(false)? That is, turn off
compound files. I have an intermittent report of an exception that
looks eerily similar that I am trying to track down and I am not using
CFS and
Paul,
How often does your process start up? Are you really sure that there
can never be two instances of your process running? If/when you
gather the infoStream logs running up to this exception, can you also
log when IndexReader.unLock is called?
Two writers on the same index can defi
OK.
What is your mergeFactor?
Mike
Paul J. Lucas wrote:
On May 30, 2008, at 5:59 PM, Michael McCandless wrote:
One more question: when you hit that exception, does the offending
file in fact not exist (when you list the directory yourself)?
Yes, the file does not exist.
And, does the e
On May 30, 2008, at 5:59 PM, Michael McCandless wrote:
One more question: when you hit that exception, does the offending
file in fact not exist (when you list the directory yourself)?
Yes, the file does not exist.
And, does the exception keep happening consistently (same file
missing) onc
Paul,
One more question: when you hit that exception, does the offending
file in fact not exist (when you list the directory yourself)?
And, does the exception keep happening consistently (same file
missing) once that happens, or, does the same index work fine the
next time you try it (i
Paul,
What is your mergeFactor set to?
Can you get the exception to happen with infoStream set on the
writer, and post that back?
Mike
Paul J. Lucas wrote:
On May 30, 2008, at 3:05 AM, Michael McCandless wrote:
Are you indexing only one document each time you open
IndexWriter? Or do
Paul J. Lucas wrote:
On May 30, 2008, at 3:05 AM, Michael McCandless wrote:
Are you indexing only one document each time you open IndexWriter?
Or do you open a single IndexWriter, add all documents for that
directory, then close it?
The latter.
When the exception occurs, do you know how ma
On May 30, 2008, at 3:05 AM, Michael McCandless wrote:
Are you indexing only one document each time you open IndexWriter?
Or do you open a single IndexWriter, add all documents for that
directory, then close it?
The latter.
When the exception occurs, do you know how many simultaneous thre
Jamie,
The code looks better! You're no longer forcefully removing the
write.lock or deleting files from the index yourself, which is good.
One thing I spotted is your VolumeIndex.deleteIndex method fails to
synchronize on the indexLock. If I understand the code correctly,
that mea
I guess my test index was corrupted some other way...I can not duplicate
my results today without breaking things with two lockless Writers
first. Oh well.
I definitely saw it legitimately while playing with
IndexReader.reopen...if I kept enough of the old IndexReaders around
long enough I wo
Hi Michael / others
The one thing I discovered was that it is quite useful to implement a
JVM shutdown hook in your code to prevent the index from getting
corrupted when an indexing process dies unexpectedly.
For those who don't know about shutdown hook mechanism, you do this by
implementin
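The shutdown-hook idea above can be sketched in plain Java. The Indexer class here is a hypothetical stand-in for whatever object owns your IndexWriter; in real code its close() would call indexWriter.close(), which releases the write.lock.

```java
public class ShutdownHookDemo {
    static class Indexer {
        private boolean closed = false;
        synchronized void close() {
            if (!closed) {
                closed = true;
                // In real code: indexWriter.close() releases write.lock.
                System.out.println("index closed cleanly");
            }
        }
    }

    public static void main(String[] args) {
        final Indexer indexer = new Indexer();
        // The hook runs on normal exit and on SIGTERM, but NOT on
        // kill -9 or a JVM crash -- it reduces, not eliminates, the risk.
        Runtime.getRuntime().addShutdownHook(new Thread() {
            public void run() {
                indexer.close();
            }
        });
        System.out.println("indexing...");
        // main ends here; the JVM begins shutdown and the hook fires.
    }
}
```

Making close() synchronized and idempotent matters: the hook may race with a normal shutdown path that also closes the writer.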
Hi Michael
Thank you. Your suggestions were great and they were implemented (see
attached source code); unfortunately, however, I am still getting
FileNotFound errors during the automatic merging of indexes.
Regards,
Jamie
Michael McCandless wrote:
Jamie,
I'd love to get to the root cause
A few more questions, below:
Paul J. Lucas wrote:
I have a thread that handles the unindexing/reindexing. It takes
changes from a BlockingQueue. My unindex code is like:
IndexWriter writer = new IndexWriter( INDEX, INDEX_ANALYZER, false );
final Term t = new Term( DIR_FIELD
Paul J. Lucas wrote:
On May 29, 2008, at 6:35 PM, Michael McCandless wrote:
Can you use lsof (or something similar) to see how many files you
have?
FYI: I personally can't reproduce this; only a coworker can and
even then it's sporadic, so it could take a little while.
If possible, cou
Jamie,
I'd love to get to the root cause of your exception.
Last time we talked (a few weeks back) I saw several possible causes
in the source you had posted:
http://markmail.org/message/dqovvcwgwof5f7wl
Did you test any of the ideas there? You are potentially manually
deleting file
Hi Paul,
I just noticed the discussion around this.
Almost all of my customers have experienced, or are experiencing, the
intermittent FileNotFound problem.
Our software uses Lucene 2.3.1. I have just upgraded to Lucene 2.3.2 in
the hope that this was one of the bugs that was fixed.
I would be very inter
On May 29, 2008, at 6:35 PM, Michael McCandless wrote:
Can you use lsof (or something similar) to see how many files you
have?
FYI: I personally can't reproduce this; only a coworker can and even
then it's sporadic, so it could take a little while.
Merging, especially several running at o
Forgot to mention: keep trying if you get a read-past-file exception;
I get that sometimes too.
-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
Michael McCandless wrote:
Michael Busch wrote:
Of course it can happen that you run out of available file
descriptors when a lot of threads open separate IndexReaders, and
then the SegmentMerger could certainly hit IOExceptions, but I don't
think a FileNotFoundException would be thrown in su
On May 29, 2008, at 6:26 PM, Michael McCandless wrote:
Paul J. Lucas wrote:
if ( IndexReader.isLocked( INDEX ) )
IndexReader.unlock( INDEX );
The isLocked()/unlock() is because sometimes the server process
gets killed and leaves the index locked.
This makes me a bit
On May 29, 2008, at 5:57 PM, Mark Miller wrote:
Paul J. Lucas wrote:
Are you saying that using multiple IndexSearchers will definitely
cause the problem I am experiencing, and so the suggestion that a
single IndexSearcher is only an optimization is wrong?
Will it definitely cause your p
Michael Busch wrote:
Of course it can happen that you run out of available file
descriptors when a lot of threads open separate IndexReaders, and
then the SegmentMerger could certainly hit IOExceptions, but I
don't think a FileNotFoundException would be thrown in such a case.
I think I'v
Paul J. Lucas wrote:
if ( IndexReader.isLocked( INDEX ) )
IndexReader.unlock( INDEX );
The isLocked()/unlock() is because sometimes the server process
gets killed and leaves the index locked.
This makes me a bit nervous. Does this only run on startup of your
proces
Michael Busch wrote:
Mark Miller wrote:
Paul J. Lucas wrote:
Also, if you get a ton of concurrent searches, you will have an
IndexReader open for each...not only is this very wasteful in terms
of RAM and time, but as your IndexWriter merges you can have all
kinds of momentary references to
Mark Miller wrote:
Paul J. Lucas wrote:
Also, if you get a ton of concurrent searches, you will have an
IndexReader open for each...not only is this very wasteful in terms of
RAM and time, but as your IndexWriter merges you can have all kinds of
momentary references to normally unneeded inde
Paul J. Lucas wrote:
On May 29, 2008, at 5:18 PM, Mark Miller wrote:
It looks to me like you are not sharing an IndexSearcher across threads.
My reading of the documentation says that doing so is an optimization
only and not a requirement.
Are you saying that using multiple IndexSearchers
On May 29, 2008, at 5:18 PM, Mark Miller wrote:
It looks to me like you are not sharing an IndexSearcher across
threads.
My reading of the documentation says that doing so is an optimization
only and not a requirement.
Are you saying that using multiple IndexSearchers will definitely
ca
It looks to me like you are not sharing an IndexSearcher across threads.
You really should, or use a small pool of them (depending on
speed/ram/load).
The only time I usually see this error, I also see too many files open
first. Are you sure you don't have another exception as well?
Paul J
> Ok if I well understood I have to put the lock file at the
> same place in my indexing process and searching process.
> the code for indexing:
> System.setProperty("org.apache.lucene.lockDir", System
>.getProperty("user.dir"));
Are you sure that both the search process
For the index process I use IndexModifier class.
That happens when I try to search the index while the indexing
process is still running.
the code for indexing:
System.setProperty("org.apache.lucene.lockDir", System
.getProperty("user.dir"));
Yes, I use the NFS mount to share the index with the other search
instances, and all the instances have the same lock directory
configured. The only difference is that the NFS mount is read-only,
so I have to disable the lock mechanism for the search instances;
the lock is enabled only for index modific
Ok thanks a lot.
-Original Message-
From: Michael McCandless [mailto:[EMAIL PROTECTED]
Sent: 01 August 2006 17:19
To: java-user@lucene.apache.org
Subject: Re: FileNotFoundException
> Ok if I well understood I have to put the lock file at the same place
in
> my indexing proce
OK, if I understood well, I have to put the lock file in the same
place for my indexing process and my searching process.
That's correct.
And, that place can't be an NFS mounted directory (until we fix locking
implementation...).
The two different processes will use this lock file to make sure
OK, if I understood well, I have to put the lock file in the same
place for my indexing process and my searching process.
-Original Message-
From: Michael McCandless [mailto:[EMAIL PROTECTED]
Sent: 01 August 2006 17:14
To: java-user@lucene.apache.org
Subject: Re: FileNotFoundException
>
Yes
Yes, you're certain you have the same lock dir for both modifier &
search process?
Or, Yes you're using NFS as your lock dir?
Or, both?
Mike
Yes
-Original Message-
From: Michael McCandless [mailto:[EMAIL PROTECTED]
Sent: 01 August 2006 17:10
To: java-user@lucene.apache.org
Subject: Re: FileNotFoundException
> I think its a directory access synchronisation problem, I have also
> posted about this before. The scenar
I think it's a directory access synchronisation problem; I have also
posted about this before. The scenario can be like this: when the
IndexWriter object is created, it reads the segment information from
the file "segments", which is nothing but a list of files of .cfs or
many more types; at the same
release.
Thanks,
supriya
WATHELET Thomas wrote:
Have you solved this problem?
-Original Message-
From: Supriya Kumar Shyamal [mailto:[EMAIL PROTECTED]
Sent: 01 August 2006 16:30
To: java-user@lucene.apache.org
Subject: Re: FileNotFoundException
I think its a directory access synchron
Have you solved this problem?
-Original Message-
From: Supriya Kumar Shyamal [mailto:[EMAIL PROTECTED]
Sent: 01 August 2006 16:30
To: java-user@lucene.apache.org
Subject: Re: FileNotFoundException
I think its a directory access synchronisation problem, I have also
posted about this
other file with a new name) so
the IndexSearcher can't find it.
-Original Message-
From: Erick Erickson [mailto:[EMAIL PROTECTED]
Sent: 01 August 2006 15:49
To: java-user@lucene.apache.org
Subject: Re: FileNotFoundException
So it sounds like you're not writing the index to the pla
writer had closed before you looked.
Erick
On 8/1/06, WATHELET Thomas <[EMAIL PROTECTED]> wrote:
>
> It's the same when I try to open the index with luke
>
> -Original Message-
> From: Erick Erickson [mailto:[EMAIL PROTECTED]
> Sent: 01 August 2006 15:24
> T
rick
On 8/1/06, WATHELET Thomas <[EMAIL PROTECTED]> wrote:
It's the same when I try to open the index with luke
-Original Message-
From: Erick Erickson [mailto:[EMAIL PROTECTED]
Sent: 01 August 2006 15:24
To: java-user@lucene.apache.org
Subject: Re: FileNotFoundException
two
It's the same when I try to open the index with luke
-Original Message-
From: Erick Erickson [mailto:[EMAIL PROTECTED]
Sent: 01 August 2006 15:24
To: java-user@lucene.apache.org
Subject: Re: FileNotFoundException
two things come to mind
1> are you absolutely sure that you
g:
MultiSearcher multisearch = new MultiSearcher(indexsearcher);
Hits hits = this.multisearch.search(this.getBoolQuery());
...
-Original Message-
From: Michael McCandless [mailto:[EMAIL PROTECTED]
Sent: 01 August 2006 13:45
To: java-user@lucene.apache.org
Subject: Re: Fil
cher(indexsearcher);
Hits hits = this.multisearch.search(this.getBoolQuery());
...
-Original Message-
From: Michael McCandless [mailto:[EMAIL PROTECTED]
Sent: 01 August 2006 13:45
To: java-user@lucene.apache.org
Subject: Re: FileNotFoundException
> When the indexin
When the indexing process is still running on an index and I try to
search this index, I get this error message:
java.io.FileNotFoundException:
\\tradluxstmp01\JavaIndex\tra\index_EN\_2hea.fnm (The system cannot find
the file specified)
How can I solve this?
Could you provide some
Can anybody suggest how to avoid this problem and concurrently access
the index across the network while still maintaining the index?
Unfortunately, there are known issues with locking and NFS. The lock
files (and underlying locking protocol) do not work reliably when used
over NFS
You may try to update a copy of the index and then
either replace the live index with the updated one
or instruct other instances to update the index path.
You may try this scenario if your index size is manageable. Hope this helps.
Regards,
kapilChhabra
Supriya Kumar Shyamal wrote:
I have comm
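The copy-then-swap approach suggested above can be sketched with plain JDK file operations. The paths and layout here are hypothetical; real code would also make searchers reopen the index after the swap, never during it, and both directories must sit on the same filesystem for the atomic rename to work.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class IndexSwapDemo {
    public static void main(String[] args) throws IOException {
        Path base = Files.createTempDirectory("swapdemo");
        Path live = base.resolve("index");        // served to searchers
        Path updated = base.resolve("index.tmp"); // freshly built copy
        Path old = base.resolve("index.old");     // parked previous index

        Files.createDirectories(live);
        Files.createDirectories(updated);
        // Stand-in for the updated index contents.
        Files.write(updated.resolve("segments"), new byte[]{1});

        // Move the live index aside, then promote the updated copy.
        // On the same filesystem these are cheap directory renames.
        Files.move(live, old, StandardCopyOption.ATOMIC_MOVE);
        Files.move(updated, live, StandardCopyOption.ATOMIC_MOVE);

        System.out.println("swapped=" + Files.exists(live.resolve("segments")));
    }
}
```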
Hi Otis,
Thanks for your reply.
I will also put the writer shutdown hook for this index, as you said.
I had already done that for another part of our code where we use
another Lucene index, but thought it would not be needed for this
special index since we rarely write to it. But th
Hi Olivier,
You have shutdown hooks for read-only operations. They won't corrupt your
index. I'd add shutdown hooks for IndexWriter.
If that fixes your problem, it would be great if you could add your shutdown
hook code to the FAQ on the Wiki, or at least post it to java-user, so somebody
els
Hi everybody,
I ran the same code on Linux and it worked very well. It could be
related to an OS resource issue, but I am not sure, as I did not try
to debug on Windows. I hope this helps others in case of such problems.
thanks
Amol
amolb wrote:
Hi everybody,
I am trying to index arround 10 la
Ok, your directory exists.
if ((indexFile = new File(indexDir)).exists() && indexFile.isDirectory()) {
    exists = false;
    System.out.println("Index does not exist");
}
So at this point exists == false:
writer = new IndexWriter(indexFile, new StandardAnalyzer(), exists);
exists is still fals
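A hedged sketch of the corrected flag logic follows. The old IndexWriter constructors took a boolean create argument (true = build a fresh index, false = append to an existing one); the helper name below is made up for illustration, and the segments-file check is an assumption about how to detect an existing index.

```java
import java.io.File;

public class CreateFlagDemo {
    // Hypothetical helper: derive the `create` flag from the directory state,
    // instead of the inverted logic in the snippet above.
    static boolean shouldCreate(File indexDir) {
        // Create a new index when there is no usable one yet: the directory
        // is missing, is not a directory, or holds no segments file.
        return !indexDir.isDirectory()
                || !new File(indexDir, "segments").exists();
    }

    public static void main(String[] args) throws Exception {
        // An existing but empty directory should still mean "create".
        File empty = File.createTempFile("idx", null);
        empty.delete();
        empty.mkdir();
        System.out.println("create-for-empty-dir=" + shouldCreate(empty));
        // In real code (old API): new IndexWriter(indexDir, analyzer,
        //                                          shouldCreate(indexDir));
    }
}
```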
From: bib_lucene bib [mailto:[EMAIL PROTECTED]
Sent: Friday, 8 July 2005 8:21 AM
To: java-user@lucene.apache.org
Subject: Re: FileNotFoundException segments
This is a new directory, created just before this step.
I am uploading files to this directory. The file is getting uploaded
fine.
Any
This is a new directory, created just before this step.
I am uploading files to this directory. The file is getting uploaded fine.
Any ideas?
Muetze303 <[EMAIL PROTECTED]> wrote:
probably the dir exists, but the index inside the dir is broken or not
complete and you are trying to use it instead o
probably the dir exists, but the index inside the dir is broken or not
complete and you are trying to use it instead of creating a new one?!
bib_lucene bib wrote:
Hi All
Can someone please help me with the error in my web application...
I am using Tomcat; the path for the index dir is obtained fr