We have been using version 4.10.4 for quite some time and ran into the
following issue.
Out of the clear blue, one of our clients sees the exception cited below.
We see no prior evidence of anything going awry in our log files. It
seems to occur out of nowhere.
Is there any known issue
Hi,
I sometimes get FileNotFoundExceptions in my log from the recovery of a
core. Does anyone know the reason for this? As I understand Solr, this
should not happen.
Markus
2015-08-04 15:06:07,646|INFO|mpKPXpbUwp|org.apache.solr.update.UpdateLog|Starting to buffer updates. FSUpdateLog{st
es it is also some "*.frq" file...
I suspected that it could be caused by IW merging segments; however, you
claim it should be fine...
What do you think?
--
View this message in context:
http://lucene.472066.n3.nabble.com/reopen-with-optimize-and-FileNotFoundException-tp265p26621
On Wed, Mar 9, 2011 at 2:44 PM, bart_212 wrote:
> Hi,
> I have two web applications that use Lucene 2.3.2. Both share the same
> index and can write or read. Writing is synchronized via the file system to
> allow only one IndexWriter to work at a time. There can be multiple
> IndexReaders. In
> or maybe there is some problem with usage? Please clarify.
>
ght inside, however is it ok that the file is missing
or maybe there is some problem with usage? Please clarify.
Subject: Re: Concurrent access IndexReader / IndexWriter - FileNotFoundException
To: java-user@lucene.apache.org
Date: Saturday, 9 January 2010, 18:54
Can you double check that you're not creating 2 writers on the same
directory, somehow?
Or: is there any other process that removes files from this directory?
Answering your ori
> at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:703)
> at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:895)
> at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689)
> at java.lang.Thread.run(Thread.java:619)
>
McCandless wrote:
From: Michael McCandless
Subject: Re: Concurrent access IndexReader / IndexWriter - FileNotFoundException
To: java-user@lucene.apache.org
Date: Saturday, 9 January 2010, 14:51
Can you post the full FNFE stack trace?
Mike
On Fri, Jan 8, 2010 at 5:35 AM, legrand thomas wrote:
>
> Hi,
>
> I often get a FileNotFoundException when my single IndexWriter commits while
> the IndexReader also tries to read. My application is multithreaded (Tomcat
> uses the bu
On Fri 8.1.10, Michael McCandless wrote:
From: Michael McCandless
Subject: Re: Concurrent access IndexReader / IndexWriter - FileNotFoundException
To: java-user@lucene.apache.org
Date: Friday, 8 January 2010, 13:00
Normally, this (using an IndexReader, [re-]opening a new IndexReader
while an Inde
Fri, Jan 8, 2010 at 5:35 AM, legrand thomas wrote:
> Hi,
>
> I often get a FileNotFoundException when my single IndexWriter commits while
> the IndexReader also tries to read. My application is multithreaded (Tomcat
> uses the business APIs); I firstly thought the read/write acces
Hi,
I often get a FileNotFoundException when my single IndexWriter commits while
the IndexReader also tries to read. My application is multithreaded (Tomcat
uses the business APIs); I initially thought the read/write access was
thread-safe, but I am probably forgetting something.
Please help me to
OK thanks for bringing closure!
Accidentally allowing 2 writers to write to the same index quickly
leads to corruption. They are like Betta fish: they fight to the
death, removing each other's files, if you put them in the same cage.
Mike
On Wed, Dec 9, 2009 at 1:56 AM, Max Lynch wrote:
> H
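The "two writers fight to the death" failure mode above is usually prevented with locking outside Lucene when several processes may index. Below is a minimal sketch of the file-system-based single-writer guard idea, using plain java.nio and no Lucene API; all names here are illustrative, not anyone's actual code from the thread:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SingleWriterGuard {

    // Try to take an exclusive OS-level lock on a marker file kept next to
    // the index directory; returns null if a writer already holds it.
    static FileLock tryAcquire(FileChannel channel) {
        try {
            return channel.tryLock();
        } catch (OverlappingFileLockException e) {
            return null; // this JVM already holds the lock via another channel
        } catch (IOException e) {
            return null;
        }
    }

    // Demo: the second attempt must fail while the first lock is held.
    static boolean[] demo() throws IOException {
        Path lockFile = Files.createTempFile("index-writer", ".lock");
        try (FileChannel c1 = FileChannel.open(lockFile, StandardOpenOption.WRITE);
             FileChannel c2 = FileChannel.open(lockFile, StandardOpenOption.WRITE)) {
            FileLock first = tryAcquire(c1);
            FileLock second = tryAcquire(c2);
            boolean[] result = { first != null, second != null };
            if (first != null) first.release();
            return result;
        } finally {
            Files.deleteIfExists(lockFile);
        }
    }

    public static void main(String[] args) throws IOException {
        boolean[] r = demo();
        System.out.println("first=" + r[0] + " second=" + r[1]);
    }
}
```

Within one JVM the second tryLock throws OverlappingFileLockException; across processes it simply returns null. Either way, only one writer proceeds.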
Hi Mike,
Missed your response on this,
What I was doing was physically removing index/write.lock if older than 8
hours, allowing another process of my indexer to run. I realize in
hindsight that there is no reason why I should be doing this and it was
really stupid. I think I was under the impre
You can use o.a.l.index.CheckIndex to fix the index. It will remove
references to any segments that are missing or have problems during
testing. First run it without -fix to see what problems there are.
Then take a backup of the index. Then run it with -fix. The index
will lose all docs in thos
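As a rough sketch of the procedure just described (the jar name, paths, and exact flags are placeholders and vary by Lucene version):

```shell
# Inspect only; makes no changes to the index
java -cp lucene-core.jar org.apache.lucene.index.CheckIndex /path/to/index

# Back up first, then drop references to broken segments
# (documents in those segments are lost)
cp -r /path/to/index /path/to/index.bak
java -cp lucene-core.jar org.apache.lucene.index.CheckIndex /path/to/index -fix
```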
Missed your response, thanks Bernd.
I don't think that's it, since I haven't been executing any commands like
that. The only thing I could think of is corruption. I've got the index
backed up in case there is a way to fix it (it won't matter in a week or so
since I cull any documents older than
Hi Max
just a guess: maybe you deleted all *.c source files in that area and
unintentionally deleted this index file, too.
Bernd
On Fri, Oct 2, 2009 at 17:10, Max Lynch wrote:
> I'm getting this error when I try to run my searcher and my indexer:
>
> Traceback (most recent call last):
> self.
I'm getting this error when I try to run my searcher and my indexer:
Traceback (most recent call last):
self.searcher = lucene.IndexSearcher(self.directory)
JavaError: java.io.FileNotFoundException: /home/spider/misc/index/_275c.cfs
(No such file or directory)
I don't know anything about the form
achine.
> We get a repeatable FileNotFoundException because the path to the file
> is wrong:
>
> D:\data0\impact\ordering\prod\work\search_index\s_index1251456210140_0.cfs
> instead of
> D:\data0\impact\ordering\prod\work\search_index\s_index1251456210140\_0.cfs
Oops, sorry, 2.4.1
Thx
Uwe Goetzke
-Original Message-
From: Uwe Schindler [mailto:u...@thetaphi.de]
Sent: Monday, 31 August 2009 17:42
To: java-user@lucene.apache.org
Subject: RE: MergePolicy$MergeException because of FileNotFoundException
because of a wrong path to the index file
To: java-user@lucene.apache.org
> Subject: MergePolicy$MergeException because of FileNotFoundException
> because of a wrong path to the index file
>
> We have an IndexWriter.optimize running on a 4-processor Xeon, Java 1.5, Win2003
> machine.
> We get a repeatable FileNotFoundException because the path to the
We have an IndexWriter.optimize running on a 4-processor Xeon, Java 1.5, Win2003
machine.
We get a repeatable FileNotFoundException because the path to the file
is wrong:
D:\data0\impact\ordering\prod\work\search_index\s_index1251456210140_0.cfs
instead of
D:\data0\impact\ordering\prod\work\search_index
Wojtek212 wrote:
You were right, I had 2 IndexWriters. I've checked again and it
turned out I had 2 IndexManagers loaded by 2 different classloaders,
so even though I stored it in a static Map, it didn't help.
Phew! That's tricky (two different classloaders). Good sleuthing.
Anyway thanks for
ing working IndexWriter? Or should these operations be
synchronized?
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:262)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:205)
at org.apache.lucene.index.IndexWriter.applyDeletes(IndexWriter.java:3441)
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:2638)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:2523)
at org.apache.
don't see the reason for such behaviour...
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:240)
So if the LockObtainFailedException doesn't occur, may I assume that
there are not 2 indexers writing at the same time? Mike, what do you think?
The above test was made on Lucene 2.3.2.
Wojtek212 wrote:
Hi Mike,
I'm sharing one instance of IndexManager across all threads and as
I've
noticed only this one is used during indexing.
OK, maybe triple check this -- because that's the only way in your
code I can see 2 IWs being live at once.
I'm unlocking before every inde
its
work.
Does IndexWriter execute some threads and not wait for them to
finish?
It's the only situation I can imagine in which there are 2 IndexWriters...
during indexing but the exception occurs. Does anybody
have an idea what the reason could be?
using two Writers simultaneously.
>>
>> - Mark
> P.S.
>
> Dont switch back to not sharing! Even your one client must enjoy not
> having to wait for that new Searcher to load up on every search :)
> Especially if you have any sort caches.
>
> - Mark
>
> ---
Mark Miller wrote:
Paul J. Lucas wrote:
Sorry for the radio silence. I changed my code around so that a
single IndexReader and IndexSearcher are shared. Since doing that,
I've not seen the problem. That being the case, I didn't pursue the
issue.
I still think there's a bug because the code I had previously,
That really can't be it. I have *one* client connecting to my
server. And there isn't a descriptor leak.
My mergeFactor is 10.
- Paul
On Jul 1, 2008, at 1:37 AM, Michael McCandless wrote:
Hmmm then it sounds possible you were in fact running out of file
descriptors.
What was your merg
Hmmm then it sounds possible you were in fact running out of file
descriptors.
What was your mergeFactor set to?
Mike
Paul J. Lucas wrote:
Sorry for the radio silence. I changed my code around so that a
single IndexReader and IndexSearcher are shared. Since doing that,
I've not seen
Sorry for the radio silence. I changed my code around so that a
single IndexReader and IndexSearcher are shared. Since doing that,
I've not seen the problem. That being the case, I didn't pursue the
issue.
I still think there's a bug because the code I had previously, IMHO,
should have
On Jun 12, 2008, at 6:39 AM, Michael McCandless wrote:
Hi Grant,
My stress test is unable to reproduce this exception, either. I'm
adding Wikipedia docs to an index, using a high merge factor, then
opening a new writer with low merge factor (5) and calling
optimize. This forces concur
Hi Grant,
My stress test is unable to reproduce this exception, either. I'm
adding Wikipedia docs to an index, using a high merge factor, then
opening a new writer with low merge factor (5) and calling optimize.
This forces concurrent merges to run during the optimize.
One more questio
On Jun 11, 2008, at 6:00 AM, Michael McCandless wrote:
Grant Ingersoll wrote:
Is more than one thread adding documents to the index?
I don't believe so, but I am trying to reproduce. I've only seen
it once, and don't have a lot of details, other than I noticed it
was on a specific fil
Grant Ingersoll wrote:
Is more than one thread adding documents to the index?
I don't believe so, but I am trying to reproduce. I've only seen
it once, and don't have a lot of details, other than I noticed it
was on a specific file (.fdt) and was wondering if that was a
factor or not.
a timing thing, but it might be
interesting if it consistently occurred in the same spot.
Thanks,
Grant
On May 29, 2008, at 7:43 PM, Paul J. Lucas wrote:
I occasionally get a FileNotFoundException like:
Exception in thread "Thread-44" org.apache.lucene.index.MergePolicy$Merge
thread running and it may just be a timing thing, but it might be
interesting if it consistently occurred in the same spot.
Thanks,
Grant
On May 29, 2008, at 7:43 PM, Paul J. Lucas wrote:
I occasionally get a FileNotFoundException like:
Exception in thread "Thread-44" org.apache
occurred in the same spot.
Thanks,
Grant
On May 29, 2008, at 7:43 PM, Paul J. Lucas wrote:
I occasionally get a FileNotFoundException like:
Exception in thread "Thread-44" org.apache.lucene.index.MergePolicy$MergeException: java.io.FileNotFoundException: /Stuff/Caches/AuroraSuppor
Paul,
How often does your process start up? Are you really sure that there
can never be two instances of your process running? If/when you
gather the infoStream logs running up to this exception, can you also
log when IndexReader.unLock is called?
Two writers on the same index can defi
OK.
What is your mergeFactor?
Mike
Paul J. Lucas wrote:
On May 30, 2008, at 5:59 PM, Michael McCandless wrote:
One more question: when you hit that exception, does the offending
file in fact not exist (when you list the directory yourself)?
Yes, the file does not exist.
And, does the e
On May 30, 2008, at 5:59 PM, Michael McCandless wrote:
One more question: when you hit that exception, does the offending
file in fact not exist (when you list the directory yourself)?
Yes, the file does not exist.
And, does the exception keep happening consistently (same file
missing) onc
Paul,
One more question: when you hit that exception, does the offending
file in fact not exist (when you list the directory yourself)?
And, does the exception keep happening consistently (same file
missing) once that happens, or, does the same index work fine the
next time you try it (i
Paul,
What is your mergeFactor set to?
Can you get the exception to happen with infoStream set on the
writer, and post that back?
Mike
Paul J. Lucas wrote:
On May 30, 2008, at 3:05 AM, Michael McCandless wrote:
Are you indexing only one document each time you open
IndexWriter? Or do
Paul J. Lucas wrote:
On May 30, 2008, at 3:05 AM, Michael McCandless wrote:
Are you indexing only one document each time you open IndexWriter?
Or do you open a single IndexWriter, add all documents for that
directory, then close it?
The latter.
When the exception occurs, do you know how ma
On May 30, 2008, at 3:05 AM, Michael McCandless wrote:
Are you indexing only one document each time you open IndexWriter?
Or do you open a single IndexWriter, add all documents for that
directory, then close it?
The latter.
When the exception occurs, do you know how many simultaneous thre
Jamie,
The code looks better! You're not forcefully removing the write.lock
nor deleting files from the index yourself, anymore, which is good.
One thing I spotted is your VolumeIndex.deleteIndex method fails to
synchronize on the indexLock. If I understand the code correctly,
that mea
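Mike's point here is that every operation that mutates the index must synchronize on the same monitor. A minimal sketch of the shape he is describing, where the names VolumeIndex and indexLock come from the thread but the bodies are stand-ins, not the actual indexing code:

```java
// Hypothetical sketch: the class and lock names follow the thread's code,
// the state is a simple counter standing in for real index operations.
public class VolumeIndex {
    private final Object indexLock = new Object();
    private int docCount = 0; // stand-in for real index state

    public void index(String doc) {
        synchronized (indexLock) { // every mutation holds the same lock
            docCount++;
        }
    }

    public void deleteIndex() {
        synchronized (indexLock) { // the fix: deletion must hold it too,
            docCount = 0;          // or it can race a concurrent add/merge
        }
    }

    public int size() {
        synchronized (indexLock) {
            return docCount;
        }
    }

    public static void main(String[] args) {
        VolumeIndex idx = new VolumeIndex();
        idx.index("a");
        idx.index("b");
        System.out.println(idx.size()); // 2
        idx.deleteIndex();
        System.out.println(idx.size()); // 0
    }
}
```

An unsynchronized deleteIndex could remove files while a merge holds references to them, which is exactly the FileNotFoundException pattern discussed above.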
I guess my test index was corrupted some other way...I can not duplicate
my results today without breaking things with two lockless Writers
first. Oh well.
I definitely saw it legitimately while playing with
IndexReader.reopen...if I kept enough of the old IndexReaders around
long enough I wo
Hi Michael / others
The one thing I discovered was that it is quite useful to implement a
JVM shutdown hook in your code to prevent the index from getting
corrupted when an indexing process dies unexpectedly.
For those who don't know about shutdown hook mechanism, you do this by
implementin
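For those unfamiliar, a sketch of the shutdown-hook mechanism being described; plain JDK, with the writer-closing body as a placeholder rather than real Lucene code:

```java
public class CleanShutdown {

    // Register a hook that closes the writer on normal JVM exit.
    // Note: hooks do NOT run on kill -9 or a power loss, only on a
    // normal exit or a catchable signal (e.g. SIGTERM, Ctrl-C).
    static Thread installHook(Runnable closeWriter) {
        Thread hook = new Thread(closeWriter, "index-shutdown-hook");
        Runtime.getRuntime().addShutdownHook(hook);
        return hook;
    }

    public static void main(String[] args) {
        Thread hook = installHook(() -> {
            // in real code: try { writer.close(); } catch (IOException e) { ... }
            System.err.println("closing index writer");
        });
        // removeShutdownHook returns true only if the hook was registered
        System.out.println("registered=" + Runtime.getRuntime().removeShutdownHook(hook));
    }
}
```

As Mike notes later in the thread, this is a mitigation, not a guarantee: a hard kill skips the hook entirely, which is why lock cleanup on startup still matters.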
Hi Michael
Thank you. Your suggestions were great and they were implemented (see
attached source code), however, unfortunately, I am still getting file
not found errors on the automatic merging of indexes.
Regards,
Jamie
Michael McCandless wrote:
Jamie,
I'd love to get to the root cause
A few more questions, below:
Paul J. Lucas wrote:
I have a thread that handles the unindexing/reindexing. It gets
changes from a BlockingQueue. My unindex code is like:
IndexWriter writer = new IndexWriter( INDEX, INDEX_ANALYZER, false );
final Term t = new Term( DIR_FIELD
Paul J. Lucas wrote:
On May 29, 2008, at 6:35 PM, Michael McCandless wrote:
Can you use lsof (or something similar) to see how many files you
have?
FYI: I personally can't reproduce this; only a coworker can and
even then it's sporadic, so it could take a little while.
If possible, cou
Jamie,
I'd love to get to the root cause of your exception.
Last time we talked (a few weeks back) I saw several possible causes
in the source you had posted:
http://markmail.org/message/dqovvcwgwof5f7wl
Did you test any of the ideas there? You are potentially manually
deleting file
Hi Paul,
I just noticed the discussion around this.
Almost all of my customers have been, or are, experiencing the intermittent
FileNotFound problem.
Our software uses Lucene 2.3.1. I have just upgraded to Lucene 2.3.2 in
the hope that this was one of the bugs that was fixed.
I would be very inter
On May 29, 2008, at 6:35 PM, Michael McCandless wrote:
Can you use lsof (or something similar) to see how many files you
have?
FYI: I personally can't reproduce this; only a coworker can and even
then it's sporadic, so it could take a little while.
Merging, especially several running at o
Forgot to mention... keep trying if you get a "read past EOF" exception... I
get that sometimes too.
Michael McCandless wrote:
Michael Busch wrote:
Of course it can happen that you run out of available file
descriptors when a lot of threads open separate IndexReaders, and
then the SegmentMerger could certainly hit IOExceptions, but I don't
think a FileNotFoundException would be thro
On May 29, 2008, at 6:26 PM, Michael McCandless wrote:
Paul J. Lucas wrote:
if ( IndexReader.isLocked( INDEX ) )
IndexReader.unlock( INDEX );
The isLocked()/unlock() is because sometimes the server process
gets killed and leaves the index locked.
This makes me a bit
On May 29, 2008, at 5:57 PM, Mark Miller wrote:
Paul J. Lucas wrote:
Are you saying that using multiple IndexSearchers will definitely
cause the problem I am experiencing and so the suggestion that
using a single IndexSearcher for optimaztion only is wrong?
Will it definitely cause your p
Michael Busch wrote:
Of course it can happen that you run out of available file
descriptors when a lot of threads open separate IndexReaders, and
then the SegmentMerger could certainly hit IOExceptions, but I
don't think a FileNotFoundException would be thrown in such a case.
I
Paul J. Lucas wrote:
if ( IndexReader.isLocked( INDEX ) )
IndexReader.unlock( INDEX );
The isLocked()/unlock() is because sometimes the server process
gets killed and leaves the index locked.
This makes me a bit nervous. Does this only run on startup of your
proces
normally unneeded index files...get
enough of this going on, and even with the compound file format you
can get too many files open and files missing FileNotFound exceptions.
I disagree, Mark. An IndexWriter should never hit a
FileNotFoundException. If Lucene is being used correctly in
index files...get enough of
this going on, and even with the compound file format you can get too
many files open and files missing FileNotFound exceptions.
I disagree, Mark. An IndexWriter should never hit a
FileNotFoundException. If Lucene is being used correctly in Paul's
system, i.
Paul J. Lucas wrote:
On May 29, 2008, at 5:18 PM, Mark Miller wrote:
It looks to me like you are not sharing an IndexSearcher across threads.
My reading of the documentation says that doing so is an optimization
only and not a requirement.
Are you saying that using multiple IndexSearchers
On May 29, 2008, at 5:18 PM, Mark Miller wrote:
It looks to me like you are not sharing an IndexSearcher across
threads.
My reading of the documentation says that doing so is an optimization
only and not a requirement.
Are you saying that using multiple IndexSearchers will definitely
ca
Paul J. Lucas wrote:
I occasionally get a FileNotFoundException like:
Exception in thread "Thread-44" org.apache.lucene.index.MergePolicy$MergeException:
java.io.FileNotFoundException: /Stuff/Caches/AuroraSupport/IM_IndexCache/INDEX/_27.cfs
(No such file or directory)
I occasionally get a FileNotFoundException like:
Exception in thread "Thread-44" org.apache.lucene.index.MergePolicy$MergeException:
java.io.FileNotFoundException: /Stuff/Caches/AuroraSupport/IM_IndexCache/INDEX/_27.cfs
(No such file or directory)
It would make me nervous to have Lucene insert that shutdown hook.
EG closing the IndexWriter could in general be a time-consuming
process. But if it's working for you, that's great. Though, if you
explicitly kill the JVM (eg kill -9) those shutdown hooks won't run.
You should use org.
Hi Michael
I had in fact preempted you and moved the delete lock code to a startup
function. However, I found a nice little optimization that seems to
force the writer to close when the process is manually killed. I added a
JVM shutdown hook (i.e. using Runtime.getRuntime().addShutdownHook(thi
OK, that sounds like a legitimate reason to forcibly remove the write
lock, but it would be better to do that only on startup of your
process rather than in every openIndex() call.
If ever you hit LockObtainFailedException in openIndex, even after
having deleted the write lock on startup,
Hi Mike
Thanks for the suggestions. I've implemented all of them. The main
reason why I manually deleted the lock file was that sometimes users
kill the server process manually or there is a hard reboot without any
warning. In such circumstances, Lucene leaves a lock file lying around
as it w
Subject: Serious Index Corruption Error - FileNotFoundException
-Hoss
--
On quickly looking through the code I think there are some serious
hazards that could lead to this exception.
First, in your openIndex code, if you hit a LockObtainFailedException
in trying to open your writer, you are forcefully removing the write
lock and then retrying. Yet, you also o
Hi there
It appears my Lucene 2.3.1 index is corrupted. I get the following error
when searching:
/mnt/indexnew/_3wk0.cfs (No such file or directory)
java.io.FileNotFoundException: /mnt/indexnew/_3wk0.cfs (No such file or
directory)
at java.io.RandomAccessFile.open(Native Method)
s in QueryParser.
Regards,
Paul Elschot
>
> -Rico
>
> Original Message
> Date: Mon, 30 Apr 2007 15:08:14 -0700
> From: "Mike Klaas" <[EMAIL PROTECTED]>
> To: java-user@lucene.apache.org
> Subject: Re: Re: How to index a lot of fields (with
: However, it does not look like upgrading is an option, so I wonder if my
: current approach of mapping a property that a client app creates to one
: field name is workable at all. Maybe I have to introduce some sort of
: mapping of client properties to a fixed number of indexable fields.
:
: ...
: How to index a lot of fields (without FileNotFoundException:
Too many open files)
> On 4/30/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> > Thanks for your reply.
> >
> > We are still using Lucene v1.4.3 and I'm not sure if upgrading is an
> option. Is
On 4/30/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
Thanks for your reply.
We are still using Lucene v1.4.3 and I'm not sure if upgrading is an option. Is
there another way of disabling length normalization/document boosts to get rid
of those files?
Why not raise the limit of open files
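For reference, a few shell commands relevant to that suggestion; the pid and limit values below are placeholders:

```shell
# Show the current soft limit on open file descriptors for this shell
ulimit -n

# Raise it for this session (example value; the hard limit, ulimit -Hn, caps it):
#   ulimit -n 4096

# Count the descriptors a running process holds (12345 is a placeholder pid):
#   lsof -p 12345 | wc -l
```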
Thanks for your reply.
We are still using Lucene v1.4.3 and I'm not sure if upgrading is an option. Is
there another way of disabling length normalization/document boosts to get rid
of those files?
Thanks,
Rico
: From what I read in the Lucene docs, these .f files store the
: normalization fac
Just in case norms info cannot be spared, note that since Lucene 2.1 norms
are maintained in a single file, no matter how many fields there are.
However due to a bug in 2.1 this did not prevent the too many open files
problem. This bug was already fixed but not yet released. For more details
on th
: From what I read in the Lucene docs, these .f files store the
: normalization factor for the corresponding field. What exactly is this
: used for and more importantly, can this be disabled so that the files
: are not created in the first place?
field norms are primarily used for length normali
my application breaks with: FileNotFoundException: Too many open
files.
I searched this list and it seems like others had this problem before, but I
could not find a solution.
From what I read in the Lucene docs, these .f files store the normalization
factor for the corresponding field
"Antony Bowesman" <[EMAIL PROTECTED]> wrote:
> Michael McCandless wrote:
>
> >> Yes, I've disabled it currently while the new test runs. Let's see.
> >> I'll re-run the test a few more times and see if I can re-create the
> >> problem.
> >
> > OK let's see if that makes it go away! Hopefully
Michael McCandless wrote:
Yes, I've disabled it currently while the new test runs. Let's see.
I'll re-run the test a few more times and see if I can re-create the problem.
OK let's see if that makes it go away! Hopefully :)
I ran the tests several times over the weekend with no virus check
"Antony Bowesman" <[EMAIL PROTECTED]> wrote:
> Michael McCandless wrote:
> >
> > Hmmm. It seems like what's happening is the file in fact exists but
> > Lucene gets "Access is denied" when trying to read it. Lucene takes a
> > listing of the directory, first. So if Lucene has permission to
Michael McCandless wrote:
Hmmm. It seems like what's happening is the file in fact exists but
Lucene gets "Access is denied" when trying to read it. Lucene takes a
listing of the directory, first. So if Lucene has permission to
take a directory listing but then no permission to open the se
"Antony Bowesman" <[EMAIL PROTECTED]> wrote:
> I got the following exception this morning when running one last test on a
> data
> set that has been indexed many times before over the past few months.
>
> java.io.FileNotFoundException:
> D:\72ed1\server\Java\Search\0008\index\0001\segment
I got the following exception this morning when running one last test on a data
set that has been indexed many times before over the past few months.
java.io.FileNotFoundException:
D:\72ed1\server\Java\Search\0008\index\0001\segments_gq9 (Access is denied)
at java.io.RandomAcce
Yes, I use default settings.
Cheers,
Hes.
On 10/5/06, Michael McCandless <[EMAIL PROTECTED]> wrote:
Hes Siemelink wrote:
> Not making much progress, but there is one thing I found curious: very
> often
> the file that can not be found is "_8km.fnm".
> Is it possible to derive any informatio
Hes Siemelink wrote:
Not making much progress, but there is one thing I found curious: very
often
the file that can not be found is "_8km.fnm".
Is it possible to derive any information from this?
Hmmm, that's interesting. Segment numbers are just integers encoded
in base 36, ie, using the dig
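To illustrate the base-36 point, a small sketch (not Lucene code) that decodes the segment number from a file name like "_8km.fnm":

```java
public class SegmentNames {

    // Lucene segment files are named "_" + a base-36 segment number + an
    // extension, e.g. "_8km.fnm". Decoding the number indicates roughly how
    // many segments the writer had created before this one.
    static int segmentNumber(String fileName) {
        int start = fileName.indexOf('_') + 1;
        int dot = fileName.indexOf('.');
        if (dot < 0) dot = fileName.length();
        return Integer.parseInt(fileName.substring(start, dot), 36);
    }

    public static void main(String[] args) {
        System.out.println(segmentNumber("_8km.fnm")); // 8*36^2 + 20*36 + 22 = 11110
        System.out.println(segmentNumber("_27.cfs"));  // 2*36 + 7 = 79
    }
}
```

So "_8km" very often being the missing file suggests the problem recurs around the same point in the segment sequence, which is the information Mike is hinting can be derived.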
Not making much progress, but there is one thing I found curious: very often
the file that can not be found is "_8km.fnm".
Is it possible to derive any information from this?
Cheers,
Hes.