Thank you, I indeed used a newer version of Lucli by mistake.
-----Original Message-----
From: Michael McCandless [mailto:[EMAIL PROTECTED]]
Sent: Thursday, November 29, 2007 6:30 PM
To: java-user@lucene.apache.org
Subject: Re: CorruptIndexException
That exception means your index was written with a newer version of
Lucene than the version you are using to open the IndexReader.
I have PPC and Intel access if that helps. Just need a test case.
On Nov 29, 2007, at 5:37 PM, Michael McCandless wrote:
"Bill Janssen" <[EMAIL PROTECTED]> wrote:
No. It's in another location, but perhaps I can get it tomorrow.
On the other hand, the success when using 2.0 makes it likely to me
that the machine isn't the problem.
"Grant Ingersoll" <[EMAIL PROTECTED]> wrote:
> Just a theory (make that a guess), Mike, but is it possible that the
> one merge scheduler is hitting a synchronization issue with the
> deletedDocuments bit vector? That is one thread is cleaning it up and
> the other is accessing and they aren't synchronizing their access?
"Bill Janssen" <[EMAIL PROTECTED]> wrote:
> No. It's in another location, but perhaps I can get it tomorrow.
> On the other hand, the success when using 2.0 makes it likely to me
> that the machine isn't the problem.
Yeah good point. Seems like a long shot (wishful thinking on my
part!).
Your
Just a theory (make that a guess), Mike, but is it possible that the
one merge scheduler is hitting a synchronization issue with the
deletedDocuments bit vector? That is one thread is cleaning it up and
the other is accessing and they aren't synchronizing their access?
This doesn't explain
This is in the nightly JAR. It's o.a.l.index.CheckIndex (it defines
a static main).
Mike
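A typical way to run that static main from the command line (the JAR name here is an assumption):

java -cp lucene-core-2.3-dev.jar org.apache.lucene.index.CheckIndex /path/to/index

It walks each segment of the index and reports any problems it finds.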
"Bill Janssen" <[EMAIL PROTECTED]> wrote:
> > Also, could you try out the CheckIndex tool in 2.3-dev before and
> > after the deletes?
>
> Great idea! I don't suppose there's a jar file of it?
>
> Bill
So, it's a little clearer. I get the Array-out-of-bounds exception if
I'm re-indexing some already indexed documents -- if there are
deletions involved. I get the CorruptIndexException if I'm indexing
freshly -- no deletions. Here's an example of that (with the latest
nightly). I removed the ex
> Also, could you try out the CheckIndex tool in 2.3-dev before and
> after the deletes?
Great idea! I don't suppose there's a jar file of it?
Bill
> Have you tried another PPC machine?
No. It's in another location, but perhaps I can get it tomorrow. On
the other hand, the success when using 2.0 makes it likely to me that
the machine isn't the problem.
OK, I've reverted to my original codebase (where I first create a
reader and do the dele
> Could you post this part of the code (deleting) too?
Here it is:
private static void remove (File index_file, String[] doc_ids, int start) {
    String number;
    String list;
    Term term;
    TermDocs matches;
    if (debug_mode)
        System.err.println("in
Boosting a one-term query does not have an effect on the score.
For example:
apple
Has the same score as:
apple^3
But repeating the term will increase the score:
apple apple apple
I expected the score to go up when boosting a one term query. Is that a
wrong expectation?
Thanks!
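For illustration, a minimal sketch of the single-clause boost described above (field name assumed). With only one clause, the default query normalization (1/sqrt(sum of squared weights)) divides the boost right back out, which matches the behavior observed here:

import org.apache.lucene.index.Term;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

Query q = new TermQuery(new Term("contents", "apple"));
q.setBoost(3.0f);  // cancelled by queryNorm when this is the only clause;
                   // boosts only shift weight between clauses of a larger query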
On Nov 29, 2007, at 2:26 PM, Bill Janssen wrote:
Are you still getting the original exception too or just the Array out
of bounds one now? Also, are you doing anything else to the index
while this is happening? The code at the point in the exception below
is trying to properly handle deleted documents.
"Bill Janssen" <[EMAIL PROTECTED]> wrote:
> Here's the dump with last night's build:
Those logs look healthy up until the exception.
One odd thing is when you instantiate your writer, your index has 2
segments in it. I expected only 1 since each time you visit your
index you leave it optimized.
> Are you still getting the original exception too or just the Array out
> of bounds one now? Also, are you doing anything else to the index
> while this is happening? The code at the point in the exception below
> is trying to properly handle deleted documents.
Just the array out of bounds one now.
Are you still getting the original exception too or just the Array out
of bounds one now? Also, are you doing anything else to the index
while this is happening? The code at the point in the exception below
is trying to properly handle deleted documents.
-Grant
On Nov 29, 2007, at 1:34 PM
> Can you try running with the trunk version of Lucene (2.3-dev) and see
> if the error still occurs? EG you can download this AM's build here:
>
>
> http://lucene.zones.apache.org:8080/hudson/job/Lucene-Nightly/288/artifact/artifacts
Still there. Here's the dump with last night's build:
> > Another thing to try is turning on the infoStream
> > (IndexWriter.setInfoStream(...)) and capture & post the resulting log.
> > It will be very large since it takes quite a while for the error to
> > occur...
>
> I can do that.
Here's a more complete dump. I've modified the code so that I n
> > Another thing to try is turning on the infoStream
> > (IndexWriter.setInfoStream(...)) and capture & post the resulting log.
> > It will be very large since it takes quite a while for the error to
> > occur...
>
> I can do that.
Here's what I see:
Optimizing...
merging segments _ram_a (1 doc
> Do you have another PPC machine to reproduce this on? (To rule out
> bad RAM/hard-drive on the first one).
I'll dig up an old laptop and try it there.
> Another thing to try is turning on the infoStream
> (IndexWriter.setInfoStream(...)) and capture & post the resulting log.
> It will be very large since it takes quite a while for the error to
> occur...
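For reference, a minimal sketch of turning the infoStream on (the index path and analyzer are assumptions):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;

IndexWriter writer = new IndexWriter("/path/to/index", new StandardAnalyzer(), false);  // false = append to an existing index
writer.setInfoStream(System.err);  // flush/merge diagnostics are logged from here on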
I'm confused about what's going on here. Could you post the raw Java
code that produces this error?
Best
Erick
On Nov 29, 2007 5:32 AM, ing.sashaa <[EMAIL PROTECTED]> wrote:
> Hi all,
> I'm using a program that uses the Lucene library. I've downloaded the
> lucene-core-2.2.0.jar file and I'm trying it
Yes, you just call the "close()" method.
But why would you want to do that?
The performance tips recommend exactly the opposite: keeping it alive as
long as possible favors Lucene's internal caching of terms, queries, and
other internal objects.
On Nov 29, 2007 11:14 AM, Sebastin <[EMAIL PROTECTED]> wrote:
I had the same issue, and ended up doing my own reference counting using
an "acquire/release" strategy.
I used a single instance per searcher; every "acquire" counts +1 and
every "release" counts -1. When an index is switched it receives a
"dispose" signal, then the release checks if there are processing
Hi,
My application needs to close/open the index searcher periodically so that
newly added documents are visible. Is there a way to determine if there are
any pending searches running against an index searcher or do I have to do my
own reference counting? Thank you.
You run your SpanQuery, and get back the Spans. From there, you need
to load the document (either by reanalyzing the tokens or by using
Term Vectors) and then you just have to set up your window around the
position match. Unfortunately, I don't think there is a better way in
Lucene to get
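A minimal sketch of walking the Spans (index path, field, and term are assumptions):

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.spans.SpanTermQuery;
import org.apache.lucene.search.spans.Spans;

IndexReader reader = IndexReader.open("/path/to/index");
SpanTermQuery query = new SpanTermQuery(new Term("contents", "apple"));
Spans spans = query.getSpans(reader);
while (spans.next()) {
    int doc = spans.doc();      // document containing this match
    int start = spans.start();  // first matching position
    int end = spans.end();      // one past the last matching position
    // re-analyze the document text (or read its term vectors) and keep
    // terms whose positions fall within your window around [start, end)
}
reader.close();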
Hi All,
Is there any possibility to kill the IndexSearcher object after every
search?
--
View this message in context:
http://www.nabble.com/how-to-kill-IndexSearcher-object-after-every-search-tf4897436.html#a14026451
Sent from the Lucene - Java Users mailing list archive at Nabble.com.
Have a look at the Field.java class and its constructors. The other
option is to look at what was deprecated in Lucene 1.9 and then look
at Lucene 2.x.
Also, I have some up to date example files of indexing, etc. at http://www.lucenebootcamp.com
(follow the link to the SVN repository) whi
Hi all,
I'm using a program that uses the Lucene library. I've downloaded the
lucene-core-2.2.0.jar file and I'm trying it, but I get this error while trying
to index my documents:
Exception in thread "main" java.lang.NoSuchMethodError:
org.apache.lucene.document.Document.add(Lorg/apache/lucene/docum
"Bill Janssen" <[EMAIL PROTECTED]> wrote:
> > Hmmm ... how many chunks of "about 50 pages" do you do before
> > hitting this? Roughly how many docs are in the index when it
> > happens?
>
> Oh, gosh, not sure. I'm guessing it's about half done.
Ugh, OK. If we could boil this down to a smaller
That exception means your index was written with a newer version of
Lucene than the version you are using to open the IndexReader.
It looks like you used the unreleased (2.3 dev) version of Lucli from
the Lucene trunk and then went back to an older Lucene JAR (maybe 2.2?)
for accessing it? In ge
Which Lucene version do you use?
If it's 2.2, then Field.Keyword, Field.UnIndexed, etc. were removed.
You should instead do:
Document doc = new Document();
doc.add(new Field("id", keywords[i], Field.Store.YES, Field.Index.UN_TOKENIZED));
doc.add(new Field("country", unindexed[i], Field.Store.YES, Field.Index.NO));
doc.add(new Field("contents", unstored[i], Field.Store.NO, Field.Index.TOKENIZED));
I'm studying LIA, but there is a problem with the code. When I run the
code I get errors related to the use of deprecated APIs. Kindly suggest
the right APIs and also instructions on how to handle this situation
with other code.
package lia.indexing;
import org.apache.lucene.stor
This code generates an error; kindly tell me what parameters to use
with the constructors.
Document doc = new Document();
doc.add( Field.Keyword("id", keywords[i]));
doc.add( Field.UnIndexed("country", unindexed[i]));
doc.add(Field.UnStored("contents", unstored[i]));