Got it,
I don't have a clue if this corruption was caused by hardware failure, but
that is possible because we suffer a lot of power failures from time to
time. But the thing is that I've been using Lucene for a long time and I
never got this kind of exception.
The thing is that I'd l...
On 7/24/07, Rafael Rossini <[EMAIL PROTECTED]> wrote:
I did a little debug and found that in the TermScorer, the byte[] norms has
size = 1,119,933, which is the number of docs on my index, and there is a
docID = 1226511; that is, if the "doc" variable in the method is the docID.
I tried to access this document with reader.document() and got a
java.io...
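
(A minimal sketch of such a check, assuming the Lucene 2.2-era IndexReader
API; the index path and the field name "text" are placeholders:)

    import org.apache.lucene.index.IndexReader;

    public class NormsCheck {
        public static void main(String[] args) throws Exception {
            // Open the index and compare the doc counts to the norms length.
            IndexReader reader = IndexReader.open("/path/to/index");
            try {
                byte[] norms = reader.norms("text"); // per-doc norms for one field
                System.out.println("maxDoc       = " + reader.maxDoc());
                System.out.println("numDocs      = " + reader.numDocs());
                System.out.println("norms.length = " + norms.length);

                int suspectDoc = 1226511; // the docID from the exception
                if (suspectDoc >= reader.maxDoc()) {
                    System.out.println("docID " + suspectDoc
                        + " is past maxDoc: the index itself looks corrupt");
                } else {
                    System.out.println(reader.document(suspectDoc));
                }
            } finally {
                reader.close();
            }
        }
    }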
I don't know the exact date of the build, but it is certainly before July 4,
and before the LUCENE-843 patch was committed. My index has 1,119,934 docs
on it and is about 8.2G.
I really don't know how to reproduce this; the only query for which I get
this error, so far, is "brasil"... and I don't know...
That looks spooky. It looks like either the norms array is not
large enough or that docID is too large. Do you know how many
docs you have in your index?
Is this easy to reproduce, maybe on a smaller index?
There was a very large change recently (LUCENE-843) to speed
up indexing, and it's possible...
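
(For context on why this fails the way it does: during scoring, the norms
array is indexed by docID, so any docID >= norms.length blows up exactly
like the trace below. An illustrative sketch of the shape of that lookup,
not the literal TermScorer source; the array size and docID are taken from
the numbers in this thread:)

    import org.apache.lucene.search.Similarity;

    public class NormLookupSketch {
        // Illustrative shape of the per-document norm lookup done while
        // scoring: if doc >= norms.length, norms[doc] throws the
        // ArrayIndexOutOfBoundsException seen in the report below.
        static float normFor(byte[] norms, int doc) {
            return Similarity.decodeNorm(norms[doc]);
        }

        public static void main(String[] args) {
            byte[] norms = new byte[1119933];            // sized as reported
            System.out.println(normFor(norms, 0));       // fine
            System.out.println(normFor(norms, 1226511)); // throws AIOOBE
        }
    }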
Hello all,
I'm using Solr in an app, but I'm getting an error that might be a Lucene
problem. When I perform a simple query like q = brasil I'm getting this
exception:

java.lang.ArrayIndexOutOfBoundsException: 1226511
        at org.apache.lucene.search.TermScorer.score(TermScorer.java:74)
        at org...