On Wed, Jul 23, 2014 at 6:03 AM, Harald Kirsch <harald.kir...@raytion.com> wrote:
> Hi,
>
> Below is an exception I get from one Solr core. According to
> https://issues.apache.org/jira/browse/LUCENE-5617 the check that leads to
> the exception was introduced recently.
>
> Two things are worth mentioning:
>
> a) Contrary to the expectation expressed in the message (file truncated?),
>    the actual file length is *greater* than the expected length.
>
> b) The actual length is 2966208512 = 0xB0CC_C000, which looks like it was
>    rounded up to a page size of 4096 bytes.
>
> The code leading to the exception is
> (http://svn.apache.org/viewvc/lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/codecs/compressing/CompressingStoredFieldsReader.java?view=markup&pathrev=1592731):
>
>     if (maxPointer + CodecUtil.footerLength() != fieldsStream.length()) {
>       throw new CorruptIndexException("Invalid fieldsStream maxPointer (file truncated?): maxPointer="
>           + maxPointer + ", length=" + fieldsStream.length());
>     }
>
> Without delving further into the code, can anyone comment on the chance
> that this is actually a bug? Should the test possibly be '<' instead of '!='
> because fieldsStream.length() is an OS-backed size that may report the
> fully allocated disk space rounded upwards to 4k pages? (Just guessing.)
It's quite clear what length() should return, and that's the exact size of the file. Somehow, unfortunately, your index became corrupt. If you don't mind, can you describe a little of the environment your machine is running on? In particular: the operating system version, the filesystem type, and the output of 'java -version'.
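If it helps to rule out the page-size theory, you can print the exact byte length the filesystem reports for the stored-fields file yourself, independent of Lucene. A rough sketch follows; the path is only a placeholder for the .fdt file of the affected core:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    // Prints the logical byte length of a file as reported by the filesystem.
    // This is the exact file size, not the allocated space rounded up to 4k blocks,
    // so it should match what Lucene's length() reports for the same file.
    public class FdtLengthCheck {
        public static void main(String[] args) throws IOException {
            // Placeholder path -- point this at the .fdt file of the affected core.
            Path fdt = Paths.get(args.length > 0 ? args[0] : "/path/to/index/_0.fdt");
            long size = Files.size(fdt);
            System.out.println(fdt + ": " + size + " bytes, size % 4096 = " + (size % 4096));
        }
    }

Run it as, for example, 'java FdtLengthCheck /path/to/core/index/_0.fdt' (again, a placeholder path). If the reported size agrees with the length printed in the exception, the extra bytes really are in the file and the index is corrupt, rather than the length being mis-measured by the OS.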