Thanks, Mike, for your answer.
I have changed the way I index numeric fields:
doc.add(new DoublePoint(name, (Double) value));
doc.add(new StoredField(name, (Double) value));
But my question is about the old values already in my indexes. I was assuming
the migration tool would do the same conversion, to preserve the indexing.
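(A minimal sketch, for reference: assuming the "migration tool" here means IndexUpgrader, this is roughly how it is run. Note that it only rewrites segments into the current format; it does not convert legacy numeric trie fields into points, which is the behaviour discussed below.)

import java.nio.file.Paths;

import org.apache.lucene.index.IndexUpgrader;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class UpgradeIndex {
  public static void main(String[] args) throws Exception {
    // args[0]: path to the old index directory
    try (Directory dir = FSDirectory.open(Paths.get(args[0]))) {
      // Rewrites all segments into the current format. It does NOT convert
      // legacy numeric (trie-encoded) fields into point values; those have
      // to be re-indexed with IntPoint/DoublePoint/etc.
      new IndexUpgrader(dir).upgrade();
    }
  }
}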
I use the Mac mini on the NFS server side. The sync mount option seems
useless; it just slows down the indexing program.
On Fri, Sep 23, 2016 at 4:43 AM, Michael McCandless <
luc...@mikemccandless.com> wrote:
OK sorry I meant your first index, and it seems to have only one
(broken) segments file. Can you post the "ls -l" output of that first
index? It looks like the file was (illegally) filled with 0s, or at
least the first 4 bytes were.
Lucene writes this file, fsyncs it, does an atomic rename, and then fsyncs
the directory.
You need to change how you index the documents to add e.g. IntPoint,
so that points are actually indexed.
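(For illustration, a minimal sketch of that re-indexing, not taken from this thread; the field name "price" is a placeholder:)

import org.apache.lucene.document.Document;
import org.apache.lucene.document.IntPoint;
import org.apache.lucene.document.StoredField;
import org.apache.lucene.search.Query;

class PointIndexingSketch {
  // Add the value both as a point (indexed, for exact/range queries) and as a
  // stored field (so it can be read back from search hits).
  static void addIntField(Document doc, String name, int value) {
    doc.add(new IntPoint(name, value));
    doc.add(new StoredField(name, value));
  }

  // Point-indexed fields are queried through factory methods on the point class.
  static Query priceBetween(int low, int high) {
    return IntPoint.newRangeQuery("price", low, high);
  }
}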
Mike McCandless
http://blog.mikemccandless.com
On Thu, Sep 22, 2016 at 11:01 AM, Ludovic Bertin wrote:
> Hi,
>
> I have an index with some stored and indexed numeric fields.
> After the migration, I can still see the numeric fields stored in my
> documents, but I was expecting those fields to be indexed as point values.
LegacyNumericUtils is the right solution for your index for now, but
longer term you should migrate to dimensional points instead, which
are a more efficient way to index and range search numerics.
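(A minimal sketch of such a point-based range search, assuming a double field named "value" indexed as a DoublePoint; the names are placeholders, not from this thread:)

import java.io.IOException;

import org.apache.lucene.document.DoublePoint;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.store.Directory;

class PointRangeSearchSketch {
  // Counts documents whose "value" point falls inside [min, max].
  static long countInRange(Directory dir, double min, double max) throws IOException {
    try (DirectoryReader reader = DirectoryReader.open(dir)) {
      IndexSearcher searcher = new IndexSearcher(reader);
      Query q = DoublePoint.newRangeQuery("value", min, max);
      return searcher.count(q);
    }
  }
}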
But: why do you need all distinct values of a field? In general this
is a very dangerous method to
The second index was recovered by checkIndex; I don't know what was in the
second index directory before the recovery. checkIndex can't read the first
index. The index filenames are attached.
I used Lucene 6.0.0 at the beginning, then upgraded to Lucene 6.1.0 to
continue indexing.
On Thu, Sep 22, 2016 at 10:17 PM, Michael McCandless wrote:
Hi there,
I'm migrating an application from Lucene 4.7.0 to Lucene 6.0.1.
I'm facing a problem with this piece of code:
public List<String> getDistinctValues(IndexReader reader, EventField field)
    throws IOException {
  List<String> values = new ArrayList<>();
  Fields fields = MultiFields.getFields(reader);
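(For reference, a minimal sketch of enumerating the distinct values of one field on Lucene 6, assuming the values are plain indexed string terms; EventField is the application's own type, so a String field name is used instead:)

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiFields;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.BytesRef;

class DistinctValuesSketch {
  // Walks the term dictionary of one field and collects every distinct term.
  // This visits the whole dictionary, which can be very costly on a field
  // with many unique values.
  static List<String> distinctValues(IndexReader reader, String fieldName) throws IOException {
    List<String> values = new ArrayList<>();
    Terms terms = MultiFields.getTerms(reader, fieldName);
    if (terms == null) {
      return values; // field does not exist or has no indexed terms
    }
    TermsEnum te = terms.iterator();
    for (BytesRef term = te.next(); term != null; term = te.next()) {
      values.add(term.utf8ToString());
    }
    return values;
  }
}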
Hi,
I have an index with some stored and indexed numeric fields.
After the migration, I can still see the numeric fields stored in my
documents, but I was expecting those fields to be indexed as point values (see
https://lucene.apache.org/core/6_2_1/core/org/apache/lucene/index/PointValues.h
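(One way to check whether a field actually carries point values after the upgrade — a sketch against the Lucene 6.x FieldInfos API, with a placeholder field name:)

import org.apache.lucene.index.FieldInfo;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiFields;

class PointCheckSketch {
  // Returns true if the field has point values indexed (dimension count > 0).
  // Fields upgraded by IndexUpgrader still report 0 here, because the upgrade
  // does not turn legacy numeric trie terms into points.
  static boolean hasPoints(IndexReader reader, String fieldName) {
    FieldInfo fi = MultiFields.getMergedFieldInfos(reader).fieldInfo(fieldName);
    return fi != null && fi.getPointDimensionCount() > 0;
  }
}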
I will reply to my own question:
Looking at the source code of KeywordAnalyzer, I noticed it was not
lowercasing the indexed fields, and the index did not contain lowercased
terms anyway, so I thought the query parser was responsible for this.
And looking again at the source code of QueryParser, I f
Do you have 2 separate segments files in that 2nd index?
Which exact Lucene version is this?
Mike McCandless
http://blog.mikemccandless.com
On Thu, Sep 22, 2016 at 7:44 AM, Ziming Dong wrote:
> I used checkIndex to recover the second index, though I lost many docs,
> but the first index can't be read by checkIndex.
Hello,
I am indexing userAgent fields found in Apache logs. Indexing and querying
everything with KeywordAnalyzer, but I found something strange:
IndexSearcher searcher = new IndexSearcher(reader);
Analyzer q_analyzer = new KeywordAnalyzer();
QueryParser pars
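(A minimal sketch of how such a query-side setup can look on Lucene 6.x, with placeholder names. KeywordAnalyzer itself never lowercases, but the classic QueryParser lowercases wildcard/prefix/range terms unless setLowercaseExpandedTerms(false) is set, which is one possible source of unexpected lowercasing — an assumption, since the original reply is truncated.)

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.core.KeywordAnalyzer;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;

class UserAgentSearchSketch {
  static TopDocs search(DirectoryReader reader, String userAgent) throws Exception {
    IndexSearcher searcher = new IndexSearcher(reader);
    Analyzer analyzer = new KeywordAnalyzer();          // no tokenization, no lowercasing
    QueryParser parser = new QueryParser("userAgent", analyzer);
    parser.setLowercaseExpandedTerms(false);            // 6.x: keep wildcard/prefix terms as typed
    // Quote the value so whitespace inside the user agent is not treated as
    // query syntax; KeywordAnalyzer then emits it as a single term.
    Query q = parser.parse("\"" + QueryParser.escape(userAgent) + "\"");
    return searcher.search(q, 10);
  }
}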
I used checkIndex to recover the second index, though I lost many docs in the
index, but the first index can't be read by checkIndex. The error is:
java -cp lucene-core-6.1.0.jar -ea:org.apache.lucene...
org.apache.lucene.index.CheckIndex /Volumes/HPT8_56T/infomall-index/index0
Opening index @ /Volumes/HPT8_56T
Hmm I'm no longer so sure this is an IW bug: on commit we fsync the
pending_segments_N and then do an atomic rename to segments_N.
Can you describe your IO system? Is it possible it does not implement
fsync or atomic renames correctly?
Also, your 2nd exception indicates the segments_N file was int
Sorry for the slow reply here. Curious that both of these exceptions
are from IW.init. I think this may be a real bug, caused by this:
https://github.com/apache/lucene-solr/commit/981bfba841144d08df1d1a183d39fcd6f195ad56
I'll see if I can make a standalone test case showing this.
If you open th
TermVector disappeared from Lucene 6.2. What can I use instead?
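(Not from this thread, but for reference: on Lucene 6.x the per-document terms can still be read back through IndexReader.getTermVector, assuming term vectors were enabled on the field at index time with FieldType.setStoreTermVectors(true). A minimal sketch:)

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.BytesRef;

class DocTermsSketch {
  // Lists all terms of one field for one document, read from its term vector.
  static List<String> termsOfDoc(IndexReader reader, int docId, String field) throws IOException {
    List<String> result = new ArrayList<>();
    Terms vector = reader.getTermVector(docId, field);  // null if no term vector was stored
    if (vector == null) {
      return result;
    }
    TermsEnum te = vector.iterator();
    for (BytesRef term = te.next(); term != null; term = te.next()) {
      result.add(term.utf8ToString());
    }
    return result;
  }
}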