Hello,
the body of my previous mail was filtered out...
we need to migrate our Lucene 5.5 indexes to version 8.11.1. Fortunately I
found the IndexUpgrader class, which I didn't know yet.
I tried to migrate from major version to major version.
So I did:
C:\workspaces\workspaceecm5\LuceneIndexUpgrade\lb>java -cp lucene-core-6.6.6.jar;lucene-backward-codecs-6.6.6.jar org.apache.lucene.index.IndexUpgrader -delete-prior-commits -verbose "V:\\LuceneMigration\\5"
IFD 0 [2022-01-12T14:02:35.479Z; main]: init: current segments file is "segments_8z";
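For reference, the same upgrade step can also be driven from code rather than the command line; a minimal sketch, assuming the matching lucene-core and lucene-backward-codecs jars for each major version hop (5→6, 6→7, 7→8) are put on the classpath in turn:

```java
import java.nio.file.Paths;

import org.apache.lucene.index.IndexUpgrader;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class UpgradeStep {
    public static void main(String[] args) throws Exception {
        // Run once per major version hop, each time with the jars of the next
        // major version; IndexUpgrader rewrites every segment into the
        // current index format.
        try (Directory dir = FSDirectory.open(Paths.get("V:\\LuceneMigration\\5"))) {
            new IndexUpgrader(dir).upgrade();
        }
    }
}
```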
We use the Highlighter to get text fragments for our hit list.
The code is straightforward, like this:
Analyzer analyzer = new StandardAnalyzer();
QueryParser parser = new QueryParser("content", analyzer);
Highlighter highlighter = new Highlighter(new QueryScorer(parser.parse(pQuery)));
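A slightly fuller sketch of that highlighting path, from query string to fragments; the "content" field comes from the code above, while the three-fragment limit and the stored-field assumption are mine:

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.highlight.Highlighter;
import org.apache.lucene.search.highlight.QueryScorer;

public class HitFragments {
    // Returns up to three highlighted fragments of a hit's stored "content" text.
    static String[] fragments(String queryString, String storedContent) throws Exception {
        Analyzer analyzer = new StandardAnalyzer();
        QueryParser parser = new QueryParser("content", analyzer);
        Query query = parser.parse(queryString);
        Highlighter highlighter = new Highlighter(new QueryScorer(query));
        return highlighter.getBestFragments(analyzer, "content", storedContent, 3);
    }
}
```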
Hi,
I want to migrate our code from 4.6 to 5.5.
We used a FieldCacheRangeFilter, but it no longer exists in version 5, and
DocValuesRangeFilter does not exist anymore in 5.5.0 either.
So what could I use?
Greetings
Sascha
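Not an authoritative answer, but one 5.5-era option is to express the range as a query and attach it as a FILTER clause; a sketch, with the "id" field and bounds borrowed from the 4.6 code elsewhere in this thread and a placeholder main query:

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.NumericRangeQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

// Lucene 5.5 sketch: NumericRangeQuery expresses the long range,
// and Occur.FILTER applies it without influencing scoring.
Query mainQuery = new TermQuery(new Term("content", "foo")); // placeholder
Query range = NumericRangeQuery.newLongRange("id", 1L, 3L, true, true);
Query combined = new BooleanQuery.Builder()
        .add(mainQuery, Occur.MUST)
        .add(range, Occur.FILTER)
        .build();
```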
r.parse("Dokument");
TopDocs docs = searcher.search(q, 100);
System.out.println("anzahl treffer = 5 ? " + docs.scoreDocs.length);
FieldCacheRangeFilter ffilter = FieldCacheRangeFilter.newLongRange("id", 1L, 3L, true, true);
FilteredQuery fq3 = new FilteredQuery(q, ffilter);
docs = searcher.
Hi,
I am using Lucene 4.6.0.
Can I use a FieldCacheRangeFilter on a doc values field?
Like
FieldCacheRangeFilter ffilter =
FieldCacheRangeFilter.newLongRange("dv_id", 0L, Long.MAX_VALUE, true, true);
where dv_id is a NumericDocValues field.
Regards
Sascha
---
hello,
we must make a design decision for our system. We have many customers which
should all use the same server. Now we are thinking about whether to make a
separate Lucene index for each customer, or to make one large index and use a
filter for each customer.
Any suggestions, comments or experiences?
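If the one-large-index route is taken, tenant isolation typically comes down to a single required clause added to every search; a sketch (Lucene 5+ style, the "customer" field name is made up here):

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class TenantFilter {
    // Wraps any user query so only the given customer's documents can match.
    static Query forCustomer(Query userQuery, String customerId) {
        return new BooleanQuery.Builder()
                .add(userQuery, Occur.MUST)                           // the actual search
                .add(new TermQuery(new Term("customer", customerId)), // tenant restriction
                     Occur.FILTER)                                    // no effect on scoring
                .build();
    }
}
```

Separate per-customer indexes avoid this clause but multiply open files and merge overhead; which side wins usually depends on customer count and size skew.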
Hi,
we use Lucene 4.6 in our project. We got some performance problems with
IndexSearcher.doc(int docID, Set<String> fieldsToLoad). I found this issue
https://issues.apache.org/jira/browse/LUCENE-6322
and maybe that is our problem. Is it possible to patch Lucene 4.x with the new
source of CompressingStore
I use TermsQuery for creating a join query. The list of terms can be quite
large, e.g. millions of entries.
When this is the case, the IntroSorter sorting the terms becomes a performance
bottleneck.
Could I use another strategy or algorithm for building those joins on large
sets of terms?
Query with OR
Yep, that looks good to me.
--
Ian.
On Tue, Feb 10, 2015 at 5:01 PM, Sascha Janz wrote:
> hm, I already thought this could be the solution but didn't know how to do
> the OR operation
>
> so I tried this
>
> BooleanQuery bquery = new Boolean
Sent: Tuesday, 10 February 2015, 17:31
From: "Ian Lea"
To: java-user@lucene.apache.org
Subject: Re: combine two MultiTermQuery with OR
org.apache.lucene.search.BooleanQuery.
--
Ian.
Hi,
I want to combine two MultiTermQueries.
One searches over FieldA, one over FieldB. Both queries should be combined
with the "OR" operator.
So in Lucene syntax I want to search
FieldA:Term1 OR FieldB:Term1, FieldA:Term2 OR FieldB:Term2, FieldA:Term3 OR
FieldB:Term3...
How can I do this?
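The thread's answer is BooleanQuery; a sketch of what the OR combination could look like in the 4.x API of that era, with PrefixQuery standing in for whichever MultiTermQuery subclass is actually used:

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.PrefixQuery;

// Lucene 4.x: SHOULD + SHOULD behaves as OR, so a document matches
// if either the FieldA clause or the FieldB clause matches.
BooleanQuery bquery = new BooleanQuery();
bquery.add(new PrefixQuery(new Term("FieldA", "Term1")), Occur.SHOULD);
bquery.add(new PrefixQuery(new Term("FieldB", "Term1")), Occur.SHOULD);
```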
as an IntField.
--
Ian.
hello,
I am using Lucene 4.6. In my query I use a collector to get field values.
setNextReader is implemented as below:
public void setNextReader(AtomicReaderContext context) throws IOException {
    cacheIDs = FieldCache.DEFAULT.getInts(context.reader(), "id", true);
}
and collect
public
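For context, the surrounding collector could look roughly like this in 4.6; the collect method is truncated in the mail, so its body here is a reconstruction, not the original code:

```java
import java.io.IOException;

import org.apache.lucene.index.AtomicReaderContext;
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.FieldCache;
import org.apache.lucene.search.Scorer;

// Lucene 4.6 collector skeleton around the setNextReader shown above.
public class IdCollector extends Collector {
    private FieldCache.Ints cacheIDs;

    @Override
    public void setNextReader(AtomicReaderContext context) throws IOException {
        cacheIDs = FieldCache.DEFAULT.getInts(context.reader(), "id", true);
    }

    @Override
    public void collect(int doc) {
        int id = cacheIDs.get(doc); // per-segment doc id -> cached field value
        // ... record the id for the hit ...
    }

    @Override
    public void setScorer(Scorer scorer) {}

    @Override
    public boolean acceptsDocsOutOfOrder() { return true; }
}
```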
Hi,
is there a chance to add an additional clause to a query for a field that
should not be null?
Greetings
Sascha
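Not a definitive answer, but in the 4.x line a "field has a value" restriction can be expressed with FieldValueFilter; a sketch, where baseQuery and the field name are placeholders:

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.FieldValueFilter;
import org.apache.lucene.search.FilteredQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

// Lucene 4.x sketch: keep only documents that actually have a value in "myfield".
Query baseQuery = new TermQuery(new Term("content", "foo")); // placeholder query
Query notNull = new FilteredQuery(baseQuery, new FieldValueFilter("myfield"));
```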
Hi,
is there a solution to build suggestions based on a query?
Greetings
Sascha
merged in-memory, and
> your NRT reopens don't result in flushing new segments to disk.
>
> Shai
>
>
> On Thu, Aug 7, 2014 at 1:14 PM, Sascha Janz wrote:
>
>> hi,
>>
>> i try to speed up our indexing process. we use SeacherManager with
>> applydeletes
many thanks again. this was a good tip.
after switching from FSDirectory to NRTCachingDirectory queries run at double
speed.
Sascha
Sent: Thursday, 07 August 2014, 14:54
From: "Sascha Janz"
To: java-user@lucene.apache.org
Subject: Re: improve indexing
hi,
I try to speed up our indexing process. We use SearcherManager with
applyDeletes to get a near-real-time Reader.
We don't really have "many" incoming documents, but the documents must be
updated from time to time and the number of documents to be updated can be
quite large.
I tried some tes
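The NRTCachingDirectory setup mentioned in the follow-up ("queries run at double speed") could look like this 4.x-era sketch; the path and cache sizes are illustrative, not from the mails:

```java
import java.io.File;

import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.SearcherManager;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.NRTCachingDirectory;

public class NrtSetup {
    static SearcherManager open(IndexWriterConfig config) throws Exception {
        // Small, freshly flushed NRT segments stay in RAM; only larger
        // segments are written through to disk.
        NRTCachingDirectory dir = new NRTCachingDirectory(
                FSDirectory.open(new File("/path/to/index")), // illustrative path
                5.0,    // maxMergeSizeMB: segments below this stay cached
                60.0);  // maxCachedMB: total RAM budget for cached files
        IndexWriter writer = new IndexWriter(dir, config);
        // applyAllDeletes = true: the NRT reader reflects deletes/updates
        return new SearcherManager(writer, true, null);
    }
}
```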
fast.
Uwe
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de
> -Original Message-
> From: Sascha Janz [mailto:sascha.j...@gmx.net]
> Sent: Wednesday, August 06, 2014 5:57 PM
> To: java-user@lucene.apache.org
> Subject
Subject: Re: Performance StringCoding.decode
how to monitor? use jprofile?
From: Sascha Janz
Date: 2014-08-05 22:36
To: java-user@lucene.apache.org
Subject: Performance StringCoding.decode
hi,
I want to speed up our search performance. So I ran tests and monitored them
with Java Mission Control.
The analysis showed that one hotspot is
sun.nio.cs.UTF_8$Decoder.decode(byte[], int, int, char[])
- java.lang.StringCoding.decode(Charset, byte[], int, int)
- java.lang.String.<init>(byte[],
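A frequent cause of this hotspot is decoding every stored field of every hit; if the hit list only shows a couple of fields, they can be loaded selectively (the field names here are assumptions):

```java
import java.util.HashSet;
import java.util.Set;

import org.apache.lucene.document.Document;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;

public class SelectiveLoad {
    // Decode only the stored fields the hit list actually displays,
    // instead of turning every stored byte[] into a String.
    static Document load(IndexSearcher searcher, ScoreDoc hit) throws Exception {
        Set<String> fieldsToLoad = new HashSet<>();
        fieldsToLoad.add("id");
        fieldsToLoad.add("title");
        return searcher.doc(hit.doc, fieldsToLoad);
    }
}
```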
We use Lucene to search in hierarchical structures, like a folder structure in
a filesystem.
The documents have an extra field which specifies the location of the document.
So if you want to search documents under a specific folder, you have to query a
prefix in this field.
But if the docume
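The folder lookup described above maps naturally onto a PrefixQuery over the location field; a sketch, where the field name "path" and the separator convention are assumptions:

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.Query;

// Documents store their folder location, e.g. "/projects/acme/reports/2014/".
// A prefix query matches everything in that folder and all folders below it.
Query underFolder = new PrefixQuery(new Term("path", "/projects/acme/"));
```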
commit is very costly,
> and it's only needed for recovery (so you know which docs are in the
> index if the machine/OS crashes).
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Sat, May 3, 2014 at 11:46 AM, Sascha Janz wrote:
Hi,
We use Lucene 4.6; our application continuously receives new documents,
mostly emails. We need the updates in near real time, so we open the
IndexReader with Directory.open and the IndexWriter.
Periodically we do a commit, e.g. every 200 documents.
We used to close the IndexWriter on commi
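The pattern the quoted reply points at — commit rarely for durability, reopen often for visibility — could be sketched like this; the writer, manager, and "id" field are assumed from context, not taken from the original code:

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.SearcherManager;

public class IngestLoop {
    private int docsSinceCommit = 0;

    // Called for each incoming document (e.g. a new email).
    void ingest(IndexWriter writer, SearcherManager manager, Document doc) throws Exception {
        writer.updateDocument(new Term("id", doc.get("id")), doc);
        manager.maybeRefresh();           // cheap NRT reopen; no commit needed for visibility
        if (++docsSinceCommit >= 200) {   // the "every 200 documents" cadence from above
            writer.commit();              // costly fsync; only needed for crash recovery
            docsSinceCommit = 0;
        }
    }
}
```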