It looks like your pk_p and pk_c fields aren't indexed -- they just have
doc values.
If you try making them KeywordFields instead (so they're indexed and have
doc values), does it work?
Also, the join module may be overkill for what you're trying to do, since
it looks like you're indexing parent/
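For what it's worth, a minimal sketch of the KeywordField suggestion above (assumptions: Lucene 10.x, the pk_p/pk_c field names from the thread, an already-open Directory named dir). KeywordField gives the field both an inverted index and a sorted doc value, which should satisfy the join on either side:

    // index time: give the join keys an inverted index AND doc values
    Document parent = new Document();
    parent.add(new KeywordField("pk_p", "42", Field.Store.NO));
    Document child = new Document();
    child.add(new KeywordField("pk_c", "42", Field.Store.NO));

    // search time: join from pk_c to pk_p
    IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));
    Query fromQuery = new MatchAllDocsQuery();          // selects the "from" side documents
    Query joinQuery = JoinUtil.createJoinQuery(
        "pk_c",          // fromField
        false,           // multipleValuesPerDocument
        "pk_p",          // toField
        fromQuery,
        searcher,
        ScoreMode.None); // org.apache.lucene.search.join.ScoreMode
    TopDocs hits = searcher.search(joinQuery, 10);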
Hi,
I am trying to create and run a "join" query using JoinUtil.createJoinQuery()
in Lucene 10.2.0. However, the query returns 0 results. I am attaching my
little test program. Can you please tell me what I am doing wrong?
Thanks a lot,
Markos.
import java.io.IOException;
import java.io.File;
imp
I would now like to run the demo program. How can I do that? I see some
> class files under lucene/demo/build/classes/java/main but how do I build
> the full classpath with all the dependencies needed to run the demo
> program? Can anyone help me? Thanks,
>
> S.
>
> > >
> > > ./gradlew
> > > ./gradlew assemble
> > >
or such need, where
> Token and Payload class both are not there now?
>
> Regards
> Rajib
>
-Original Message-
From: Uwe Schindler
Sent: 10 February 2023 15:36
To: java-user@lucene.apache.org
Subject: Re: Need help for conversion code from Lucene 2.4.0 to 8.11.2
Hi,
the reason for this is that files in Lucene are always write-once. We
never ever change a file after it was written and committed in the
2-phase-commit. If you
file with the new logic.
Regards
Rajib
output.close();
} catch(Exception e) {
e.printStackTrace();
}
==
Regards
Rajib
-Original Message-
From: Uwe Schindler
Sent: 06 February 2023 16:46
To: java-user@lucene.apache.org
Subject: Re: Need help for conversion code from Lucene 2.4.0 to 8.11.2
Hi,
Since around Lucene 4 (maybe alre
places.
Currently, this API is not there.
Could you please suggest how we can handle this API in 8.11.2?
Regards
Rajib
-Original Message-
From: Mikhail Khludnev
Sent: 01 February 2023 12:22
To: java-user@lucene.apache.org
Subject: Re: Need help for conversion code from Lucene 2.4.0 to 8.11.2
ch API to
> use?
>
> Previously there was an API
> IndexReader.deleteDocument(docID)
>
org.apache.lucene.index.IndexWriter#deleteDocuments(org.apache.lucene.index.Term...)
>
> ==
> IndexWriter.addIndexesNoOptimize(FSDirectory[])
> I
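A short sketch of those two replacements (assumptions: an open IndexWriter named writer, illustrative field and directory names):

    // Lucene 2.4: IndexReader.deleteDocument(docID)
    // Lucene 8.x: delete through the writer, by Term (or by Query)
    writer.deleteDocuments(new Term("id", "doc-42"));

    // Lucene 2.4: IndexWriter.addIndexesNoOptimize(FSDirectory[])
    // Lucene 8.x: IndexWriter.addIndexes(Directory...)
    writer.addIndexes(otherDir1, otherDir2);
    writer.commit();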
37
To: java-user@lucene.apache.org
Subject: RE: Need help for conversion code from Lucene 2.4.0 to 8.11.2
Hi Mikhail,
Thanks for your suggestion. It solved a lot of cases on my end today. 😊
I need some more suggestions from you. I am putting them together below, one
b
? If so, can you please help with APIs
===
Regards
Rajib
-Original Message-
From: Mikhail Khludnev
Sent: 29 January 2023 18:05
To: java-user@lucene.apache.org
Subject: Re: Need help for conversion code from Lucene 2.4.0 to 8.11.2
Hello,
You can use
n 8.11.2.
Could you please suggest some way to extract all the Terms with an IndexReader,
or some alternative approach?
Regards
Rajib
-Original Message-
From: Mikhail Khludnev
Sent: 19 January 2023 04:26
To: java-user@lucene.apache.org
Subject: Re: Need help for conversion code from Lucene 2
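One way to enumerate all terms in 8.11.2 (a sketch, assuming an open IndexReader named reader; classes are from org.apache.lucene.index and org.apache.lucene.util):

    for (FieldInfo fieldInfo : FieldInfos.getMergedFieldInfos(reader)) {
        Terms terms = MultiTerms.getTerms(reader, fieldInfo.name);
        if (terms == null) continue;                 // field has no inverted index
        TermsEnum termsEnum = terms.iterator();
        BytesRef term;
        while ((term = termsEnum.next()) != null) {
            // term.utf8ToString() is the term text, termsEnum.docFreq() its doc frequency
        }
    }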
to 8.11.2 for our
> platform code.
> We use Lucene extensively in our code.
>
> We have already converted several parts of our code to the Lucene 8.11.2 APIs.
>
> But in a few places we are stuck on which new Lucene APIs to use, as we are
> not finding any suitable match.
>
> Can somebody help me,
match.
Can somebody help me convert the code below from Lucene 2.4.0 to 8.11.2?
ProcessDocs(IndexReader reader, Term t) {
    final TermDocs termDocs = reader.termDocs();
    termDocs.seek(t);
    while (termDocs.next()) {
        //Some internal
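A rough 8.11.2 equivalent of that loop walks the postings per segment (a sketch; the body of the loop is whatever the original internal processing did, and imports from org.apache.lucene.index and org.apache.lucene.search are assumed):

    void processDocs(IndexReader reader, Term t) throws IOException {
        for (LeafReaderContext leaf : reader.leaves()) {
            PostingsEnum postings = leaf.reader().postings(t, PostingsEnum.NONE);
            if (postings == null) continue;           // term absent in this segment
            int doc;
            while ((doc = postings.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
                int globalDocId = leaf.docBase + doc; // segment-local id + docBase
                // ... same internal processing as before
            }
        }
    }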
> Currently I badly need some examples of using TokenStream,
> token attributes, and *Filter classes.
> I need to replace the uses of "Token".
>
> Could somebody please help me with this?
>
> Regards
> Rajib
>
--
Sincerely yours
Mikhail Khludnev
https://t.me/MUST_SEARCH
A caveat: Cyrillic!
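For the Token replacement question above, a minimal sketch of the attribute-based API (assumptions: any Analyzer instance; the field name and text are illustrative):

    try (TokenStream ts = analyzer.tokenStream("body", "some text to analyze")) {
        CharTermAttribute termAtt = ts.addAttribute(CharTermAttribute.class);
        OffsetAttribute offsetAtt = ts.addAttribute(OffsetAttribute.class);
        ts.reset();
        while (ts.incrementToken()) {
            // what used to live on Token (term text, offsets, ...) is now read from attributes
            System.out.println(termAtt + " [" + offsetAtt.startOffset()
                + "," + offsetAtt.endOffset() + "]");
        }
        ts.end();
    }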
Hello McCoy.
"DocValues", "KnnVectors" and "Postings" are three core principally
different APIs/data structures ie docValues is data column; and postings is
inverted index.
By default codec defines these three formats for all fields. And per-field
wrappers allow configuring separate formats for a p
Hi, Everyone:
I have just started studying Lucene's source code.
I'm confused about the per-field formats in the package
"org.apache.lucene.codecs.perfield".
There are many formats in Lucene's codec, but there are only 3 per-field
formats: "DocValues", "KnnVectors" and "Postings".
So, what i
is not fixed yet).
> Please let us know your Jira/Github usernames if you don't see mapping(s)
> for your account in this file:
>
Hi I would like to be added, thanks.
My Github account: freedev
Jira: v.dam...@gmail.com
On Sun, 31 Jul 2022 at 12:09, Michael McCandless
wrote:
> Hello Lucene users, contributors and developers,
>
> If you have used Lucene's Jira and you have a GitHub account as well,
> please check whether y
Cc: java-user@lucene.apache.org
> Subject: Re: [HELP] Link your Apache Lucene Jira and GitHub account ids
> before Thursday August 4 midnight (in your local time)
>
> OK done:
> https://github.com/apache/lucene-jira-archive/commit/13fa4cb46a1a6d609448240e4f66c263da8b3fd1
> &
Thank You Thank You
Best regards
From: Michael McCandless
Sent: Saturday, August 6, 2022 11:29:25 AM
To: Baris Kazar
Cc: java-user@lucene.apache.org
Subject: Re: [HELP] Link your Apache Lucene Jira and GitHub account ids before
Thursday August 4 midnight (in
l McCandless
> *Sent:* Saturday, August 6, 2022 10:12 AM
> *To:* java-user@lucene.apache.org
> *Cc:* Baris Kazar
> *Subject:* Re: [HELP] Link your Apache Lucene Jira and GitHub account ids
> before Thursday August 4 midnight (in your local time)
>
> Thanks Baris,
>
> And
I think so.
Best regards
From: Michael McCandless
Sent: Saturday, August 6, 2022 10:12 AM
To: java-user@lucene.apache.org
Cc: Baris Kazar
Subject: Re: [HELP] Link your Apache Lucene Jira and GitHub account ids before
Thursday August 4 midnight (in your local
My GitHub username is bmkazar.
Can you please register me?
Best regards
From: Michael McCandless
Sent: Saturday, August 6, 2022 6:05:51 AM
To: d...@lucene.apache.org
Cc: Lucene Users ; java-dev
Subject: Re: [HELP] Link your Apache Lucene Jira and GitHub account
Hi Adam, I added your linked accounts here:
https://github.com/apache/lucene-jira-archive/commit/c228cb184c073f4b96cd68d45a000cf390455b7c
And Tomoko added Rushabh's linked accounts here:
https://github.com/apache/lucene-jira-archive/commit/6f9501ec68792c1b287e93770f7a9dfd351b86fb
Keep the linked
Hi,
My mapping is:
JiraName,GitHubAccount,JiraDispName
shahrs87, shahrs87, Rushabh Shah
Thank you Tomoko and Mike for all of your hard work.
On Sun, Jul 31, 2022 at 3:08 AM Michael McCandless <
luc...@mikemccandless.com> wrote:
> Hello Lucene users, contributors and developers,
>
> If you hav
Hi Atri and Christian,
thanks for your reply, we already have your accounts in
-
https://github.com/apache/lucene-jira-archive/blob/7654c0168a86fb05e942666d4514d48966d223bb/migration/mappings-data/account-map.csv.20220722.verified#L42
-
https://github.com/apache/lucene-jira-archive/blob/7654c0168a
Thanks. My mapping is:
cm,cmoen,Christian Moen
On Sun, Jul 31, 2022 at 12:08 PM Michael McCandless <
luc...@mikemccandless.com> wrote:
> Hello Lucene users, contributors and developers,
>
> If you have used Lucene's Jira and you have a GitHub account as well,
> please check whether your user i
Mine is atris for github, atri for JIRA
On Mon, Aug 1, 2022 at 4:03 PM Tomoko Uchida
wrote:
>
Hi Mike, Marcus, and Praveen:
I verified the added two mappings - these Jira users have activity on
Lucene Jira, also corresponding GitHub accounts are valid.
- marcussorealheis
- pru30
Tomoko
2022年8月1日(月) 18:40 Michael McCandless :
> Thanks Praveen,
>
> I added your mapping here:
> https://gi
Thanks, added here:
https://github.com/apache/lucene-jira-archive/commit/d91534e67b35004f212100d73008283327f2f1e7
Please confirm it's right ;)
Mike
On Sun, Jul 31, 2022 at 7:27 AM 翁剑平 wrote:
> Hi, could you help to add my info, thanks a lot
> jira full name: jianping weng
> git
Hi, could you help to add my info, thanks a lot
jira full name: jianping weng
github id: wjp719
the Jira issue I created before:
https://issues.apache.org/jira/browse/LUCENE-10425
the GitHub PR I submitted before: https://github.com/apache/lucene/pull/780
Best Regards,
jianping weng
Michael
Hello Lucene users, contributors and developers,
If you have used Lucene's Jira and you have a GitHub account as well,
please check whether your user id mapping is in this file:
https://github.com/apache/lucene-jira-archive/blob/main/migration/mappings-data/account-map.csv.20220722.verified
If no
on calculating distance of lat long on the basis of
input params.
But I am having a hard time implementing such a custom scoring function.
Any help / pointers / examples / sample code snippets would be
of great help.
Looking forward to hearing from you.
Thank you so much in advance!
Regards,
Lokesh
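One hedged option for this kind of distance-based boosting, if the location is indexed as both LatLonPoint and LatLonDocValuesField under the same name (field name and numbers below are illustrative):

    Query match = new TermQuery(new Term("type", "restaurant"));
    // score grows smoothly as documents get closer to the origin point;
    // the contribution is weight/2 at pivotDistanceMeters
    Query nearby = LatLonPoint.newDistanceFeatureQuery("location", 2.0f, 40.7128, -74.0060, 5_000);
    Query combined = new BooleanQuery.Builder()
        .add(match, BooleanClause.Occur.MUST)
        .add(nearby, BooleanClause.Occur.SHOULD)
        .build();
    TopDocs hits = searcher.search(combined, 20);

A FunctionScoreQuery over a custom DoubleValuesSource is the more general route if the scoring formula is not a simple distance decay.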
I have created a custom Collector extending SimpleCollector. I can see the
methods scoreMode() and collect(int doc).
I am seeing that the collect method is invoked by lucene with the child
docId. Am I moving in the right direction?
But to collect the values I would need the Document by using
read
Indeed you shouldn't load all hits, you should register a
org.apache.lucene.search.Collector that will aggregate data while matches
are being collected.
Since you are already using a ToChildBlockJoinQuery, you should be able to
use it in conjunction with utility classes from lucene/facets. Have yo
Hi Adrien,
Thanks for the reply.
I am able to retrieve the child docIds using the ToChildBlockJoinQuery.
Now, for me to do aggregates, I need to find the document using
reader.document(int docID), right? If that is the case, won't getting all
the documents be a costly operation, and then fina
It's not straightforward as we don't provide high-level tooling to do this.
You need to use the BitSetProducer that you pass to the
ToParentBlockJoinQuery in order to resolve the range of child doc IDs for a
given parent doc ID (see e.g. how ToChildBlockJoinQuery does it), and then
aggregate over t
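A minimal sketch of such a collector that aggregates from doc values instead of loading stored documents (assumption: a numeric doc-values field named "amount" on the child documents):

    final class SumCollector extends SimpleCollector {
        long sum;
        private NumericDocValues amount;

        @Override
        protected void doSetNextReader(LeafReaderContext context) throws IOException {
            amount = DocValues.getNumeric(context.reader(), "amount");
        }

        @Override
        public void collect(int doc) throws IOException {
            if (amount.advanceExact(doc)) {
                sum += amount.longValue();
            }
        }

        @Override
        public ScoreMode scoreMode() {
            return ScoreMode.COMPLETE_NO_SCORES;
        }
    }

    // usage (or wrap it in a CollectorManager on recent versions):
    SumCollector collector = new SumCollector();
    searcher.search(toChildBlockJoinQuery, collector);
    long total = collector.sum;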
Hi Team,
I have a document structure as a customer which itself has few attributes
like gender, location etc.
Each customer will have a list of facts like transaction, product views etc.
I want to do an aggregation of the facts. For example find all customers
who are from a specific location and
During this change I had to change the way I store indexes. This change
results in many more .cfs and .fdt files being generated than before.
Previously there were 5-7 files in the index folder; now it has grown to 40+.
Does this change in how the indexes are stored internally have any effect,
with this ch
//Method to create a document
private static Document createDocumentTextField(HashMap<String, String> fields) {
    Document document = new Document();
    for (String key : fields.keySet()) {
        String val = fields.get(key);
        Field f = new TextField(key, val, Field.Store.YES);
        document.add(f);
    }
    return document;
}
Yes, it would be great if you could share code snippets. Maybe it will
help others or maybe someone will have a suggestion to improve or an
alternative.
All the best
Michael
On 29.04.21 at 14:35, amitesh116 wrote:
Thank you Michael!
I solved this requirement by setting the tokenStream at
Thank you Michael!
I solved this requirement by setting the tokenStream at the field level and
not leaving it to the analyzer. This gives control over altering the full
text before tokenization using custom methods.
This has memory overhead which is handled by writing the documents one at a
time
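A sketch of that field-level TokenStream approach (assumptions: alterFullText is a made-up stand-in for the custom pre-tokenization rewrite; analyzer, doc and writer already exist):

    String altered = alterFullText(rawText);                    // custom rewrite of the full text
    TokenStream stream = analyzer.tokenStream("body", altered); // the analyzer still tokenizes it
    doc.add(new Field("body", stream, TextField.TYPE_NOT_STORED));
    writer.addDocument(doc);                                    // the stream is consumed here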
ne
(probably a committer) who /really understands/ the low-level stuff and
would be able to help. These are serious people (I've looked them up)
and I would expect you would be paid for your time. Please contact me
off-list if you might be able to help.
Cheers
Charlie
P.S. hope to see s
Hi Amitesh
Thanks for the more concrete examples.
Unfortunately I do not know how to solve this better with Lucene itself
in a more general context, but did you ever consider using BERT in
combination with Lucene/Solr
https://blog.google/products/search/search-language-understanding-bert/
ht
Hi Gus, thank you for your reply!
In my search system, users are complaining that they get results containing
negation terms when they don't expect them. As explained in my original post,
users don't want to get documents having a term like "Non Vitamin K" when they
search for "Vitamin K".
But because each terms ar
Hi Amitesh
I don't have statistical proof, but I think it doesn't help on mailing
lists with volunteers to write "I badly need some help", because it
seems to me the contrary will happen: people will not help at all.
I think there are various reasons for this
I badly need some help on this one. Someone please give some direction.
Regards
Amitesh
re's something I'm not understanding here. I'd thought that
flattening the stream meant that no token will have position length >
1... was I wrong? I would greatly appreciate any help with understanding
this.
Thanks!
Nicolás.-
ic field this is what I did:
> doc.add(new IntPoint("niveauexp", 4)); doc.add(new StoredField("niveauexp",
> 4));
>
> Problem:
> when I tried to search "niveauexp" using 4 as search paramet
as search parameter, I receive
results.
Please help.
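For what it's worth, an IntPoint field is not matched by the classic term/QueryParser syntax; it is queried through the factory methods on IntPoint, roughly like this:

    Query exact = IntPoint.newExactQuery("niveauexp", 4);
    TopDocs hits = searcher.search(exact, 10);
    // ranges work the same way:
    Query range = IntPoint.newRangeQuery("niveauexp", 3, 5);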
> -Original Message-
> From: Uwe Schindler
> Sent: Tuesday, April 17, 2018 4:02 PM
> To: java-user@lucene.apache.org
> Subject: Re: Help! - Max Segment name reached
Hi,
Create a new empty index in a new directory and use addIndexes() with the other
directory with the broken index.
This will copy all segments but renumber them.
Uwe
Am April 17, 2018 3:52:27 PM UTC schrieb Stuart Goldberg
:
>We have an index that has run into this bug:
>https://issues.apach
We have an index that has run into this bug:
https://issues.apache.org/jira/browse/LUCENE-7999
Although this is reported to be fixed in Lucene 7.2, we are at 4.10.4 and
cannot upgrade.
By looking at the code it seems that the last segment number counter is
persisted in segment_h. When creat
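A sketch of Uwe's suggestion above (paths and analyzer are illustrative; on 4.10.x FSDirectory.open takes a File and IndexWriterConfig also takes a Version argument):

    try (Directory broken = FSDirectory.open(Paths.get("/path/to/broken-index"));
         Directory fresh = FSDirectory.open(Paths.get("/path/to/new-index"));
         IndexWriter writer = new IndexWriter(fresh,
             new IndexWriterConfig(new StandardAnalyzer()))) {
        writer.addIndexes(broken);  // copies every segment into the new index and renumbers it
        writer.commit();
    }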
Since you are writing a custom token filter, it's up to you to return
successive tokens by setting the appropriate attributes each time
incrementToken() is called. Have you read the TokenStream javadocs?
On Mar 15, 2018 10:35 AM, "deepu srinivasan" wrote:
> Hi .
> How do i split a single token and index the
Hi,
How do I split a single token and index both parts? For example, if I receive a
token "&&11&" in my custom token filter, I would like to index it as "&" and
"11".
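A self-contained sketch of such a filter: it buffers the pieces of a split token and hands them out one incrementToken() call at a time. The split rule here just separates runs of '&' from everything else, so "&&11&" becomes "&&", "11", "&" — adapt it to your exact needs. Offsets of the pieces keep the original token's offsets, which is fine for a sketch.

    import java.io.IOException;
    import java.util.ArrayDeque;
    import java.util.Deque;
    import org.apache.lucene.analysis.TokenFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
    import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;

    public final class SplitRunsFilter extends TokenFilter {
        private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
        private final PositionIncrementAttribute posIncAtt = addAttribute(PositionIncrementAttribute.class);
        private final Deque<String> pending = new ArrayDeque<>();

        public SplitRunsFilter(TokenStream input) {
            super(input);
        }

        @Override
        public boolean incrementToken() throws IOException {
            if (!pending.isEmpty()) {
                // emit a buffered piece of the previously split token
                termAtt.setEmpty().append(pending.removeFirst());
                posIncAtt.setPositionIncrement(1);
                return true;
            }
            if (!input.incrementToken()) {
                return false;
            }
            // split the incoming term into runs of '&' and runs of everything else
            String term = termAtt.toString();
            int start = 0;
            for (int i = 1; i <= term.length(); i++) {
                if (i == term.length() || (term.charAt(i) == '&') != (term.charAt(start) == '&')) {
                    pending.addLast(term.substring(start, i));
                    start = i;
                }
            }
            if (pending.size() <= 1) {
                pending.clear();        // nothing to split, keep the original token as-is
                return true;
            }
            termAtt.setEmpty().append(pending.removeFirst());  // first piece replaces the term
            return true;
        }

        @Override
        public void reset() throws IOException {
            super.reset();
            pending.clear();
        }
    }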
so
that most disk space is not spent on deleted docs. I can't think of
settings that would make it more efficient.
If you call deleteDocuments because you are eg. deleting data after a given
age, it would help to have time-based indices so that you would remove an
entire index at once rather
I call deleteDocuments
On Feb 28, 2018 8:16 PM, "Adrien Grand" wrote:
What do you mean by purging? What methods do you call?
On Wed, Feb 28, 2018 at 19:34, Stuart Goldberg wrote:
I have a huge Lucene index. On disk it's about 24 GB.
I have a purging routine that is supposed to run and purge old docs.
There are about 650 million docs in there and through testing I have
determined that about 1/3 of these need to be purged.
During the purge, every so often it's appare
the
compression technique of Lucene.
So, could you please help us with a way to implement compression of the index
size by using a Codec or any other approach?
Even a Java sample is fine for us; we will try to understand the
implementation and port it to .NET.
Kindly please revert
Thankx Adrien. I'll try this approach too.
- Best
Parit Bansal
On 01/05/2018 10:43 AM, Adrien Grand wrote:
You can use PerFieldSimilarityWrapper to have different BM25 settings per
field.
On Fri, Jan 5, 2018 at 10:37, Parit Bansal wrote:
Hi Robert,
passing b = 0 will influence the simil
Hi Robert,
Passing b = 0 will influence the similarity across all the fields (no?).
I wanted it to be field-specific. I think Uwe's suggestion of not
indexing norms for specific fields should work better.
Thankx again.
- Best
Parit Bansal
On 01/04/2018 08:34 PM, Robert Muir wrote:
You do
Hi Uwe,
You are right. Thankx! :)
- Best
Parit Bansal
On 01/04/2018 05:02 PM, Uwe Schindler wrote:
How about just indexing the field without norms?
Uwe
On January 4, 2018 3:58:27 PM UTC, Parit Bansal wrote:
Hi,
I am trying to tweak BM25Similarity for my use case wherein, I want to
avoid
You don't need to do any subclassing for this: just pass parameter b=0
to the constructor.
On Thu, Jan 4, 2018 at 10:58 AM, Parit Bansal wrote:
> Hi,
>
> I am trying to tweak BM25Similarity for my use case wherein, I want to avoid
> the effects of field-length normalization for certain fields (re
Hi,
I am trying to tweak BM25Similarity for my use case: I want to
avoid the effects of field-length normalization for certain fields
(i.e. return a constant value irrespective of how long the document is).
Currently, both computeWeight and computeNorm methods are declared final
in BM25Sim
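A sketch combining the suggestions in this thread (the field name "title" is illustrative): a per-field similarity with b = 0 disables length normalization for that field, and Uwe's omitNorms route achieves the same effect at index time.

    Similarity perField = new PerFieldSimilarityWrapper() {
        private final Similarity standard = new BM25Similarity();             // k1=1.2, b=0.75
        private final Similarity noLengthNorm = new BM25Similarity(1.2f, 0f); // b=0

        @Override
        public Similarity get(String field) {
            return "title".equals(field) ? noLengthNorm : standard;
        }
    };
    // use the same similarity at index time and at search time
    indexWriterConfig.setSimilarity(perField);
    indexSearcher.setSimilarity(perField);
    // alternative: FieldType.setOmitNorms(true) on the fields that should skip length normalization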
6.6.2
-- Original --
From: "wujun";
Date: Thu, Nov 23, 2017 02:23 PM
To: "java-user";
Subject: who can help?
IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));
List<LeafReaderContext> leafReaderContextList =
    searcher.getTopReaderContext().leaves();
for (LeafReaderContext leafReaderContext : leafReaderContextList)
{
    long min = leafReaderContext.docBase;
    long count = leafReaderContext
.
text_spell
en_AT
org.apache.solr.spelling.suggest.Suggester
org.apache.solr.spelling.suggest.fst.AnalyzingInfixLookupFactory
autosuggest_en_at
text_autocomplete
false
false
0.55
${solr.data.dir:./data}/${solr.core.name}/infix_suggester
false
Could you help me
.apache.lucene.codecs.lucene40.Lucene40CompoundReader
>overrides
>final method renameFile.
>
>Can you help me ?
>
>Thanks
--
Uwe Schindler
Achterdiek 19, 28357 Bremen
https://www.thetaphi.de
Hello,
I am indexing Java source code files. I need to know how to index or tokenize
camel-case words in identifiers, method names, classes, etc., e.g.
getSystemRequirements.
I am using Lucene 3.0.1.
Thank you,
--
Sincerely,
Andrés Fernando Wilches Riaño
Systems and Computing Engineer
E
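On a current Lucene, camel-case splitting is usually done with WordDelimiterGraphFilter (a sketch; in 3.0.1 that filter did not exist yet — the equivalent WordDelimiterFilter lived in Solr back then):

    Analyzer camelCaseAnalyzer = new Analyzer() {
        @Override
        protected TokenStreamComponents createComponents(String fieldName) {
            Tokenizer source = new WhitespaceTokenizer();
            int flags = WordDelimiterGraphFilter.GENERATE_WORD_PARTS
                      | WordDelimiterGraphFilter.SPLIT_ON_CASE_CHANGE
                      | WordDelimiterGraphFilter.PRESERVE_ORIGINAL;
            TokenStream result = new WordDelimiterGraphFilter(source, flags, null);
            result = new LowerCaseFilter(result);
            // "getSystemRequirements" -> get, system, requirements (plus the original token)
            return new TokenStreamComponents(source, result);
        }
    };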
Using TYPE.setDocValuesType(DocValuesType.SORTED), it works.
I didn't understand the reason. Maybe doc values are necessary for fast
grouping, or for sorting, so the algorithm can find the distinct groups.
2016-08-18 17:40 GMT+02:00 Cristian Lorenzetto <
cristian.lorenze...@gmail.com>:
> in my old code
>
> i create
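That matches how grouping works: the group key is read from doc values at search time, so the field needs a doc-values entry in addition to (or instead of) the indexed one, e.g.:

    Document doc = new Document();
    doc.add(new StringField("category", "books", Field.Store.YES));        // indexed, for filtering
    doc.add(new SortedDocValuesField("category", new BytesRef("books")));  // doc values, for grouping/sorting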