During this change I had to change the way I store indexes. The change
results in many more .cfs and .fdt files being generated than before.
Previously there were 5-7 files in the index folder; now it has grown to 40+.
Does this change in how the indexes are stored internally affect anything?
// Method to create a document from a map of field names to values
private static Document createDocumentTextField(HashMap<String, String> fields) {
    Document document = new Document();
    for (String key : fields.keySet()) {
        String val = fields.get(key);
        Field f = new TextField(key, val, Field.Store.YES);
        document.add(f);
    }
    return document;
}
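On the file-count question above: the number of per-segment files on disk is mostly governed by the compound-file setting and the merge policy, and every flush adds a new segment. A rough sketch of where those knobs live (not from the original post; it assumes a recent Lucene API and an index directory named "index"):

import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.TieredMergePolicy;
import org.apache.lucene.store.FSDirectory;

IndexWriterConfig config = new IndexWriterConfig(new StandardAnalyzer());
config.setUseCompoundFile(true);                 // pack each flushed segment's files into a single .cfs
config.setMergePolicy(new TieredMergePolicy());  // controls how small segments get merged away

try (IndexWriter writer = new IndexWriter(FSDirectory.open(Paths.get("index")), config)) {
    // ... add documents here ...
    writer.forceMerge(1);  // optional and expensive: collapse everything into one segment
}

With the compound-file format on and segments merged, the directory normally shrinks back to a handful of files; forceMerge(1) is usually only worth it for indexes that no longer change.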
Yes, it would be great if you could share code snippets. Maybe it will
help others, or maybe someone will have a suggestion for an improvement or an
alternative.
All the best
Michael
On 29.04.21 at 14:35, amitesh116 wrote:
Thank you Michael!
I solved this requirement by setting the tokenStream at the field level and
not leaving it to the analyzer. This gives control over altering the full
text before tokenization using custom methods.
This has a memory overhead, which is handled by writing the documents one at a
time.
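For reference, here is a minimal sketch of what I understand by setting the TokenStream at the field level (the preprocess() helper, the "body" field name, and the writer variable are placeholders, not from the original post):

import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.TextField;

Analyzer analyzer = new StandardAnalyzer();
String altered = preprocess(rawText);  // custom rewrite of the full text before tokenization
TokenStream stream = analyzer.tokenStream("body", new StringReader(altered));

Document doc = new Document();
doc.add(new TextField("body", stream));  // the field carries its own, pre-built TokenStream
writer.addDocument(doc);                 // documents are written one at a time to limit memory use

Note that a TokenStream-valued field cannot also store the original text, so a separate StoredField would be needed if the raw value must be retrievable.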
Hi Amitesh
Thanks for the more concrete examples.
Unfortunately I do not know how to solve this better with Lucene itself
in a more general context, but did you ever consider using BERT in
combination with Lucene/Solr?
https://blog.google/products/search/search-language-understanding-bert/
Hi Gus, thank you for your reply!
In my search system, users are complaining that they get results containing
negation terms when they don't expect them, as explained in my original post.
Users don't want to get documents containing a term like "Non Vitamin K" when
they search for "Vitamin K".
But because each terms ar
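(Not from the original thread, but one common way to approximate this in Lucene is to exclude the negated phrase explicitly with a MUST_NOT clause; a rough sketch, assuming a "body" field and lowercased terms from a standard analyzer:)

import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.PhraseQuery;
import org.apache.lucene.search.Query;

BooleanQuery.Builder builder = new BooleanQuery.Builder();
builder.add(new PhraseQuery("body", "vitamin", "k"), BooleanClause.Occur.MUST);            // what the user asked for
builder.add(new PhraseQuery("body", "non", "vitamin", "k"), BooleanClause.Occur.MUST_NOT); // drop negated mentions
Query query = builder.build();

The trade-off is that any document containing the negated phrase anywhere is excluded entirely, even if it also mentions plain "Vitamin K", so whether this fits depends on the exact requirement.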
behaviour, which is
interesting from a psychological point of view and it would be
interesting to study it in more detail.
Coming to your actual question:
https://lucene.472066.n3.nabble.com/Negation-search-help-td4471842.html
It seems to me that your use case makes an assumption that the sear
I badly need some help on this one. Could someone please point me in the right direction?
Regards
Amitesh
The issue is solved. Luke was very helpful in debugging; in fact it helped to
identify a very basic mistake we were making.
I solved the issue by:
1. Using the same Analyzer.
2. Tokenizing terms at indexing time.
Now the issue with the following code is that I am facing problems, which I have
pasted after the code. I searched the forum but couldn't find a relevant post:
QueryParser parser = new QueryParser("Title", analyzer);
Que
On Tuesday 10 April 2007 08:40, Lokeya wrote:
> But when I try to get hits.length() it is 0.
>
> Can anyone point out what's wrong?
Please check the FAQ first:
http://wiki.apache.org/lucene-java/LuceneFAQ#head-3558e5121806fb4fce80fc022d889484a9248b71
Regards
Daniel
I have indexed the docs successfully under the directory "LUCENE" (under the
current directory), which contains segments, _1.cfs, and deletable. Now I am
trying to use the following code to search the index, but I am not getting any
hits. But when I try to read through a Reader and get the document with the field
mentioned
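For context, here is a minimal sketch of the classic search flow that matches the Hits-based API used in this thread (Lucene 2.x era; it assumes the index lives in "LUCENE", the documents were indexed on a "Title" field with the same analyzer, and exception handling is omitted):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

IndexSearcher searcher = new IndexSearcher("LUCENE");
QueryParser parser = new QueryParser("Title", new StandardAnalyzer()); // same analyzer as at index time
Query query = parser.parse("some title words");
Hits hits = searcher.search(query);
System.out.println("hits: " + hits.length());
if (hits.length() > 0) {
    Document doc = hits.doc(0);
    System.out.println(doc.get("Title"));
}
searcher.close();

If hits.length() comes back as 0, the usual suspects are using a different analyzer at query time than at index time, or searching an untokenized (keyword) field with an analyzed query.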
: That's what I'm doing now, but I was thinking that if I limit the number of
: results I get back, I can save query time. Maybe I'm wrong?
One thing that does slightly bug me about the way the Hits class works is
that the constructor (which is called by Searcher.search(Query)) calls
getMore
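As a sketch of the lower-level alternative (API of the same era, not taken from the thread), the searcher can be asked directly for a fixed number of results instead of going through Hits:

import org.apache.lucene.document.Document;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;

TopDocs topDocs = searcher.search(query, null, 1);   // null filter, return at most 1 result
if (topDocs.scoreDocs.length > 0) {
    ScoreDoc best = topDocs.scoreDocs[0];
    Document doc = searcher.doc(best.doc);           // fetch only the single best document
}

This puts an explicit cap on how many results are collected, rather than relying on Hits, which fetches results in batches and grows them lazily as you iterate.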
To: java-user@lucene.apache.org
Date: 11/09/2005 08:44 PM
Subject: Re: Search Help
On 9 Nov 2005, at 19:54, [EMAIL PROTECTED] wrote:
Is there a way to limit the number of hits I want returned?
Sometimes I
just want one document.
Is there an issue with just accessing hits.doc(0) in this case?
Erik
Is there a way to limit the number of hits I want returned? Sometimes I
just want one document.
~
Daniel Clark, Senior Consultant
Sybase Federal Professional Services