It looks like your pk_p and pk_c fields aren't indexed -- they just have
doc values.
If you try making them KeywordFields instead (so they're indexed and have
doc values), does it work?
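A minimal sketch of that change, assuming a Lucene version where KeywordField exists (9.6 or later); the field names pk_p/pk_c come from your mail, the values here are invented:

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.KeywordField;

public class KeywordFieldSketch {
    public static Document makeDoc(String parentKey, String childKey) {
        Document doc = new Document();
        // KeywordField both inverts the value (so TermQuery/join lookups work)
        // and writes sorted-set doc values in a single field.
        doc.add(new KeywordField("pk_p", parentKey, Field.Store.NO));
        doc.add(new KeywordField("pk_c", childKey, Field.Store.NO));
        return doc;
    }
}
```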
Also, the join module may be overkill for what you're trying to do, since
it looks like you're indexing parent/
or such need, where
> Token and Payload class both are not there now?
>
> Regards
> Rajib
>
> -Original Message-
> From: Uwe Schindler
> Sent: 10 February 2023 15:36
> To: java-user@lucene.apache.org
> Subject: Re: Need help for conversion code from Lucene 2.4.0 t
Sent: 10 February 2023 15:36
To: java-user@lucene.apache.org
Subject: Re: Need help for conversion code from Lucene 2.4.0 to 8.11.2
Hi,
the reason for this is that files in Lucene are always write-once. We
never ever change a file after it was written and committed in the
2-phase-commit. If you
file with the new logic.
Regards
Rajib
Regards
Rajib
-Original Message-
From: Uwe Schindler
Sent: 06 February 2023 16:46
To: java-user@lucene.apache.org
Subject: Re: Need help for conversion code from Lucene 2.4.0 to 8.11.2
Hi,
Since around Lucene 4 (maybe alre
output.close();
} catch(Exception e) {
e.printStackTrace();
}
==
Regards
Rajib
places.
Currently, this API is not there.
Could you please suggest how we can handle this API in 8.11.2?
Regards
Rajib
-Original Message-
From: Mikhail Khludnev
Sent: 01 February 2023 12:22
To: java-user@lucene.apache.org
Subject: Re: Need help for conversion code from Lucene 2.4.0 to
IndexWriter.optimize()
>
> Is there any similar concept in 8.11? If so, can you please help with APIs
>
org.apache.lucene.index.IndexWriter#addIndexes(org.apache.lucene.store.Directory...)
But it kicks merge underneath. Should be fine.
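Under 8.11 that combination might look like the sketch below; the paths and the analyzer choice are placeholders, and forceMerge(1) is only a rough stand-in for the old optimize():

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import java.nio.file.Paths;

public class MergeIndexes {
    public static void merge(String destPath, String... srcPaths) throws Exception {
        try (Directory dest = FSDirectory.open(Paths.get(destPath));
             IndexWriter writer = new IndexWriter(dest,
                     new IndexWriterConfig(new StandardAnalyzer()))) {
            Directory[] sources = new Directory[srcPaths.length];
            for (int i = 0; i < srcPaths.length; i++) {
                sources[i] = FSDirectory.open(Paths.get(srcPaths[i]));
            }
            writer.addIndexes(sources); // kicks off merging underneath
            writer.forceMerge(1);       // rough analogue of optimize(); optional
            writer.commit();
        }
    }
}
```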
===
>
> Regard
To: java-user@lucene.apache.org
Subject: RE: Need help for conversion code from Lucene 2.4.0 to 8.11.2
Hi Mikhail,
Thanks for your suggestion. It solved lots of cases today at my end. 😊
I need some more suggestions from your end. I am putting them together as below, one
b
Regards
Rajib
-Original Message-
From: Mikhail Khludnev
Sent: 29 January 2023 18:05
To: java-user@lucene.apache.org
Subject: Re: Need help for conversion code from Lucene 2.4.0 to 8.11.2
Hello,
You can use
> -Original Message-
> From: Mikhail Khludnev
> Sent: 19 January 2023 04:26
> To: java-user@lucene.apache.org
> Subject: Re: Need help for conversion code from Lucene 2.4.0 to 8.11.2
>
n 8.11.2.
Could you please suggest some way to extract all the Terms with an IndexReader,
or some alternative approach?
Regards
Rajib
-Original Message-
From: Mikhail Khludnev
Sent: 19 January 2023 04:26
To: java-user@lucene.apache.org
Subject: Re: Need help for conversion code from Lucene 2
Hello, Rajib.
The API has evolved since 2.4, but it should be clear:
https://lucene.apache.org/core/8_11_2/core/org/apache/lucene/index/package-summary.html#fields
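For 8.11.2 specifically, a term dump along the lines of the linked package docs might look like this sketch (MultiTerms gives a merged view over all segments; the field name is whatever you indexed):

```java
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiTerms;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.BytesRef;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class TermDump {
    /** Collects every term of one field across all segments (Lucene 8.x API). */
    public static List<String> termsOf(IndexReader reader, String field) throws IOException {
        List<String> out = new ArrayList<>();
        Terms terms = MultiTerms.getTerms(reader, field); // merged view
        if (terms == null) return out;                    // field absent
        TermsEnum it = terms.iterator();
        for (BytesRef term = it.next(); term != null; term = it.next()) {
            out.add(term.utf8ToString());
        }
        return out;
    }
}
```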
On Wed, Jan 18, 2023 at 1:11 PM Saha, Rajib
wrote:
> Hi All,
>
> We are in a process for conversion of Lucene from 2.4.0 to 8.11.2 for
Hello Rajib.
You can start from
https://lucene.apache.org/core/8_11_1/core/org/apache/lucene/analysis/package-summary.html#package.description
Also, it might make sense to look through the analysis/token mentions in
https://lucene.apache.org/core/8_11_1/MIGRATE.html and also MIGRATE.txt in
every (??
Hi Lokesh
IIUC each document (for example a shop description) has a longitude
and a latitude associated with it.
The user's search input is some keywords plus the user's geo location.
You use the keywords to search for the documents, and you would like to
use the user's geo location for
I have created a custom Collector extending SimpleCollector. I can see the
methods scoreMode() and collect(int doc).
I am seeing that the collect method is invoked by lucene with the child
docId. Am I moving in the right direction?
But to collect the values I would need the Document by using
read
Indeed you shouldn't load all hits, you should register a
org.apache.lucene.search.Collector that will aggregate data while matches
are being collected.
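A minimal sketch of such a collector, assuming the value you aggregate lives in a numeric doc-values field; the field name "price" here is invented for illustration:

```java
import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.NumericDocValues;
import org.apache.lucene.search.ScoreMode;
import org.apache.lucene.search.SimpleCollector;
import java.io.IOException;

/** Sums a numeric doc-values field while hits are collected,
 *  instead of loading stored documents per hit. */
public class SumCollector extends SimpleCollector {
    private NumericDocValues values;
    public long sum;

    @Override
    protected void doSetNextReader(LeafReaderContext context) throws IOException {
        // re-acquire doc values for each segment
        values = DocValues.getNumeric(context.reader(), "price");
    }

    @Override
    public void collect(int doc) throws IOException {
        if (values.advanceExact(doc)) {
            sum += values.longValue();
        }
    }

    @Override
    public ScoreMode scoreMode() {
        return ScoreMode.COMPLETE_NO_SCORES; // scores not needed for aggregation
    }
}
```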
Since you are already using a ToChildBlockJoinQuery, you should be able to
use it in conjunction with utility classes from lucene/facets. Have yo
Hi Adrien,
Thanks for the reply.
I am able to retrieve the child docIds using the ToChildBlockJoinQuery.
Now, to do aggregates, I need to find each document using
reader.document(int docID), right? If that is the case, won't getting all
the documents be a costly operation, and then fina
It's not straightforward as we don't provide high-level tooling to do this.
You need to use the BitSetProducer that you pass to the
ToParentBlockJoinQuery in order to resolve the range of child doc IDs for a
given parent doc ID (see e.g. how ToChildBlockJoinQuery does it), and then
aggregate over t
Hi Bhaskar:
For everyone's benefit, I hope you will collate the emails into a
wiki page and carry it forward. Meritocracy's might have rtfm'd the
whole thing.
With all respect:
Will
On 10/5/15 1:06 PM, Bhaskar wrote:
Hi,
Actually I am looking for auto complete only. Do we have auto sugg
Hi,
Actually I am looking for autocomplete only. Do we have an auto-suggest
module in Lucene?
Can you suggest some examples?
Thanks in advance.
Regards,
Bhaskar
On Mon, Oct 5, 2015 at 10:30 PM, Alessandro Benedetti <
benedetti.ale...@gmail.com> wrote:
> +1 on Jack,
> furthermore, are you taking ab
+1 on Jack,
furthermore, are you talking about search or autocomplete?
If you only need autocompletion on the term, maybe it's even better if you
take a look at the Lucene suggest module!
Cheers
2015-10-05 14:34 GMT+01:00 Jack Krupansky :
> Sounds like you need the edge n-gram filter at index t
Sounds like you need the edge n-gram filter at index time to index all of
the prefix strings for each term. Just be aware that using an n-gram filter
will explode the size of the index (all the extra terms)
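A sketch of such an index-time analyzer, assuming Lucene's EdgeNGramTokenFilter; the whitespace tokenizer keeps hyphenated codes like 143-00098 as one token, and the gram bounds 1/10 are arbitrary choices:

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.ngram.EdgeNGramTokenFilter;

/** Index-time analyzer that also emits prefixes: "143-00098" ->
 *  "1", "14", "143", ... so prefix input matches without wildcards. */
public class EdgePrefixAnalyzer extends Analyzer {
    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        WhitespaceTokenizer src = new WhitespaceTokenizer();
        TokenStream result = new LowerCaseFilter(src);
        // minGram=1, maxGram=10; true = also keep the original token
        result = new EdgeNGramTokenFilter(result, 1, 10, true);
        return new TokenStreamComponents(src, result);
    }
}
```

At query time you would analyze the user's input with a plain analyzer (no n-grams), so "143-000" matches the stored prefix grams directly.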
The standard tokenizer and word delimiter filter will split terms on
special characters, so
Curious if you've tried escaping with \ ie 143\-00098
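If the query goes through the classic QueryParser, its static escape helper can do that backslash-escaping for you (a small sketch):

```java
import org.apache.lucene.queryparser.classic.QueryParser;

public class EscapeDemo {
    public static void main(String[] args) {
        // QueryParser.escape backslash-escapes all query syntax characters,
        // so the hyphen is no longer parsed as query syntax.
        String escaped = QueryParser.escape("143-00098");
        System.out.println(escaped); // 143\-00098
    }
}
```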
On Monday, October 5, 2015, Bhaskar wrote:
> Hi,
>
>
> when I type 143-00098 I should get all matched results, i.e. (143-00098,
> 143-000981, 143-0009823). Also, if I type 143-000 then I should get (143-00098,
> 143-0009, 143-0001)
>
> Looks like the
data and the queries you want to do.
> Maybe
> >> use WhitespaceAnalyzer or better StandardAnalyzer as a first step. Be
> sure
> >> to reindex your data before querying. The Analyzer used on the search
> side
> >> must be the same like on the query side. If you want to
> > > >
> > >
> >
> https://lucene.apache.org/core/5_3_1/analyzers-common/org/apache/lucene/analysis/core/LetterTokenizer.html
> > > > >> "A LetterTokenizer is a tokenizer that divides text at
> non-letters.
> > > > That's
> > > >> to say, it defines tokens as maximal strings of adjacent letters, as
> > > >> defined by java.lang.Character.isLetter() predic
I'd suggest to first inform yourself about analysis and choose a
> better
> > >> one that suits your underlying data and the queries you want to do.
> > Maybe
> > >> use WhitespaceAnalyzer or better StandardAnalyzer as a first step. Be
> > sure
> > >> to r
be the same like on the query side. If you want to use wildcards, you
>> have to take care more, because wildcards are not really natural for "full
>> text search engine" and may cause inconsistent results.
>>
>> Uwe
>>
>> -
>> Uwe Schindler
>>
/www.thetaphi.de
eMail: u...@thetaphi.de
> -Original Message-
> From: Bhaskar [mailto:bhaskar1...@gmail.com]
> Sent: Wednesday, September 30, 2015 4:28 AM
> To: java-user@lucene.apache.org
> Subject: Re: Need help in alphanumeric search
>
> Hi Uwe,
>
> Below is my indexing
> searching.
>
> Uwe
>
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
>
>
> > -Original Message-
> > From: Erick Erickson [mailto:erickerick...@gmail.com]
> > Sent: Monday, Sep
You need to supply the definitions of this field from
your schema.xml file, both the and
Additionally, please provide the results of the query
you're trying with &debug=true appended.
The adminUI/analysis page is very helpful in these
situations as well. Select the appropriate core from
the dro
Thanks Lan for the reply.
cpn values are like 123-0049, 342-043, ab23-090, hedwsdg
My application works when I search for the inputs below:
1) ab*
2) hedwsdg
3) hed*
but it is not working for
1) 123*
2) 123-0049
3) ab23*
Note: if the search input has a number then it is not working.
Thanks in
Hi
Can you provide a few examples of values of cpn that a) are and b) are
not being found, for indexing and searching.
You may also find some of the tips at
http://wiki.apache.org/lucene-java/LuceneFAQ#Why_am_I_getting_no_hits_.2F_incorrect_hits.3F
useful.
You haven't shown the code that create
Hi,
Can you share the implementation of your analyzer. It might be the problem.
It will be helpful to share also a sample of your indexed documents.
Regards
Ameer
Hi everybody
UserDictionary is right.
I am using yahoo Japanese tokenizer API (日本語形態素解析) to teach my own user
dictionary.
http://developer.yahoo.co.jp/webapi/jlp/
On 2014/03/11, at 8:10, Rahul Ratnakar wrote:
> Worked perfectly for Japanese.
>
> I have the same issue with Chinese Analyzer, I am
Worked perfectly for Japanese.
I have the same issue with Chinese Analyzer, I am using SmartChinese
(lucene-analyzers-smartcn-4.6.0.jar) but I don't see a similar interface as
the Japanese analyzer. Is there an easy way to implement the same for
Chinese?
On Mon, Mar 10, 2014 at 3:26 PM, Rahul R
Thanks Robert. This was exactly what I was looking for, will try this.
On Mon, Mar 10, 2014 at 3:13 PM, Robert Muir wrote:
> You can pass UserDictionary with your own entries to do this.
>
> On Mon, Mar 10, 2014 at 3:08 PM, Rahul Ratnakar
> wrote:
> > Thanks Furkan, This is the exact tool that
You can pass UserDictionary with your own entries to do this.
On Mon, Mar 10, 2014 at 3:08 PM, Rahul Ratnakar
wrote:
> Thanks Furkan, This is the exact tool that I am using, albeit in my code, I
> have tried all search modes e.g.
>
> new JapaneseAnalyzer(Version.LUCENE_46, null, JapaneseTokenizer
Thanks Furkan, This is the exact tool that I am using, albeit in my code, I
have tried all search modes e.g.
new JapaneseAnalyzer(Version.LUCENE_46, null, JapaneseTokenizer.Mode.NORMAL,
JapaneseAnalyzer.getDefaultStopSet(), JapaneseAnalyzer.getDefaultStopTags())
new JapaneseAnalyzer(Version.LUCENE
Hi;
Here is the page of it that has a online Kuromoji tokenizer and
information: http://www.atilika.org/ It may help you.
Thanks;
Furkan KAMACI
2014-03-10 19:57 GMT+02:00 Rahul Ratnakar :
> I am trying to analyze some japanese web pages for presence of slang/adult
> phrases in them using lucen
There is also this page: http://wiki.apache.org/lucene-java/HowToContribute
2014-02-25 12:41 GMT+02:00 Furkan KAMACI :
> Hi;
>
> You can check this page: http://wiki.apache.org/solr/HowToContribute
>
> Thanks;
> Furkan KAMACI
>
>
> 2014-02-25 12:32 GMT+02:00 chandresh pancholi <
> chandreshpanch
Hi;
You can check this page: http://wiki.apache.org/solr/HowToContribute
Thanks;
Furkan KAMACI
2014-02-25 12:32 GMT+02:00 chandresh pancholi <
chandreshpancholi...@gmail.com>:
> Hi Fellow members,
>
> I am new to apache lucene community. i clone the svn repo to my local. I am
> planning to con
Hello Mike
We tried the following code, but it is giving null:
TopGroups hits =
    c.getTopGroups(
        productitemQuery,
        Sort.RELEVANCE,
        0,    // offset
        10,   // maxDocsPerGroup
        0,    // withinGroupOffset
        true  // fillSortFields
    );
On Thu, Jan 30, 2014 at 2:35 AM, Michael McCandless
After IndexSearcher.search you should call c.getTopGroups? See the
TestBlockJoin.java example...
Can you boil this down to a runnable test case, i.e. include
createProductItem/createProduct sources, etc.
Mike McCandless
http://blog.mikemccandless.com
On Thu, Jan 30, 2014 at 2:20 AM, Priyanka
Hi, I just responded on your previous thread about this ... maybe you
didn't see it (you need to subscribe to java-user@lucene.apache.org to
see responses).
Mike McCandless
http://blog.mikemccandless.com
On Fri, Sep 13, 2013 at 1:12 AM, nischal reddy
wrote:
> Hi,
>
> I am confused a bit about
Hi Vignesh,
This is a very broad question! The following links might help you:
- Lucene documentation: http://lucene.apache.org/core/4_1_0/index.html
- File formats:
http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/codecs/lucene41/package-summary.html#package_description
- The block t
Hi Yogesh,
I bet you are indexing A as an analyzed field and its values are getting
tokenized at each capital letter it finds. Try to index field A using
Field.Index.NOT_ANALYZED.
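With the 3.x-era API of that thread, the suggestion looks roughly like this (the field name "A" comes from the thread; the store flag is just one reasonable choice):

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

public class ExactField {
    public static Document makeDoc(String value) {
        Document doc = new Document();
        // NOT_ANALYZED indexes the whole value as a single term, so a
        // CamelCase value stays searchable verbatim instead of being
        // tokenized at each capital letter.
        doc.add(new Field("A", value, Field.Store.YES, Field.Index.NOT_ANALYZED));
        return doc;
    }
}
```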
On Sun, Apr 15, 2012 at 3:44 AM, Yogesh patel
wrote:
> Hi,
>
> I have read apache lucene tutorial and implemented in
highlight wild card query results from 2.2.0 onwards
also, I am confident that it will work in 2.4.1.
Thanks,
Vidya
-Original Message-
From: Ian Lea [mailto:ian@gmail.com]
Sent: Tuesday, October 18, 2011 1:52 PM
To: java-user@lucene.apache.org
Subject: Re: Need Help for Wild Card Query
Why 2.4.1? That is ancient and there have been many improvements since then.
Google finds hits for "lucene highlight wild card" some of which
contain some solutions some of which may or may not be relevant for
your problem.
--
Ian.
On Tue, Oct 18, 2011 at 8:17 AM, Vidya Kanigiluppai Sivasubra
Hi Grant,
Thanks for the reply.
I would definitely look into the Solr Deduplication approach. But since I am
using pure Lucene and not Solr, I am not sure how feasible it would be to
find something similar in Lucene or to duplicate it. But that looks to be the
way forward.
Also regarding the question a
I'd probably treat this as a deduplication problem and look to use a fuzzy
matching approach, such as the TextProfileSignature in Solr/Nutch:
http://wiki.apache.org/solr/Deduplication, which I believe is tunable as to
its threshold of acceptance.
I'd also likely give pushback on the notion of
Can someone please help with the logic that can be applied to decide on the
closeness requirement given below (like 50% matching). This matching is
pure text matching.
Since the current Lucene score does not translate into a percentage of
closeness, is there anything else that can give this info
: It is strange that I was suggested not to call commit explicitly and leave
: it to the lucene but it seems it has its own disadvantages.
as long as you commit/close the writer cleanly on shutdown you should be
fine ... i don't think you need to be so aggressive as to call it on every X
docs (un
Hi Ian
Thanks for looking into the issue. And you are right. Its not this code
which was causing the issue.
The issue was as follows: (I just successfully performed a test run)
*ISSUE:*
My code had following characteristics.
1. CREATE_OR_APPEND way of opening indexWriter.
2. No explicit call to
Code looks fine and will not zap the current contents of indexDir.
Something else must be - another call with OpenMode.CREATE? Where is
indexDir - could tomcat be zapping it on startup? Some other job?
--
Ian.
On Thu, Jul 28, 2011 at 8:12 PM, Saurabh Gokhale
wrote:
> Hi All,
>
> I am using f
You might want to look at ManifoldCF too.
http://incubator.apache.org/connectors/
Karl
-Original Message-
From: ext Marlen [mailto:zmach...@facinf.uho.edu.cu]
Sent: Tuesday, June 21, 2011 9:49 AM
To: java-user@lucene.apache.org
Subject: need help
I need to create a search engine that s
Hello Cheta,
Check this site : http://www.lucidimagination.com/blog/2009/03/09/nutch-solr/
Vinaya
-Original Message-
From: Marlen [mailto:zmach...@facinf.uho.edu.cu]
Sent: Tuesday, June 21, 2011 7:19 PM
To: java-user@lucene.apache.org
Subject: need help
I need to create a search engi
thank you very much Ian.
On 13/06/2011 9:17, Ian Lea wrote:
Hello
Lucene can be used for searching pretty much anything. But it is a
library, not an application, and you'll have to write code to make it
do what you want. You might be better off using Solr. It uses lucene
but provides lots of stuff on top.
http://lucene.apache.org/solr/features.html
-
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de
> -Original Message-
> From: dian puma [mailto:dianp...@gmail.com]
> Sent: Monday, October 25, 2010 6:03 AM
> To: java-user@lucene.apache.org
> Subject: Re: Need Help: Lucene with PHP
Hi.
I still have problem with it
My code worked well when I run it by command line, ex."php srcLucene.php"
But it didn't work on web browser, still got an error like this.
indexing ... Exception occured: [[o:Exception]:"java.lang.Exc
Ahaaa, you're right, I used lucene-core-3.0.1.jar and I got an error
because the IndexWriter constructor cannot work.
When I try to use the lower version, it works.
Thanks a lot
On Sun, Oct 24, 2010 at 6:52 AM, Uwe Schindler wrote:
> Are you sure that you use the same Lucene version? If you u
Are you sure that you use the same Lucene version? If you use latest (3.0.x)
now, then your IndexWriter ctor cannot work, because you have to call
FSDirectory.open() in java code first. Directly passing a native
java.io.File to IW is no longer possible. So maybe it simply does not find
the correct
Hi,
Have you compared your Java versions on these two boxes? Also the PHP version?
Did you run the indexer from the command line or from a browser?
I used the Zend Java bridge before and found that a too-low Java version may
cause problems.
Best regards, Lisheng
-Original Message-
From: dian puma [mailto:dianp.
What exactly is the problem? The standard idiom nowadays for
iterating through a List is what you're using in "for (Field field :
...)". I haven't used an iterator for a long time.
But perhaps your iteration code is working and the problem is in your
search code. The javadoc for search(query, n
thanks Jayendra...it was really helpful
On Sat, Aug 7, 2010 at 6:07 PM, jayendra patil wrote:
> Trying to put up an explanation :-
>
> 0.022172567 = (MATCH) product of:
> 0.07760398 = (MATCH) sum of:
> 0.02287053 = (MATCH) weight(payload:ces in 550), product of:
> 0.32539415 = queryWeight(
Trying to put up an explanation :-
0.022172567 = (MATCH) product of:
0.07760398 = (MATCH) sum of:
0.02287053 = (MATCH) weight(payload:ces in 550), product of:
0.32539415 = queryWeight(payload:ces), product of:
2.2491398 = *idf*(docFreq=157, maxDocs=551)
0.14467494 = queryNor
. In the "UserQuery" tag in the XSL there is a
>> "fieldName" tag which is set to "description". The "jobDescription"
>> default fieldname passed to the XML parser would only be in effect for
>> any tags that didn't specify a fieldName..
BTW, in the source distribution there are full "DTDdocs" for the XML syntax in
contrib\xml-query-parser\docs
Cheers
Mark
- Original Message
From: syedfa
To: java-user@lucene.apache.org
Sent: Wed, 23 December, 2009 5:03:00
Subje
I have found an error in the web.xml file, however, this DID NOT fix the
problem. Inside the web.xml file, there is the following snippet:
Default field used in standard Lucene QueryParser used
in UserQuery
tag
defaultSta
November 26, 2009 9:10:41 PM
> Subject: Re: Need help regarding implementation of autosuggest using jquery
>
> By the way , we search Chinese words, so Trie tree looks not perfect
> for us either
>
>
> 2009/11/27 fulin tang :
> > We have the same needs in our musi
", instead
>> with your prefix:
>> TermEnum tenum = reader.terms(new Term(field,prefix));
>>
>> And inside the while loop just break out,
>>
>> if (!termText.startsWith(prefix)) break;
>>
>> -----
>> Uwe Schindler
>> H.-H.-Meier-Allee 63, D-2821
remen
http://www.thetaphi.de
eMail: u...@thetaphi.de
> -Original Message-
> From: DHIVYA M [mailto:dhivyakrishna...@yahoo.com]
> Sent: Thursday, November 26, 2009 10:39 AM
> To: java-user@lucene.apache.org
> Subject: RE: Need help regarding implementation of autosugges
It starts displaying from the word matching the search word and ends up
with the last word available in the index file.
Kindly suggest a solution for this problem.
Thanks in advance,
Dhivya
--- On Wed, 25/11/09, Uwe Schindler wrote:
From: Uwe Schindler
Subject: RE: Need help
rding TermEnum
--- On Wed, 25/11/09, Uwe Schindler wrote:
From: Uwe Schindler
Subject: RE: Need help regarding implementation of autosuggest using jquery
To: java-user@lucene.apache.org
Date: Wednesday, 25 November, 2009, 9:54 AM
Hi Dhivya,
you can iterate all terms in the index using a TermEnum, that can be
retrieved using IndexReader.terms(Term startTerm).
If you are interested in all terms from a specific field, position the
TermEnum on the first possible term in this field ("") and iterate until the
field name changes
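Putting that TermEnum recipe together, a rough sketch with the 2.x/3.x-era API (field and prefix are whatever your suggester uses):

```java
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermEnum;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class PrefixSuggest {
    /** Seeks to the prefix and stops once terms leave the field
     *  or no longer start with the prefix. */
    public static List<String> suggest(IndexReader reader, String field, String prefix)
            throws IOException {
        List<String> out = new ArrayList<String>();
        // terms(Term) positions the enum at the first term >= (field, prefix)
        TermEnum tenum = reader.terms(new Term(field, prefix));
        try {
            do {
                Term t = tenum.term();
                if (t == null || !t.field().equals(field)
                        || !t.text().startsWith(prefix)) {
                    break; // past the last matching term
                }
                out.add(t.text());
            } while (tenum.next());
        } finally {
            tenum.close();
        }
        return out;
    }
}
```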
I would also recommend you to play around with analyzers and see what
they really do. It is crucial for the work with lucene to know how the
analyzing works, why the same analyzers gernally should be used for
search and indexing.
A nice and somewhat mandatory tool to get around with lucene is the
I would appreciate it if I can get help with the code as well.
If you want to tweak an existing example rather than coding entirely
from scratch the XMLQueryParser in /contrib has a demo web app for job
search with a "location" field similar in principle to your "state"
field plus it has a G
Hi Radha,
On 4/17/2009 at 6:19 AM, Radhalakshmi Sreedharan wrote:
> What I need is the following :
> If my document field is ( ab,bc,cd,ef) and Search tokens are
> (ab,bc,cd).
>
> Given the following :
> I should get a hit even if all of the search tokens aren't present
> If the tokens are f
On 4/17/2009 at 10:33 AM, Radhalakshmi Sreedharan wrote:
> > > I have a question related to SpanNearQuery.
> > >
> > > As of now, the SpanNearQuery has the constraint that all the
> > > terms need to present in the document.
[...]
> > > But [...] I need a hit even if there are 2/3 terms found with
On Friday 17 April 2009 16:33:27 Radhalakshmi Sreedharan wrote:
> Thanks Paul. Is there any alternative way of implementing this requirement?
Start from scratch, perhaps? Anyway, spans can be really tricky, so in
case you're writing code for this, I have only four pieces of advice: test,
test, test and test
To: java-user@lucene.apache.org
Subject: Re: Need help : SpanNearQuery
To avoid passing all combinations to a NearSpansQuery
some non trivial changes would be needed in the spans package.
NearSpansUnOrdered (and maybe also NearSpansOrdered)
would have to be extended to provide matching Spans when
e.
>
> I even overrode the queryNorm method to return a one, still the percentage
> did not increase.
>
> Any suggestions ?
> -Original Message-
> From: Radhalakshmi Sreedharan [mailto:radhalakshm...@infosys.com]
> Sent: Friday, April 17, 2009 12:37 PM
> To:
seFreq=0.3334)
0.61370564 = idf(SearchField: cd=1 ef=1)
1.0 = fieldNorm(field=SearchField, doc=0)
Regards,
Radha
-Original Message-
From: Steven A Rowe [mailto:sar...@syr.edu]
Sent: Thursday, April 16, 2009 10:35 PM
To: java-user@lucene.apache.org
Subject: RE: Need help :
Hi Radha,
On 4/16/2009 at 8:35 AM, Radhalakshmi Sredharan wrote:
> I have a question related to SpanNearQuery.
>
> I need a hit even if there are 2/3 terms found with the span being
> applied for those 2 terms.
>
> Is there any custom implementation in place for this? I checked
> SrndQuery but t
>
> > writer = new IndexWriter("C:\\", new StandardAnalyzer(), true);
> > Term term = new Term("line", "KOREA");
> > PhraseQuery query = new PhraseQuery();
> > query.add(term);
>
StandardAnalyzer - used here while indexing - applies lowercasing.
The query is created programmatically - i.e. without
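With that era's API, the programmatically built query only matches if the term is lowercased by hand to agree with what StandardAnalyzer wrote at index time, e.g.:

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.PhraseQuery;

public class LowercasedPhrase {
    public static PhraseQuery koreaQuery() {
        // A programmatic query bypasses analysis, so "KOREA" must be
        // lowercased manually to match the indexed token "korea".
        PhraseQuery query = new PhraseQuery();
        query.add(new Term("line", "korea"));
        return query;
    }
}
```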
java-user@lucene.apache.org
Subject: Re: Need help searching
What Analyzer is your searcher using?
C:\\ as the index location sounds "super funky".
Why not C:\\MyIndex , so your index files are not all mixed up with whatever
lives in C:\\
Otis
--
Sematext -- http://sematext.com/ -- L