Hi :)
I would like to retrieve documents from my index that must have a
field called "tag" whose value is "(01) value1" or
"value2". I'm using the Lucene Java API.
My code is the following:
String expression = "tag:(\"(01) value1\" value2)"
QueryParser parser = new QueryParser
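For reference, here is a minimal, self-contained sketch of how such a query could be built (Lucene 3.x style; the WhitespaceAnalyzer and the version are assumptions, adjust to your setup). Inside the double quotes the parentheses are not treated as query syntax, so "(01) value1" is analyzed as a phrase and value2 as a plain term:

import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

public class TagQueryExample {
    public static Query buildTagQuery() throws ParseException {
        // With the default OR operator this matches documents whose tag field
        // contains either the phrase "(01) value1" or the term value2.
        String expression = "tag:(\"(01) value1\" value2)";
        QueryParser parser = new QueryParser(Version.LUCENE_35, "tag",
                new WhitespaceAnalyzer(Version.LUCENE_35));
        return parser.parse(expression);
    }
}

Whether the "(01)" part survives depends on the analyzer: WhitespaceAnalyzer keeps it as the literal token (01), while StandardAnalyzer would strip the parentheses, so the same analysis has to be applied at index time and at query time.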
Like Nischal, did you check that you don't call the commit() method
after each indexed document? :)
Regards,
Gary Long
On 27/10/2014 16:47, Jason Wu wrote:
Hi Fuad,
Thanks for your suggestions and quick response. I am using a single-threaded
indexing approach to add docs. I will try the multipl
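For what it's worth, a minimal sketch of the batched pattern being suggested here (Lucene 3.x-style API; class, field and variable names are placeholders): add all documents first and commit once at the end, instead of calling commit() after every addDocument().

import java.io.IOException;
import java.util.List;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.util.Version;

public class BatchIndexer {
    // Per-document commits force a flush and fsync every time and slow
    // indexing down dramatically; one commit for the whole batch is enough.
    public static void indexAll(Directory dir, List<String> texts) throws IOException {
        IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_36,
                new StandardAnalyzer(Version.LUCENE_36));
        IndexWriter writer = new IndexWriter(dir, config);
        try {
            for (String text : texts) {
                Document doc = new Document();
                doc.add(new Field("body", text, Field.Store.YES, Field.Index.ANALYZED));
                writer.addDocument(doc);   // no commit() inside the loop
            }
            writer.commit();               // single commit at the end
        } finally {
            writer.close();
        }
    }
}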
Token() to process the next token. This will
affect your positions so if you're doing phrase search you will need to adjust
the position attribute to account for the now-empty token.
-----Original Message-----
From: G.Long [mailto:jde...@gmail.com]
Sent: Thursday, October 09, 2014 7:54 AM
To:
Hi :)
I wrote a custom token filter which removes special characters.
Sometimes, all characters of the token are removed, so the filter
produces an empty token. I would like to remove this token from the
token stream but I'm not sure how to do that.
Is there something missing in my custom toke
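One way to do what is being described in this thread, sketched against the TokenFilter attribute API (the class name is invented): keep pulling tokens from the input until a non-empty one shows up, and fold the position increments of the skipped empty tokens into the token that is finally returned, so phrase queries still line up.

import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;

public final class DropEmptyTokensFilter extends TokenFilter {
    private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
    private final PositionIncrementAttribute posIncAtt =
            addAttribute(PositionIncrementAttribute.class);

    public DropEmptyTokensFilter(TokenStream input) {
        super(input);
    }

    @Override
    public boolean incrementToken() throws IOException {
        int skippedPositions = 0;
        while (input.incrementToken()) {
            if (termAtt.length() > 0) {
                // Give the surviving token the positions of the dropped empty
                // tokens so phrase and slop queries keep working.
                posIncAtt.setPositionIncrement(
                        posIncAtt.getPositionIncrement() + skippedPositions);
                return true;
            }
            skippedPositions += posIncAtt.getPositionIncrement();
        }
        return false;
    }
}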
92 (decimal 146) is listed as
"Private Use 2", so who knows what it might display as. All that is
important is the binary/hex value.
Out of curiosity, how did your application come about picking a PU
Unicode character?
-- Jack Krupansky
-----Original Message----- From: G.Long
Se
p://www.thetaphi.de
eMail: u...@thetaphi.de
-----Original Message-----
From: G.Long [mailto:jde...@gmail.com]
Sent: Monday, March 03, 2014 6:09 PM
To: java-user@lucene.apache.org
Subject: encoding problem when retrieving document field value
Hi :)
My index (Lucene 3.5) contains a field called title. It
Hi :)
My index (Lucene 3.5) contains a field called title. Its value is
indexed (analyzed and stored) with the WhitespaceAnalyzer and can
contain HTML entities such as ’ or °
My problem is that when I retrieve values from this field, some of the
HTML entities are missing.
For example:
Lu
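For context, the kind of setup being described, as a rough Lucene 3.5 sketch (method and variable names are invented). Lucene hands the stored string back unchanged, so if characters such as ’ or ° come back wrong, the usual suspects are the encoding of the text before it reaches the IndexWriter, or the encoding used to display the retrieved value:

import java.io.IOException;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;

public class StoredTitleExample {
    // Indexing side: analyzed for search, stored so the original text comes back.
    public static void addTitle(IndexWriter writer, String title) throws IOException {
        Document doc = new Document();
        doc.add(new Field("title", title, Field.Store.YES, Field.Index.ANALYZED));
        writer.addDocument(doc);
    }

    // Retrieval side: the stored value is returned as-is.
    public static String getTitle(Directory dir, int docId) throws IOException {
        IndexReader reader = IndexReader.open(dir);
        try {
            return reader.document(docId).get("title");
        } finally {
            reader.close();
        }
    }
}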
the match
exact rather than fuzzy.
What terms does your index have? XV, Xv, xV, xv? XV~0.7 may only match
XV.
-- Jack Krupansky
-----Original Message----- From: G.Long
Sent: Thursday, February 27, 2014 12:15 PM
To: java-user@lucene.apache.org
Subject: Fuzzy query on capital letters does
Hi :)
In my Lucene index, there are documents with a title field. Values of
this field are indexed with a WhitespaceAnalyzer. When I search for
documents, I create a boolean query which includes fuzzy queries for the
title. The final query looks like: +tnc_title:portant~0.7
+tnc_title:créati
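A small sketch of what Jack is pointing at (the field name is taken from the query above, everything else is assumed): with a minimum similarity of 0.7, a two-letter term such as XV allows an edit distance of at most 2 * (1 - 0.7) = 0.6, i.e. no edits at all, so the fuzzy query behaves like an exact, case-sensitive term match on a WhitespaceAnalyzer-indexed field.

import org.apache.lucene.index.Term;
import org.apache.lucene.search.FuzzyQuery;
import org.apache.lucene.search.Query;

public class TitleFuzzyQuery {
    // For short terms a high minimum similarity leaves no room for edits:
    // "XV"~0.7 effectively only matches the term XV, with the same case.
    public static Query fuzzyTitle(String term) {
        return new FuzzyQuery(new Term("tnc_title", term), 0.7f);
    }
}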
Hi :)
I'm using Lucene 3.1 and I would like to use a fuzzy query on a field
which contains the title of my document. This field is indexed and
stored using the StandardAnalyzer.
I found the ComplexPhraseQueryParser class, which seems to support the fuzzy
option. Here is the code I'm using to creat
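In case it helps, a hedged sketch of using ComplexPhraseQueryParser (in Lucene 3.x it ships in a contrib jar; the field name, analyzer and example phrase are assumptions): unlike the normal QueryParser, it accepts fuzzy and wildcard terms inside a quoted phrase.

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.queryParser.complexPhrase.ComplexPhraseQueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

public class FuzzyTitlePhrase {
    public static Query parseTitle(String phrase) throws ParseException {
        ComplexPhraseQueryParser parser = new ComplexPhraseQueryParser(
                Version.LUCENE_31, "title", new StandardAnalyzer(Version.LUCENE_31));
        // e.g. phrase = "\"user~0.7 guide*\"" : fuzzy and wildcard terms are
        // allowed inside the quotes.
        return parser.parse(phrase);
    }
}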
Use Luke to see what actually has been indexed.
Look at Query.toString() to see how the query has been parsed.
Read the bit of the FAQ titled something like "Why are my searches not
working?".
--
Ian.
On Wed, Nov 7, 2012 at 3:50 PM, G.Long wrote:
Hi :)
I would like the "text"
Hi :)
I would like the "text" field of my index to be case-insensitive.
I'm using a PerFieldAnalyzerWrapper with a StandardAnalyzer for this
field for both indexing and querying. I read that StandardAnalyzer uses
LowerCaseFilter to lowercase the value of the field, but when I run a
query, it do
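A minimal sketch of that setup (the default analyzer and the version are placeholders): the important part is that the very same wrapper, with StandardAnalyzer mapped to "text", is used both for indexing and for parsing queries, since StandardAnalyzer only lowercases text that actually passes through it.

import org.apache.lucene.analysis.KeywordAnalyzer;
import org.apache.lucene.analysis.PerFieldAnalyzerWrapper;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.util.Version;

public class CaseInsensitiveTextSetup {
    public static PerFieldAnalyzerWrapper buildAnalyzer() {
        PerFieldAnalyzerWrapper analyzer =
                new PerFieldAnalyzerWrapper(new KeywordAnalyzer());
        // "text" is lowercased by StandardAnalyzer at index time AND at query
        // time; all other fields fall back to the default analyzer.
        analyzer.addAnalyzer("text", new StandardAnalyzer(Version.LUCENE_36));
        return analyzer;
    }
}

If the query is built by hand with a TermQuery instead of going through a QueryParser that uses this analyzer, nothing lowercases the search term, which is the usual reason a query like text:Foo finds nothing.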
On 25/06/2012 15:59, Deshpande, Vikas wrote:
-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org
Hi :)
Just send an e-mail to java-user-unsub
Hi :)
Did you try with "\\-1234"?
Regards
On 24/04/2012 13:40, S Eslamian wrote:
Thank you, but when I search this: Query termQuery = new TermQuery
("field","\-1234"); I get this exception:
Invalid escape sequence (valid one are \b \t \n \f \r \" \' \\)
Am I making a mistake in creating my
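For what it's worth, a short sketch of the two cases (field name and value taken from the messages above): a TermQuery is built from a Term object and is never parsed, so no query-syntax escaping is needed there at all; the double backslash (or QueryParser.escape) only matters when the string goes through QueryParser, where a leading '-' would otherwise mean NOT.

import org.apache.lucene.analysis.KeywordAnalyzer;
import org.apache.lucene.index.Term;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.util.Version;

public class DashedValueQueries {
    // No parsing involved: the term text is taken literally.
    public static Query exact() {
        return new TermQuery(new Term("field", "-1234"));
    }

    // Through the parser the value must be escaped, e.g. field:\-1234.
    public static Query parsed() throws ParseException {
        QueryParser parser = new QueryParser(Version.LUCENE_36, "field",
                new KeywordAnalyzer());
        return parser.parse("field:" + QueryParser.escape("-1234"));
    }
}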
n the
org.apache.lucene.analysis.path package (in lucene-analyzers.jar).
In your case, it looks like ReversePathHierarchyTokenizer might be what
you want, though you will need to upgrade to at least 3.2 to get it.
On Mon, Feb 13, 2012 at 11:38 AM, G.Long wrote:
Hi,
Is there a way to improve query performance
Hi,
Is there a way to improve query performance when using a leading * as a
wildcard on a path property?
I have hundreds of queries to run on a Lucene index (~250 MB). Executing
those queries without the leading * is about 5x faster than with the
leading *. My problem is that I sometimes need
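One common workaround, sketched here by hand (field names are invented; it is the same idea behind the ReversePathHierarchyTokenizer suggestion in this thread): index a second field containing the reversed path, so the expensive leading-* query becomes a cheap trailing-* prefix query on the reversed field.

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.Query;

public class ReversedPathField {
    // Index time: keep the original path and add its reversed form.
    public static Document toDocument(String path) {
        Document doc = new Document();
        doc.add(new Field("path", path, Field.Store.YES, Field.Index.NOT_ANALYZED));
        doc.add(new Field("path_rev", new StringBuilder(path).reverse().toString(),
                Field.Store.NO, Field.Index.NOT_ANALYZED));
        return doc;
    }

    // Query time: "*foo/bar.txt" on path becomes a prefix query on path_rev,
    // which does not have to enumerate every term in the index.
    public static Query endsWith(String suffix) {
        String reversed = new StringBuilder(suffix).reverse().toString();
        return new PrefixQuery(new Term("path_rev", reversed));
    }
}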
tiple terms whereas you've only got the one - because
you used KeywordAnalyzer.
--
Ian.
On Wed, Sep 7, 2011 at 1:35 PM, G.Long wrote:
Hi Rick :)
I found the problem, but I think it needs an explanation.
In the code where the query was created, there was a piece of code as
follo
u open in Luke. You'd
be amazed how much time I've spent tracking down
mistakes like that.
Best
Erick
"It's not the things you don't know that'll kill you, it's the things
you *do* know that aren't true".
On Wed, Sep 7, 2011 at 5:47 AM, G.Long w
Hi :)
I have a Lucene index with fields analyzed with KeywordAnalyzer. In my
Java program, I search for a document by creating a query with two
boolean parameters like: +param1:"foo" +param2:"bar"
The query returns no results, but if I run the same query with Luke, it
returns the result I'm l
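A sketch of building that query programmatically, which sidesteps the analyzer mismatch Ian describes above (field names from the message, values are placeholders): with KeywordAnalyzer each field holds its whole value as one single term, so the query terms have to be exactly those strings, not whatever another analyzer makes of them.

import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class ExactParamsQuery {
    // Equivalent to +param1:"foo" +param2:"bar", but the terms are used
    // verbatim instead of being re-analyzed by a QueryParser.
    public static Query build(String p1, String p2) {
        BooleanQuery query = new BooleanQuery();
        query.add(new TermQuery(new Term("param1", p1)), BooleanClause.Occur.MUST);
        query.add(new TermQuery(new Term("param2", p2)), BooleanClause.Occur.MUST);
        return query;
    }
}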
Hi :)
You can send an email to java-user-unsubscr...@lucene.apache.org to
unsubscribe from the Lucene java-user mailing list :)
Regards,
Gary
On 20/07/2011 03:46, 郭大伟 wrote:
Hello,
I'm receiving more than 50 e-mails per day, which are sent by
java-user-return-50172-kavguodawei=126...
Thank you for your advice :)
I'll try this.
Regards,
Gary
On 21/06/2011 22:28, Danny Lade wrote:
IMO, a "reversed word Index" does not work in this case, because he's looking
for a word in the middle (See curi*).
Another idea is to build word chunks and save them in a second index plus d
neFAQ#What_wildcard_search_support_is_available_from_Lucene.3F
Be sure to heed the warnings about performance.
--
Ian.
On Tue, Jun 21, 2011 at 4:27 PM, G.Long wrote:
Hi :)
I've got the following text indexed with SimpleAnalyzer:
"security is a real problem."
If I try to search for secu*, it will find the document. But if
Hi :)
I've got the following text indexed with SimpleAnalyzer:
"security is a real problem."
If I try to search for secu*, it will find the document. But if I try to
search for curi*, there are no results.
I read that it's not possible to add a * wildcard at the beginning of the
query so wh
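A leading wildcard is actually supported, it is just disabled by default because of the cost; QueryParser has to be told to accept it. A minimal sketch (field name and version are assumptions), with the performance caveat from the FAQ in mind:

import org.apache.lucene.analysis.SimpleAnalyzer;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

public class LeadingWildcardSearch {
    public static Query parse(String text) throws ParseException {
        QueryParser parser = new QueryParser(Version.LUCENE_31, "body",
                new SimpleAnalyzer(Version.LUCENE_31));
        // Off by default: a leading * forces Lucene to enumerate every term
        // in the field, which can be very slow on large indexes.
        parser.setAllowLeadingWildcard(true);
        return parser.parse(text);   // e.g. "*curi*" now parses and matches "security"
    }
}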
Ok, I'll try this.
But will it work if one of the fields has no analyzer assigned?
For example, field1 is associated with a KeywordAnalyzer, field2 with a
StandardAnalyzer and field3 has no analyzer because it was indexed as
Field.Index.NOT_ANALYZED. Is there something to specify in the
co
Hi :)
I know it is possible to create a query on different fields with
different analyzers with the PerFieldAnalyzerWrapper class, but is it possible to
also include fields which are not analyzed?
I want some fields not to be tokenized (the exact reference of an article,
for example) and others to be tok
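One common way to handle that, sketched below (field names are invented): a field indexed with Field.Index.NOT_ANALYZED holds its value as a single exact term, so mapping it to KeywordAnalyzer in the PerFieldAnalyzerWrapper used at query time keeps the search string in one piece as well, while the tokenized fields keep their normal analyzers. Note that KeywordAnalyzer does not lowercase, so the query value must match the indexed value exactly, including case.

import org.apache.lucene.analysis.KeywordAnalyzer;
import org.apache.lucene.analysis.PerFieldAnalyzerWrapper;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.util.Version;

public class MixedAnalyzers {
    public static PerFieldAnalyzerWrapper forQuerying() {
        // Tokenized fields go through the default StandardAnalyzer...
        PerFieldAnalyzerWrapper wrapper =
                new PerFieldAnalyzerWrapper(new StandardAnalyzer(Version.LUCENE_31));
        // ...while "reference" was indexed NOT_ANALYZED (one exact term), so
        // KeywordAnalyzer keeps the query string as one term too.
        wrapper.addAnalyzer("reference", new KeywordAnalyzer());
        return wrapper;
    }
}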
Hi :)
In my index, there are documents like :
doc { question: 1, response: 1, word: excellent }
doc { question: 1, response: 1, word: great }
doc { question: 1, response: 2, word: bad }
doc { question: 1, response: 2, word: excellent }
doc { question: 2, response: 1, word: car }
doc { question: 2, resp
) ?
Regards,
On 30/05/2011 17:25, bmdakshinamur...@gmail.com wrote:
I think you are looking for the KeywordAnalyzer.
http://lucene.apache.org/java/3_0_2/api/core/org/apache/lucene/analysis/KeywordAnalyzer.html
On Mon, May 30, 2011 at 8:48 PM, G.Long wrote:
Hello :)
I'm wondering which Ana
Hello :)
I'm wondering which Analyzer would be best to query an exact value for
a property. I read the Javadoc and it said that when a document is
indexed, I could use Field.Index.NOT_ANALYZED to store the value as-is
and then I would be able to query for it. But at the same time, I
n
I set the article field to NOT_ANALYZED and I didn't quote the article
values in the range part of the query, and it looks like it works better now.
However, some results are still missing. For example, sometimes a range
like [l220-2 TO l220-10] will not return any results (although I'm sure
t
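One detail worth spelling out here: a range on a plain text field is lexicographic, and in that ordering "l220-10" sorts before "l220-2" (characters are compared one by one and '1' < '2'), so the range [l220-2 TO l220-10] is empty. A tiny illustration, plus the usual zero-padding workaround (the pad helper is invented and assumes the number comes after the last '-'):

public class ArticleRangeOrdering {
    public static void main(String[] args) {
        // Lexicographic comparison, the same ordering used for text terms:
        System.out.println("l220-10".compareTo("l220-2"));   // negative: l220-10 < l220-2

        // Zero-pad the numeric part at index AND query time so the
        // lexicographic order matches the numeric order.
        System.out.println(pad("l220-2"));    // l220-0002
        System.out.println(pad("l220-10"));   // l220-0010
    }

    // Invented helper: pads the part after the last '-' to four digits.
    static String pad(String article) {
        int i = article.lastIndexOf('-');
        return article.substring(0, i + 1)
                + String.format("%04d", Integer.parseInt(article.substring(i + 1)));
    }
}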
I added a StandardAnalyzer and a QueryParser to parse each boolean
clause of my query and I got some results :)
But now there are some strange behaviors.
The following queries:
+code:CCOM +article:"l123-12"
+code:CCOM +article:"l123-13"
+code:CCOM +article:"l123-14"
return one result.
Howe
Hi Uwe :)
Thank you for your answer! Now I have another problem. Here is the code
I use to query the index:
ScoreDoc[] hits = null;
TopFieldCollector collector = TopFieldCollector.create(new
Sort(SortField.FIELD_DOC), 20, true, false, false, false);
Directory directory =
Hi there :)
I would like to perform a range query on a Lucene index. I'm using
the Lucene 3.1 API.
I looked at the Javadoc and found a RangeQueryNode but I'm not sure how
to use it.
I've got a field "article" in my index which is indexed this way:
entry.add(new Field("article", article, Field.S
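In case it is useful, a minimal sketch with TermRangeQuery, which is the programmatic way to express a range over a plain text field in Lucene 3.1 (the field name is taken from the message, the bounds are invented); the query parser equivalent is article:[lower TO upper]. Keep in mind the comparison is lexicographic, so the values need a consistent, sortable format.

import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermRangeQuery;

public class ArticleRange {
    // field, lower term, upper term, includeLower, includeUpper
    public static Query between(String lower, String upper) {
        return new TermRangeQuery("article", lower, upper, true, true);
    }
}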