Ganesh,
do you reuse your Document instances in any way or do you create new
docs for each add?
simon
On Tue, Feb 2, 2010 at 7:18 AM, Ganesh wrote:
> I am getting the below exception while adding documents. I am adding documents
> continuously and at some point I am getting the below exception. T
Try calling rewrite() on the query object to expand it, then call toString() on
the result.
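Something like this (untested; assumes you already have an IndexReader open on
the index, and "field" is just a placeholder default field):

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Query;

Query expanded = query.rewrite(reader); // expands wildcard/prefix terms against the index
System.out.println(expanded.toString("field"));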
Cheers,
Mark
-
On 1 Feb 2010, at 21:32, "Haghighi, Nariman" wrote:
> We are relying on the ComplexPhraseQueryParser for some impressive
> matching capabilities.
>
> Of concern is that Wildcard Queries,
I am getting the below exception while adding documents. I am adding documents
continuously, and at some point I am getting the below exception. This exception
is not occurring with v2.9.0.
Exception: Index: 21, Size: 2
java.util.ArrayList.RangeCheck(Unknown Source)
java.util.ArrayList.get(Unknown
We are relying on the ComplexPhraseQueryParser for some impressive matching
capabilities.
Of concern is that Wildcard Queries, of the form "quality operations providing
quality food services job requirements: click here to apply for this job*", for
instance, take 2-5 seconds to execute and requ
This may be what I am looking for. We are using the default value, which
is true.
Let me examine this method further.
Thanks for your help.
> From: digyd...@gmail.com
> To: java-user@lucene.apache.org
> Subject: RE: During the wild card search, will lucene 2.9.0 to convert the
> search st
Did you try queryParser.setLowercaseExpandedTerms(false)?
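For example (untested sketch; the 2.9 constructor and KeywordAnalyzer here are
just for illustration):

import org.apache.lucene.analysis.KeywordAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

QueryParser parser = new QueryParser(Version.LUCENE_29, "title", new KeywordAnalyzer());
parser.setLowercaseExpandedTerms(false); // keep wildcard/prefix terms exactly as typed
Query q = parser.parse("Title*");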
DIGY
-Original Message-
From: java8964 java8964 [mailto:java8...@hotmail.com]
Sent: Monday, February 01, 2010 8:11 PM
To: java-user@lucene.apache.org
Subject: RE: During the wild card search, will lucene 2.9.0 to convert the
searc
QueryParser has a special capability to lowercase wildcard and prefix
queries, simply because they are not passed to an analyzer. Term
queries, phrase queries (like your example), etc. are passed on to the
analyzer. You are using the KeywordAnalyzer for the title field, and
thus it is not
I would like to confirm your reply. You mean that the query parser will do the
lowercasing. In fact, it looks like it only does this for wildcard queries,
right?
For term queries it doesn't, as can be seen if you change the line to:
Query query = new QueryParser("title", wrapper).pars
Only the query parser does the lowercasing. For such a special case, I would
suggest using a PrefixQuery or WildcardQuery directly rather than the query parser.
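For example (untested; "title" and the terms are placeholders):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.WildcardQuery;

Query prefix = new PrefixQuery(new Term("title", "Java"));      // matches Java*
Query wildcard = new WildcardQuery(new Term("title", "J?va*")); // used exactly as given, no lowercasing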
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de
> -Original Message-
> From:
I noticed a strange result from the following test case. For wildcard search,
my understanding is that Lucene will NOT use any analyzer on the query string.
But as the following simple code shows, it looks like Lucene will lowercase
the search query in the wildcard search. Why? If not,
Hello Erik, Mark and Phan:
Thanks all of you for the reply, I'll check all of you have said.
greetings,
Carmen
--
View this message in context:
http://old.nabble.com/Search-for-more-than-one-term-tp27348933p27406592.html
Sent from the Lucene - Java Users mailing list archive at Nabble.com
Sounds like a job for near realtime search aka NRT.
Take a look at IndexWriter.getReader().
http://wiki.apache.org/lucene-java/NearRealtimeSearch
http://www.lucidimagination.com/blog/2009/04/10/real-time-search-with-lucene/
And more with the help of your favourite search engine.
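A rough sketch of the pattern (untested; assumes an IndexWriter "writer" that
you keep open and also use for the adds):

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.search.IndexSearcher;

IndexReader reader = writer.getReader();          // near-real-time reader, sees recent adds
IndexSearcher searcher = new IndexSearcher(reader);
// ... run searches ...
IndexReader newReader = reader.reopen();          // cheap refresh to pick up newer adds
if (newReader != reader) {
  reader.close();
  reader = newReader;
}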
--
Ian.
On Mo
Hi, I am new to Lucene. I have a query string of multiple terms. i) I want to
return the query string with stop words removed and a stemmed version of the
query. ii) Second, I want to get the tf and idf of each term in the query; how do I get them?
Asif
_
Hi,
I want to search an index and at the same time continue my indexing.
ParallelReader doesn't solve my problem.
It is obvious that I am not searching multiple indexes at the same time.
How can I build a document-based lock? Moreover,
I don't want to open and close an index every time whil
Hmm
My Analyzer is a dictionary-based Analyzer, so it only recognizes
tokens in its dictionary. Adding every URL (or domain) is not a viable
solution.
So how could I include that in my analyzer? A Lucene Filter? A FilterReader?
Thanks,
--
Franz Allan Valencia See | Java Software Engineer
Please read Uwe's answers again. I think he has already answered all
your questions.
The javadocs for 2.9.1 are very useful when upgrading to 3.0.0. Read
the entry for Field.Store.COMPRESS.
--
Ian.
On Mon, Feb 1, 2010 at 12:04 PM, Suraj Parida wrote:
>
> Uwe,
>
> Thanks for the reply.
>
>
Uwe,
Thanks for the reply.
I am confused with
document.add(new Field(key, value, Field.Store.COMPRESS,
Field.Index.ANALYZED));
My requirement is also the same, but how can I do it in 3.0?
I thought CompressionTools would be used for compression.
Basically I need to compress the text
I forgot:
To also index those fields, add them a second time with only indexing enabled and
the same name:
String value = "Some large text .. ";
byte[] valuesbyte = CompressionTools.compress(value.getBytes());
Field f = new Field(key, valuesbyte, Field.Store.YES);
doc.add(f); // the stored-only field
Compression is only used for *stored* fields. For indexing there is no
compression available (how would that work?). You must clearly differentiate
between stored and indexed fields!
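A minimal sketch of the two-field approach (untested; 3.0 APIs, and the field
name "body" is just a placeholder):

import org.apache.lucene.document.CompressionTools;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

String value = "Some large text .. ";
Document doc = new Document();
// stored, compressed copy -- retrievable but not searchable
doc.add(new Field("body", CompressionTools.compressString(value), Field.Store.YES));
// indexed, analyzed copy -- searchable but not stored
doc.add(new Field("body", value, Field.Store.NO, Field.Index.ANALYZED));

At retrieval time you would get the text back with something like
CompressionTools.decompressString(doc.getBinaryValue("body")).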
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de
> -O
Hi,
I want to compress a text field (due to its large size and spaces) during
indexing.
I am unable to get this to work, and I also want to be able to search it.
My code for compressing is as follows:
String value = "Some large text .. ";
byte[]
Have you considered the function query stuff?
oal.search.function.CustomScoreQuery and friends.
If you provide your own CustomScoreQuery implementation you can do
scoring however you like.
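An untested sketch, assuming a numeric "popularity" field you want folded into
the score (class and field names are placeholders):

import org.apache.lucene.search.Query;
import org.apache.lucene.search.function.CustomScoreQuery;
import org.apache.lucene.search.function.FieldScoreQuery;
import org.apache.lucene.search.function.ValueSourceQuery;

public class PopularityQuery extends CustomScoreQuery {
  public PopularityQuery(Query subQuery, ValueSourceQuery popularity) {
    super(subQuery, popularity);
  }
  @Override
  public float customScore(int doc, float subQueryScore, float valSrcScore) {
    // combine however you like; here: text relevance boosted by popularity
    return subQueryScore * (1.0f + valSrcScore);
  }
}

// usage:
// Query q = new PopularityQuery(textQuery,
//     new FieldScoreQuery("popularity", FieldScoreQuery.Type.FLOAT));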
--
Ian.
On Mon, Feb 1, 2010 at 7:08 AM, Dennis Hendriksen
wrote:
> Hi Steve,
>
> Thank you for your sugg
If you make com a stop word then you won't be able to search for it,
but a search for fubar should have worked. Are you sure your analyzer
is doing what you want? You don't tell us what analyzer you are
using.
Tips:
use Luke to see what has been indexed
read the FAQ entry
http://wiki.apache.
> ...
> Is there some convenient way to compare Lucene Documents?
Not that I know of.
> I want to check if I should update a document based on whether field values
> have changed and whether fields have been added or removed.
>
> Is it as simple as:
>
> newDoc.equals(oldDoc)
No!
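Document doesn't override equals(), so you'd have to roll your own comparison,
something along these lines (untested sketch; compares stored field
names/values only and is order-sensitive):

import java.util.List;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Fieldable;

static boolean sameFields(Document a, Document b) {
  List<Fieldable> fa = a.getFields();
  List<Fieldable> fb = b.getFields();
  if (fa.size() != fb.size()) return false;
  for (int i = 0; i < fa.size(); i++) {
    Fieldable x = fa.get(i), y = fb.get(i);
    if (!x.name().equals(y.name())) return false;
    String vx = x.stringValue(), vy = y.stringValue();
    if (vx == null ? vy != null : !vx.equals(vy)) return false;
  }
  return true;
}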
> I don't need