That depends on what you are trying to do.
You could create the StandardAnalyzer with your own stop word set, but that
set would then apply to all of your analyzed fields.
There is a PerFieldAnalyzerWrapper (I think that is the name) that lets you
set up a different analyzer per field.
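For example (a sketch against the Lucene 2.9 API; the field names and the contents of the custom stop set are made up for illustration):

```java
import java.util.HashSet;
import java.util.Set;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.KeywordAnalyzer;
import org.apache.lucene.analysis.PerFieldAnalyzerWrapper;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.util.Version;

public class PerFieldExample {
    public static Analyzer build() {
        // Custom stop set: note it deliberately does NOT contain "a",
        // so single-letter terms survive analysis.
        Set<String> stopWords = new HashSet<String>();
        stopWords.add("the");
        stopWords.add("of");

        Analyzer defaultAnalyzer = new StandardAnalyzer(Version.LUCENE_29, stopWords);

        // Wrap it so individual fields can use something different.
        PerFieldAnalyzerWrapper wrapper = new PerFieldAnalyzerWrapper(defaultAnalyzer);
        wrapper.addAnalyzer("id", new KeywordAnalyzer()); // leave IDs untokenized
        return wrapper;
    }
}
```

Pass the wrapper to both IndexWriter and QueryParser so indexing and search agree on per-field analysis.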
Philip Puffinburger wrote:
>
> I'm going to take a guess that you are using the StandardAnalyzer or
> another analyzer that removes stop words. 'a' is a stop word so is
> removed.
>
> On Jan 4, 2010, at 11:55 PM, sqzaman wrote:
>
>>
>> Hi,
>> I am using Java Lucene 2.9.1.
>> My problem is: when I parse the following query
>> name: zaman AND name:15 name:A
>> just the last "A" is skipped after parsing.
Hi,
I am using Java Lucene 2.9.1.
My problem is: when I parse the following query
name: zaman AND name:15 name:A
just the last "A" is skipped after parsing.
I found
query = (+name: zaman +name:15)
Why is "A" missing? Can anybody tell me the reason?
Need quick feedback.
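What is happening here: StandardAnalyzer lowercases each term and then drops anything in its stop set, and "a" is on the default English stop list, so name:A analyzes to nothing and the QueryParser drops the clause. A minimal pure-Java simulation of those two steps (the stop set below is a tiny made-up subset of the real list):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class StopWordDemo {
    // Tiny subset of StandardAnalyzer's default English stop words.
    static final Set<String> STOP = new HashSet<String>(
            Arrays.asList("a", "an", "and", "the", "of"));

    // Mimic the analyzer chain: lowercase, then drop stop words.
    static List<String> analyze(List<String> terms) {
        List<String> out = new ArrayList<String>();
        for (String t : terms) {
            String lower = t.toLowerCase();
            if (!STOP.contains(lower)) {
                out.add(lower);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // The three terms from the query: name:zaman AND name:15 name:A
        System.out.println(analyze(Arrays.asList("zaman", "15", "A"))); // [zaman, 15]
    }
}
```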
Probably best to ask on tika-u...@lucene.apache.org
On Jan 4, 2010, at 7:34 PM, Baldwin, David wrote:
> I need to get a handle on how much memory Tika needs to tokenize different
> file types. In other words, I need to find information on required overhead
> (including copies of buffers
I need to get a handle on how much memory Tika needs to tokenize different
file types. In other words, I need to find information on required overhead
(including copies of buffers made if applicable) so that I can produce some
kind of guidelines for memory possibly needed by users of the
Thanks for checking this out!
So my research was fine, and I fixed it the intuitive and, in my opinion,
"correct" way (not with hacks like using the thread's context class loader,
which you see so often on the internet, but which are counterproductive
because they often break the Java security model or break cla
Sorry for the delay. I was having a silly problem compiling Solr, but I
figured it out.
I tested it and it worked correctly. Thanks!
On Wed, Dec 30, 2009 at 8:31 PM, Uwe Schindler wrote:
> That would be good, if you could test it!
>
> Please checkout Lucene 2.9 branch from svn
> (http://svn.apach
Hi Uwe,
I implemented the changes you suggested. The index size was reduced a lot
because of the higher precisionStep value, but range query performance is
still slow, especially for queries with lots of matches. Also, I am now
indexing two fields: docdatetime (keeping the time portion with msec
precision) and docdate (
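For context on that size/speed trade-off: the trie encoding indexes each 64-bit value at roughly ceil(64 / precisionStep) precisions, so a higher precisionStep means fewer terms (smaller index) but coarser buckets (more terms to visit per range query). A back-of-the-envelope helper (an approximation that ignores the encoding details):

```java
public class TrieTerms {
    // Approximate number of index terms generated per 64-bit value.
    static int termsPerValue(int precisionStep) {
        return (64 + precisionStep - 1) / precisionStep; // ceil(64 / step)
    }

    public static void main(String[] args) {
        for (int step : new int[] {1, 2, 4, 8, 16, 64}) {
            System.out.println("precisionStep=" + step
                    + " -> ~" + termsPerValue(step) + " terms/value");
        }
    }
}
```

So going from precisionStep 4 to 8 roughly halves the terms written per value, at the cost of range queries having to enumerate more terms at the lowest precision.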
OK I've disabled this test for now, and killed the build.
Mike
On Mon, Jan 4, 2010 at 6:37 AM, Michael McCandless
wrote:
> I just kill -QUIT'd it. It's again in the TestBGSearchTaskThreads...
> somehow the lower priority is not carrying through to the search
> threads. I'll dig.
>
> Mike
>
Alas, this is a bug in CustomScoreQuery. I've opened this:
https://issues.apache.org/jira/browse/LUCENE-2190
With Lucene 2.9, we now search one segment at a time. So the rollback
to 0 that you're seeing is in fact due to a new segment being
searched. We need to fix CustomScoreQuery to notify
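The mechanics behind that rollback: with per-segment search, each segment's doc IDs restart at 0, and Lucene 2.9's Collector.setNextReader(reader, docBase) hands you the offset to add back when you need an index-wide ID. A toy illustration of the mapping (method and class names here are hypothetical, not Lucene API):

```java
public class DocBaseDemo {
    // Convert a per-segment doc ID to an index-wide (global) doc ID.
    static int toGlobal(int docBase, int segmentDocId) {
        return docBase + segmentDocId;
    }

    public static void main(String[] args) {
        // Two segments: the first holds docs 0..99 (docBase 0), so the
        // second segment starts at docBase 100. A hit at segment-local
        // doc 0 in the second segment is NOT global doc 0:
        System.out.println(toGlobal(100, 0)); // prints 100
        System.out.println(toGlobal(0, 42));  // prints 42
    }
}
```

Code that caches or compares doc IDs across segments without applying docBase sees exactly the "rollback to 0" symptom described above.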
On Mon, Jan 4, 2010 at 3:05 AM, Uwe Schindler wrote:
Just for info. Maybe we can this time get a stack trace.
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de
-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org