The simplest way would be a CollectorDelegate that wraps an existing
collector and checks a boolean before calling the delegate's collect
method.
simon
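
A minimal sketch of that idea against the Lucene 3.x Collector API (StoppableCollector is an illustrative name, not an existing Lucene class):

import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.Scorer;

// Wraps any Collector and aborts once stop() has been called.
public class StoppableCollector extends Collector {
  private final Collector delegate;
  private volatile boolean stopped = false;

  public StoppableCollector(Collector delegate) {
    this.delegate = delegate;
  }

  // Call from another thread to stop the search.
  public void stop() { stopped = true; }

  @Override
  public void collect(int doc) throws IOException {
    if (stopped) {
      // Throwing an unchecked exception aborts the search loop; this is the
      // same trick Lucene's own TimeLimitingCollector uses for timeouts.
      throw new RuntimeException("search stopped");
    }
    delegate.collect(doc);
  }

  @Override
  public void setScorer(Scorer scorer) throws IOException {
    delegate.setScorer(scorer);
  }

  @Override
  public void setNextReader(IndexReader reader, int docBase) throws IOException {
    delegate.setNextReader(reader, docBase);
  }

  @Override
  public boolean acceptsDocsOutOfOrder() {
    return delegate.acceptsDocsOutOfOrder();
  }
}

The caller would catch that RuntimeException around IndexSearcher.search() and treat it as "search was stopped".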
On Mon, May 23, 2011 at 8:09 AM, liat oren wrote:
Thank you very much.
So the best solution would be to implement the collector with a stop
function.
Do you happen to have an example for that?
Many thanks,
Liat
1. source string: 7
2. WhitespaceTokenizer + EGramTokenFilter
3. FastVectorHighlighter,
4. debug info: subInfos=(777((8,11))777((5,8))777((2,5)))/3.0(2,102),
srcIndex is not computed correctly on the second iteration of the outer for-loop
2011/5/23 Weiwei Wang
Hi WeiWei,
Thanks for the report.
Can you provide a self-contained unit test that triggers the bug?
Thanks,
Steve
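
A skeleton for such a self-contained test might look like the following (Lucene 3.x APIs; the field name, document text, and query are placeholders, and a plain WhitespaceAnalyzer will likely not reproduce the failure -- the n-gram filter from the original setup, which produces the overlapping term offsets, would have to be swapped in):

import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.vectorhighlight.FastVectorHighlighter;
import org.apache.lucene.search.vectorhighlight.FieldQuery;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class FvhOffsetsTest {
  public static void main(String[] args) throws Exception {
    RAMDirectory dir = new RAMDirectory();
    IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(
        Version.LUCENE_31, new WhitespaceAnalyzer(Version.LUCENE_31)));
    Document doc = new Document();
    // Term vectors with positions and offsets are required by FVH.
    doc.add(new Field("f", "a 777 777 777", Field.Store.YES,
        Field.Index.ANALYZED, Field.TermVector.WITH_POSITIONS_OFFSETS));
    w.addDocument(doc);
    w.close();

    IndexReader reader = IndexReader.open(dir);
    BooleanQuery query = new BooleanQuery();
    query.add(new TermQuery(new Term("f", "777")), Occur.SHOULD);

    FastVectorHighlighter fvh = new FastVectorHighlighter();
    FieldQuery fieldQuery = fvh.getFieldQuery(query);
    // Throws StringIndexOutOfBoundsException when the bug is triggered.
    System.out.println(fvh.getBestFragment(fieldQuery, reader, 0, "f", 100));
    reader.close();
  }
}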
> -Original Message-
> From: Weiwei Wang [mailto:ww.wang...@gmail.com]
> Sent: Monday, May 23, 2011 1:25 AM
> To: java-user@lucene.apache.org
> Subject: FastVectorHighlight
the following code has a bug: it throws a StringIndexOutOfBoundsException when
multiple matched terms need highlighting:

private String makeFragment( WeightedFragInfo fragInfo, String src, int s,
    String[] preTags, String[] postTags, Encoder encoder ){
  StringBuilder fragment = new StringBuilder();
  int srcIndex = 0;
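  // The remainder of the method, paraphrased from memory of the Lucene 3.x
  // BaseFragmentsBuilder source (verify against your version). srcIndex is
  // advanced to each highlighted term's end offset, so when the term offsets
  // arrive out of ascending order -- as in the debug output above, where they
  // run (8,11), (5,8), (2,5) -- the next substring call is handed a start
  // index greater than its end index and throws StringIndexOutOfBoundsException.
  for( SubInfo subInfo : fragInfo.subInfos ){
    for( Toffs to : subInfo.termsOffsets ){
      fragment
        .append( encoder.encodeText( src.substring( srcIndex, to.startOffset - s ) ) )
        .append( getPreTag( preTags, subInfo.seqnum ) )
        .append( encoder.encodeText( src.substring( to.startOffset - s, to.endOffset - s ) ) )
        .append( getPostTag( postTags, subInfo.seqnum ) );
      srcIndex = to.endOffset - s;
    }
  }
  fragment.append( encoder.encodeText( src.substring( srcIndex ) ) );
  return fragment.toString();
}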
On Sun, May 22, 2011 at 4:48 PM, Devon H. O'Dell wrote:
I have my own collector, but implemented this functionality by running
the search in a thread pool and terminating the FutureTask running the
job if it took longer than some configurable amount of time. That
seemed to do the trick for me. (In my case, the IndexReader is
explicitly opened readonly,
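
A sketch of that pattern, assuming a shared IndexSearcher named searcher and a two-second budget (both placeholders). Note that Future.cancel(true) only interrupts the worker thread; whether the in-flight search actually stops depends on the Directory and collector in use, which is why a stoppable collector (or Lucene's built-in TimeLimitingCollector) is the more direct route:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;

public class TimedSearch {
  public static TopDocs search(final IndexSearcher searcher, final Query query,
      ExecutorService pool) throws Exception {
    Future<TopDocs> future = pool.submit(new Callable<TopDocs>() {
      public TopDocs call() throws Exception {
        return searcher.search(query, 100); // top-100 hits; adjust as needed
      }
    });
    try {
      return future.get(2, TimeUnit.SECONDS); // configurable time budget
    } catch (TimeoutException e) {
      future.cancel(true); // interrupts the task; see caveat above
      return null;
    }
  }
}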
You're welcome!
Mike
http://blog.mikemccandless.com
On Sun, May 22, 2011 at 9:20 AM, zhoucheng2008 wrote:
Great, thanks Mike.
-Original Message-
From: Michael McCandless [mailto:luc...@mikemccandless.com]
Sent: Sunday, May 22, 2011 8:09 PM
To: java-user@lucene.apache.org
Subject: Re: How to create document objects in our case
Norms are how Lucene records what the a priori boost is for each
doc × field pair. This boost is the product of the per-field boost and the
per-doc boost (both of which your app would set when it creates the doc), as
well as the "length normalization" Lucene's default similarity applies
(shorter docs have higher boost).
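
To make the two app-set factors concrete, a small sketch (Lucene 3.x API; the field name and boost values are illustrative):

Document doc = new Document();
doc.setBoost(1.5f);   // per-doc boost, folded into every field's norm
Field title = new Field("title", "some title", Field.Store.YES, Field.Index.ANALYZED);
title.setBoost(2.0f); // per-field boost
doc.add(title);
// At index time Lucene multiplies per-doc boost x per-field boost x length
// norm and stores the product, quantized to a single byte, as the norm.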
Mike, thanks for the reply.
Can you please elaborate a little bit more on "If you don't need norms
(don't boost, lengths don't vary much or you
don't care to have field length impact scoring) you can omit norms"?
When do you expect the handling of nested documents to be available?
Cheng
You can implement your own collector and notify the collector to stop if you need to.
simon
On Sun, May 22, 2011 at 12:06 PM, liat oren wrote:
---
30 fields is fine, but if they are all indexed you should watch out
for memory usage; i.e., norms require 1 byte per doc per indexed field.
If you don't need norms (don't boost, lengths don't vary much or you
don't care to have field length impact scoring) you can omit norms.
The relationship b/w
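
At 1 byte per doc per indexed field the cost is easy to estimate: for example, 10 million docs with 30 indexed fields comes to roughly 300 MB of norms held in memory. Omitting norms at index time looks like this in the 3.x API (field name and content are illustrative):

Field body = new Field("body", text, Field.Store.NO, Field.Index.ANALYZED_NO_NORMS);
// or, equivalently:
Field body2 = new Field("body", text, Field.Store.NO, Field.Index.ANALYZED);
body2.setOmitNorms(true);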
Hi Everyone,
Is there a way to stop a multi search in the middle?
Thanks a lot,
Liat