That's right, just make your own analyzer, forked from
StandardAnalyzer, and change out the StopFilter. The analyzer is a
tiny class, and this (creating your own components in an analyzer) is
normal practice...
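For reference, a minimal sketch of such a forked analyzer (assuming the
Lucene 6.x package layout; the class name and the example stop words are
placeholders, not part of the original message):

    import java.util.Arrays;

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.CharArraySet;
    import org.apache.lucene.analysis.LowerCaseFilter;
    import org.apache.lucene.analysis.StopFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardFilter;
    import org.apache.lucene.analysis.standard.StandardTokenizer;

    // Copy of StandardAnalyzer's chain, with the StopFilter's word set swapped out.
    public final class MyStandardAnalyzer extends Analyzer {

      // Placeholder stop set; load whatever words you want dropped.
      private final CharArraySet stopWords =
          new CharArraySet(Arrays.asList("a", "an", "the"), true /* ignoreCase */);

      @Override
      protected TokenStreamComponents createComponents(String fieldName) {
        StandardTokenizer source = new StandardTokenizer();
        TokenStream stream = new StandardFilter(source);  // same chain as StandardAnalyzer
        stream = new LowerCaseFilter(stream);
        stream = new StopFilter(stream, stopWords);       // custom stop words go here
        return new TokenStreamComponents(source, stream);
      }
    }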
Mike McCandless
http://blog.mikemccandless.com
On Sat, Jan 28, 2017 at 6:09 AM, Greg Huber wrote:
...or use CustomAnalyzer, then you don't need to subclass. Just declare the
components.
Uwe
-
Uwe Schindler
Achterdiek 19, D-28357 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de
> -----Original Message-----
> From: Michael McCandless [mailto:luc...@mikemccandless.com]
> Sent: Sunday, J
Mike,
Many thanks, it works perfectly now.
Cheers Greg
On 29 January 2017 at 11:28, Michael McCandless wrote:
> That's right, just make your own analyzer, forked from
> StandardAnalyzer, and change out the StopFilter. The analyzer is a
> tiny class and this (creating your own components in an
Wonderful, thank you for bringing closure! Stop words and analyzing
suggesters are a tricky combo ...
Mike McCandless
http://blog.mikemccandless.com
On Sun, Jan 29, 2017 at 6:37 AM, Greg Huber wrote:
> Mike,
>
> Many thanks, it works perfectly now.
>
> Cheers Greg
>
> On 29 January 2017 at 11
Uwe,
>...or use CustomAnalyzer, then you don't need to
> subclass. Just declare the components.
If I need the StandardAnalyzer code (the class is marked final) and it extends
StopwordAnalyzerBase, how would I do this?
Cheers Greg
On 29 January 2017 at 11:32, Uwe Schindler wrote:
> ...or use CustomAnalyzer
Hi,
CustomAnalyzer is a very generic thing. It has a builder that you can use to
configure your analyzer. You can define which Tokenizer and which StopFilter to
use (and pass stop words as you like), and add stemming. No, it does not subclass
StopwordAnalyzerBase, but that is also not needed, because it has
Uwe,
Perfect, exactly what I was looking for. No duplication and no ongoing
maintenance (as it uses the defaults) :-)
return CustomAnalyzer.builder()
.withTokenizer(StandardTokenizerFactory.class)
.addTokenFilter(StandardFilterFactory.class)
.addTokenFilter(LowerCaseFilterFactory.class)
.addTokenFilt
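The snippet above is cut off in the digest; a complete version might look like
the sketch below (assuming Lucene 6.x, with a hypothetical my-stopwords.txt stop
word file; note that the builder methods throw IOException):

    import java.io.IOException;

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.core.LowerCaseFilterFactory;
    import org.apache.lucene.analysis.core.StopFilterFactory;
    import org.apache.lucene.analysis.custom.CustomAnalyzer;
    import org.apache.lucene.analysis.standard.StandardFilterFactory;
    import org.apache.lucene.analysis.standard.StandardTokenizerFactory;

    public final class Analyzers {

      // StandardAnalyzer-like chain, but with a custom stop word list.
      static Analyzer buildAnalyzer() throws IOException {
        return CustomAnalyzer.builder()
            .withTokenizer(StandardTokenizerFactory.class)
            .addTokenFilter(StandardFilterFactory.class)
            .addTokenFilter(LowerCaseFilterFactory.class)
            // "my-stopwords.txt" is a placeholder; it is resolved by the builder's
            // resource loader (classpath for builder(), or builder(Path) for a directory).
            .addTokenFilter(StopFilterFactory.class,
                "ignoreCase", "true",
                "words", "my-stopwords.txt")
            .build();
      }
    }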
Hi Chris, this "null_1" token is unexpected to me. Did you reindex with
Lucene 4.10.3 or just upgrade to the new file format using merging? Can you
also share your analysis chain?
On Sat, Jan 28, 2017 at 12:22, Chris Bamford wrote:
> Hello
>
> I am in the process of moving from indexing with
As far as I remember, this is a Luke-related display bug (the nulls in the
output on position increments greater than 1).
The phrase query question is unrelated to the display bug. This is documented
in the migration guide.
The problem is that old indexes can't play with new phrase query and query
Hello, sorry it's the .NET port, but hopefully someone can advise whether I'm
using the API in the correct way. This was originally taken from a Stack
Overflow question from another user, but I have a very similar data set and
search requirements.
I'm trying to index blog posts and comments in a way whi