inal search.
>
> I'd really appreciate some advice on what is going on with the ngram
> filter.
>
> Thanks
>
Otis Gospodnetic wrote:
>
> This actually sounds bugish to me, but you removed the text from your
> original email, so I don't know what context this is in.
Thanks for the pointer.
I've gone into this in some depth, using the AnalyzerUtils class from the
Lucene in Action book.
It seems that the NGramTokenFilter only processes part of the incoming
string: it stops tokenising the words part way through. That's why the
documents weren't found in the original search.
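
For anyone who wants to reproduce this, a minimal token-dumping sketch along
the lines of the book's AnalyzerUtils might look like the following (this
assumes the Lucene 3.1+ attribute-based TokenStream API; the field name "f"
is just a placeholder):

import java.io.IOException;
import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class TokenDumper {
    // Prints each token the analyzer emits for the given text, in order.
    public static void dumpTokens(Analyzer analyzer, String text)
            throws IOException {
        TokenStream stream = analyzer.tokenStream("f", new StringReader(text));
        CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
        stream.reset();
        while (stream.incrementToken()) {
            System.out.print("[" + term.toString() + "] ");
        }
        stream.end();
        stream.close();
        System.out.println();
    }
}

Running the same text through the analyzer with and without the ngram filter
makes it easy to see where tokenising stops.
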
Hi,
I'd appreciate it if someone could explain the results I'm getting.
I've written a simple custom analyzer that applies the NGramTokenFilter to
the token stream during indexing. It's never applied during searching. The
purpose of this is to match sub-words.
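
For concreteness, a minimal sketch of such an analyzer might look like this
(assuming the Lucene 3.x Analyzer API where tokenStream(String, Reader) is
overridden; the whitespace/lowercase chain and the 2-4 gram range are
illustrative guesses, not necessarily the original setup):

import java.io.Reader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;
import org.apache.lucene.analysis.ngram.NGramTokenFilter;

public class NGramAnalyzer extends Analyzer {
    // Split on whitespace, lowercase, then emit 2- to 4-character ngrams
    // for each token so that sub-word queries can match at search time.
    @Override
    public TokenStream tokenStream(String fieldName, Reader reader) {
        TokenStream stream = new WhitespaceTokenizer(reader);
        stream = new LowerCaseFilter(stream);
        return new NGramTokenFilter(stream, 2, 4);
    }
}

At index time this analyzer would be handed to the IndexWriter; at search
time a plain analyzer without the NGramTokenFilter would be used, matching
the setup described above.
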
Without the ngram filter, if I search