> Date: Thu, Jun 18, 2009 10:15:12 PM
> Subject: n-gram word support
>
> Hey,
>
> I was wondering if there is a way to read the index and generate n-grams of
> words for a document in lucene? I am quite new to it and am using pylucene.
The contrib/analyzers package has several n-gram based tokenizer and token
filter options.
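For word n-grams specifically, ShingleFilter (org.apache.lucene.analysis.shingle in contrib/analyzers) wraps any TokenStream and emits the word n-grams as extra tokens. Below is a minimal sketch in Java against the 2.4-era TokenStream API; the "body" field name and the sample string are placeholders, and in practice the text would come from a stored field you read back with IndexReader.document(docId).get("body"). From PyLucene you would drive the same classes through its wrappers, assuming your build includes the contrib analyzers.

import java.io.StringReader;

import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.analysis.shingle.ShingleFilter;

public class WordNGramDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder text; normally read back from a stored field of the document.
        String text = "the quick brown fox jumps over the lazy dog";

        // Tokenize the text, then wrap the stream in a ShingleFilter, which
        // emits word n-grams ("shingles") up to the given maximum size.
        TokenStream tokens = new WhitespaceAnalyzer()
                .tokenStream("body", new StringReader(text));
        TokenStream shingles = new ShingleFilter(tokens, 2); // unigrams + bigrams

        Token token;
        while ((token = shingles.next()) != null) {
            System.out.println(token.term());
        }
    }
}

By default ShingleFilter also passes the single words through, so the output mixes unigrams and bigrams; raise the second constructor argument if you want longer n-grams.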
On Jun 18, 2009, at 10:15 PM, Neha Gupta wrote:
Hey,
I was wondering if there is a way to read the index and generate n-grams of
words for a document in lucene? I am quite new to it and am using pylucene.
Yeah, look at the spellcheck component in Solr. They are doing something
similar.
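Solr's spellcheck component sits on top of the Lucene contrib/spellchecker, which reads the terms of a field and indexes character n-grams of each term so it can find near matches; that is a different granularity than word n-grams, but the mechanics are close. A rough sketch against the 2.4-era API, with placeholder paths and a hypothetical "body" field:

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.spell.LuceneDictionary;
import org.apache.lucene.search.spell.SpellChecker;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class SpellIndexDemo {
    public static void main(String[] args) throws Exception {
        // Existing index whose terms feed the spell checker (paths are placeholders).
        IndexReader reader = IndexReader.open(FSDirectory.getDirectory("/path/to/index"));

        // The spell checker keeps its own auxiliary index in which every term
        // from the "body" field is broken down into character n-grams.
        Directory spellDir = FSDirectory.getDirectory("/path/to/spellindex");
        SpellChecker spell = new SpellChecker(spellDir);
        spell.indexDictionary(new LuceneDictionary(reader, "body"));

        // Suggestions are ranked by n-gram overlap with the misspelled word.
        for (String s : spell.suggestSimilar("lucen", 5)) {
            System.out.println(s);
        }

        reader.close();
    }
}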
Sameer.
On Thu, Jun 18, 2009 at 7:15 PM, Neha Gupta wrote:
> Hey,
>
> I was wondering if there is a way to read the index and generate n-grams of
> words for a document in lucene? I am quite new to it and am using pylucene.
Hey,
I was wondering if there is a way to read the index and generate n-grams of
words for a document in lucene? I am quite new to it and am using pylucene.
Thanks,
Neha