normalization (filtering of tokens).
Uwe
-----
Uwe Schindler
Achterdiek 19, D-28357 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de
> -----Original Message-----
> From: Jacek Grzebyta [mailto:grzebyta@gmail.com]
> Sent: Friday, June 9, 2017 1:39 PM
> To: java-user@lucene.apache.org
> Subject: Re
Hi Ahmed,
That works! Still, I do not understand how that stuff works. I just know
that the analyser cuts indexed text into tokens, but I do not know how the
matching is done.
Could you recommend a good book to read? I prefer something with less maths
and more examples.
The only one I found is the free "An
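On the matching part: at query time Lucene runs the query text through an
analyzer as well, and a term matches when the query token is byte-for-byte
identical to a token stored in the index. Here is a minimal sketch of what
an analyzer produces (the field name and sample text are made up, and
StandardAnalyzer is just one possible choice):

import java.io.IOException;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class TokenDump {
  public static void main(String[] args) throws IOException {
    // StandardAnalyzer splits on word boundaries and lowercases,
    // so "Wi-Fi" becomes the two tokens "wi" and "fi".
    Analyzer analyzer = new StandardAnalyzer();
    try (TokenStream ts = analyzer.tokenStream("title", "Wi-Fi Setup GUIDE")) {
      CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
      ts.reset();                       // required before incrementToken()
      while (ts.incrementToken()) {
        System.out.println(term);      // prints: wi, fi, setup, guide
      }
      ts.end();
    }
  }
}

Because both sides see the same tokens, a search for "fi" matches this text.
With WhitespaceTokenizer instead, "Wi-Fi" would stay one token, and "fi"
alone would not match.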
Hi,
You can completely ban within-a-word search by simply using
WhitespaceTokenizer, for example. By the way, it is all about how you
tokenize/analyze your text.
Once you have decided, you can create two versions of a single field using
different analysers. This allows you to assign different weights
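A minimal sketch of that two-field setup (the field names are made up; it
assumes a recent Lucene, 8+, for ByteBuffersDirectory, with
PerFieldAnalyzerWrapper from the common analysis module):

import java.util.HashMap;
import java.util.Map;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.analysis.miscellaneous.PerFieldAnalyzerWrapper;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class TwoFieldIndexing {
  public static void main(String[] args) throws Exception {
    // "title" is split by StandardAnalyzer (within-a-word matches possible),
    // while "title_ws" keeps whitespace-separated chunks whole.
    Map<String, Analyzer> perField = new HashMap<>();
    perField.put("title_ws", new WhitespaceAnalyzer());
    Analyzer analyzer =
        new PerFieldAnalyzerWrapper(new StandardAnalyzer(), perField);

    Directory dir = new ByteBuffersDirectory();
    try (IndexWriter writer =
             new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
      Document doc = new Document();
      String title = "Wi-Fi Setup GUIDE";
      doc.add(new TextField("title", title, Field.Store.YES));   // wi, fi, setup, guide
      doc.add(new TextField("title_ws", title, Field.Store.NO)); // Wi-Fi, Setup, GUIDE
      writer.addDocument(doc);
    }
  }
}

At query time you can then search both fields together and boost one of them
(for example via BoostQuery, or the ^ syntax in the classic QueryParser), so
that whole-token matches score higher than within-a-word matches.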