Re: Penalize fact the searched term is within a world

2017-06-09 Thread Jacek Grzebyta
normalization (filtering of tokens).

RE: Penalize fact the searched term is within a world

2017-06-09 Thread Uwe Schindler
ng of tokens). Uwe
-
Uwe Schindler
Achterdiek 19, D-28357 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

Re: Penalize fact the searched term is within a world

2017-06-09 Thread Jacek Grzebyta
Hi Ahmet, That works! Still, I do not understand how that stuff works. I just know that the analyzer cuts the indexed text into tokens, but I do not know how the matching is done. Can you recommend a good book to read? I would prefer something with less maths and more examples. The only one I found is the free "An
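To illustrate the question asked above (how matching works once the analyzer has cut the text into tokens), here is a toy inverted-index sketch in plain Java. This is not Lucene's actual implementation, only a minimal model of the idea: the analyzer produces whole tokens, the index maps each token to the documents containing it, and a query term matches only if it equals an indexed token exactly. All class and method names here are hypothetical.

```java
import java.util.*;

public class ToyInvertedIndex {
    // token -> set of doc ids containing that token
    private final Map<String, Set<Integer>> postings = new HashMap<>();

    // A toy "analyzer": lowercase and split on whitespace, roughly what
    // Lucene's WhitespaceTokenizer + LowerCaseFilter chain would produce.
    static List<String> analyze(String text) {
        return Arrays.asList(text.toLowerCase(Locale.ROOT).split("\\s+"));
    }

    void add(int docId, String text) {
        for (String token : analyze(text)) {
            postings.computeIfAbsent(token, k -> new HashSet<>()).add(docId);
        }
    }

    // Matching is a lookup of the analyzed query term against whole tokens:
    // "ord" does NOT match "word", because only full tokens were indexed.
    Set<Integer> search(String term) {
        return postings.getOrDefault(term.toLowerCase(Locale.ROOT),
                                     Collections.emptySet());
    }

    public static void main(String[] args) {
        ToyInvertedIndex idx = new ToyInvertedIndex();
        idx.add(1, "The searched term is within a word");
        idx.add(2, "Another document about Lucene");
        System.out.println(idx.search("word"));   // [1]
        System.out.println(idx.search("ord"));    // [] -- no within-word match
        System.out.println(idx.search("lucene")); // [2]
    }
}
```

This is why the choice of tokenizer decides whether within-a-word hits are possible at all: the matching step only ever compares whole indexed tokens, so whatever the analyzer emits is what can be found.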

Re: Penalize fact the searched term is within a world

2017-06-08 Thread Ahmet Arslan
Hi, You can completely ban within-a-word search simply by using WhitespaceTokenizer, for example. By the way, it is all about how you tokenize/analyze your text. Once you have decided, you can create two versions of a single field using different analyzers. This allows you to assign different weights
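The two-field idea above can be sketched with a toy scoring function: one "view" of the field matches whole tokens only, the other also matches inside tokens, and the exact view carries a higher weight so a within-word-only hit is penalized. This is a conceptual sketch in plain Java, not real Lucene code (in actual Lucene you would index the field twice via PerFieldAnalyzerWrapper and weight the sub-queries, e.g. with BoostQuery); the weights and all names below are hypothetical.

```java
import java.util.*;

public class TwoFieldWeighting {
    static List<String> tokens(String text) {
        return Arrays.asList(text.toLowerCase(Locale.ROOT).split("\\s+"));
    }

    // "exact" view of the field: whole whitespace tokens only
    static boolean exactHit(String text, String term) {
        return tokens(text).contains(term.toLowerCase(Locale.ROOT));
    }

    // "partial" view: the term may occur inside a token,
    // as an ngram-analyzed version of the field would allow
    static boolean partialHit(String text, String term) {
        String t = term.toLowerCase(Locale.ROOT);
        for (String token : tokens(text)) {
            if (token.contains(t)) return true;
        }
        return false;
    }

    // Weighted combination: an exact hit contributes more, so a document
    // matching only within a word scores lower than a whole-word match.
    static double score(String text, String term, double wExact, double wPartial) {
        double s = 0.0;
        if (exactHit(text, term))   s += wExact;
        if (partialHit(text, term)) s += wPartial;
        return s;
    }

    public static void main(String[] args) {
        double wExact = 2.0, wPartial = 0.5; // hypothetical weights
        System.out.println(score("a word here", "word", wExact, wPartial));      // 2.5
        System.out.println(score("wordless text", "word", wExact, wPartial));    // 0.5
        System.out.println(score("nothing relevant", "word", wExact, wPartial)); // 0.0
    }
}
```

Setting wPartial to 0 reproduces the stricter option mentioned first: within-a-word matches are banned entirely rather than merely down-weighted.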