normalization (filtering of tokens).
>
> Uwe
>
> -
> Uwe Schindler
> Achterdiek 19, D-28357 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
>
> > -----Original Message-----
> > From: Jacek Grzebyta [mailto:grzebyta@gmail.com]
> > Se
> It's all about how you tokenize/analyze your text. Once you have decided,
> you can create two versions of a single field using different analyzers.
> This allows you to assign different weights to those fields at query time.
> Ahmet
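
The two-field approach described above can be sketched as follows. This is a minimal, hypothetical example (field names `label`/`label_exact` and the boost factor are illustrative, not from rdf4j), using Lucene's `PerFieldAnalyzerWrapper` to analyze the same value two ways and `BoostQuery` to weight the exact-match field higher at query time:

```java
import java.util.Map;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.core.KeywordAnalyzer;
import org.apache.lucene.analysis.miscellaneous.PerFieldAnalyzerWrapper;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.BoostQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.ByteBuffersDirectory;

public class TwoFieldExample {
  public static void main(String[] args) throws Exception {
    // One analyzer per field: "label" is tokenized by StandardAnalyzer,
    // "label_exact" is kept as a single token by KeywordAnalyzer.
    Analyzer analyzer = new PerFieldAnalyzerWrapper(
        new StandardAnalyzer(),
        Map.of("label_exact", new KeywordAnalyzer()));

    try (var dir = new ByteBuffersDirectory();
         var writer = new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
      // Index the same value into both versions of the field.
      String value = "gene ontology term";
      Document doc = new Document();
      doc.add(new TextField("label", value, Field.Store.YES));
      doc.add(new TextField("label_exact", value, Field.Store.NO));
      writer.addDocument(doc);
      writer.commit();

      try (var reader = DirectoryReader.open(dir)) {
        IndexSearcher searcher = new IndexSearcher(reader);
        // At query time, boost exact matches over plain tokenized matches.
        Query q = new BooleanQuery.Builder()
            .add(new BoostQuery(
                     new TermQuery(new Term("label_exact", value)), 5f),
                 BooleanClause.Occur.SHOULD)
            .add(new TermQuery(new Term("label", "gene")),
                 BooleanClause.Occur.SHOULD)
            .build();
        // Expected: 1 matching document.
        System.out.println(searcher.search(q, 10).totalHits.value);
      }
    }
  }
}
```

Documents matching only the loosely tokenized `label` field still score, but an exact `label_exact` hit dominates the ranking because of the boost.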
>
>
> On Thursday, June 8, 2017, 2:56:37 PM GMT+3, J
Hi,
Apologies for repeating a question from the IRC room, but I am not sure
whether it is alive.
I have no idea how Lucene works, but I need to modify a part of the rdf4j
project which depends on it.
I need to use Lucene to create a mapping file based on text searching and I
found there is a followi