Guys,

I have a question on normalisation.
I am using a prehistoric version of Lucene: 2.0.0.

Context: http://markmail.org/message/z5lcz2htjvqsscam

I have these two scenarios with indexes.
One:
2 indexes; one with documents that have a field "field_one" instantiated as
TOKENIZED and then setOmitNorms(true).
The second index has the same field "field_one", instantiated as TOKENIZED and
then setBoost(boost_factor).
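
A rough sketch of scenario one against the 2.0-era API, in case it makes the
setup clearer (the directory paths, analyzer, field value and the 2.5f boost
are placeholders of mine; only "field_one", TOKENIZED, setOmitNorms(true) and
setBoost(...) are the actual configuration):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;

public class BuildScenarioOne {
    public static void main(String[] args) throws Exception {
        // Index A: "field_one" TOKENIZED, norms omitted.
        IndexWriter writerA = new IndexWriter(
                FSDirectory.getDirectory("/tmp/index_a", true),
                new StandardAnalyzer(), true);
        Document docA = new Document();
        Field fieldA = new Field("field_one", "some text",
                Field.Store.YES, Field.Index.TOKENIZED);
        fieldA.setOmitNorms(true);   // no norm is written for this field
        docA.add(fieldA);
        writerA.addDocument(docA);
        writerA.close();

        // Index B: same field name, TOKENIZED, with a field boost instead.
        IndexWriter writerB = new IndexWriter(
                FSDirectory.getDirectory("/tmp/index_b", true),
                new StandardAnalyzer(), true);
        Document docB = new Document();
        Field fieldB = new Field("field_one", "some text",
                Field.Store.YES, Field.Index.TOKENIZED);
        fieldB.setBoost(2.5f);       // boost gets folded into the norm at index time
        docB.add(fieldB);
        writerB.addDocument(docB);
        writerB.close();
    }
}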

Two:
2 indexes; one with the field "field_one" instantiated as TOKENIZED and then
setOmitNorms(true).
The second index has the same field "field_one", but some documents have it
as in the first index and some have it instantiated as TOKENIZED and then
setBoost(boost_factor); a mix of the two types.
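
Scenario two only changes how the second index is populated: the same writer
receives a mix of the two field configurations, roughly like this (the
alternation and the boost value are again my own placeholders):

// writerB opened exactly as in the sketch above, but fed a mix of documents.
for (int i = 0; i < 10; i++) {
    Document doc = new Document();
    Field f = new Field("field_one", "some text",
            Field.Store.YES, Field.Index.TOKENIZED);
    if (i % 2 == 0) {
        f.setOmitNorms(true);    // like the first index: no norms
    } else {
        f.setBoost(2.5f);        // boosted, so a norm is written
    }
    doc.add(f);
    writerB.addDocument(doc);
}
writerB.close();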

If I load these two indexes into an IndexReader and query on the field
"field_one", are there any issues I should expect?

Any advice is much appreciated.

Cheerio
