The data is pretty varied. Some documents are very small (on the order of a few
KB), while others run to several MB. The index currently has 20 fields. Half the
fields use StandardAnalyzer, and half use a WhitespaceTokenizer coupled with a
LowerCaseFilter.
The benchmark reads 1000 documents ...
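(For reference, the whitespace + lowercase chain described above would typically
be wired up as a custom Analyzer along these lines; this is only a sketch against
the Lucene 4.x API that was current at the time, and the class name and version
constant are placeholders, not taken from the actual benchmark code:

    import java.io.Reader;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.Tokenizer;
    import org.apache.lucene.analysis.core.LowerCaseFilter;
    import org.apache.lucene.analysis.core.WhitespaceTokenizer;
    import org.apache.lucene.util.Version;

    // Sketch: whitespace tokenization followed by lowercasing, as described above.
    public final class WhitespaceLowercaseAnalyzer extends Analyzer {
        @Override
        protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
            Tokenizer source = new WhitespaceTokenizer(Version.LUCENE_43, reader); // split on whitespace only
            TokenStream result = new LowerCaseFilter(Version.LUCENE_43, source);   // lowercase each token
            return new TokenStreamComponents(source, result);
        }
    }

The version constant should of course match whatever release is actually in use.)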
hey,
can you share your benchmark and/or tell us a little more about what your
data looks like and how you analyze it? There might be analysis changes
that contribute to that.
simon
On Sun, Jul 14, 2013 at 7:56 PM, cischmidt77 wrote:
> I use Lucene/MemoryIndex for a large number of queries ...
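(For context, the usual MemoryIndex pattern for matching many queries against one
transient document looks roughly like the sketch below; the field name, analyzer
choice, and query string are illustrative only and not from the original mail:

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.memory.MemoryIndex;
    import org.apache.lucene.queryparser.classic.QueryParser;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.util.Version;

    public class MemoryIndexExample {
        public static void main(String[] args) throws Exception {
            Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_43);
            Query query = new QueryParser(Version.LUCENE_43, "body", analyzer)
                    .parse("example query");

            // One MemoryIndex per document: add the fields, run the queries, reset.
            MemoryIndex index = new MemoryIndex();
            index.addField("body", "text of the current document", analyzer);
            float score = index.search(query);    // > 0.0f means the document matched
            System.out.println("score = " + score);
            index.reset();                         // clear and reuse for the next document
        }
    }

With many queries per document, the per-document analysis cost in addField is often
what dominates, which is why the analyzer details above matter.)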