Right now, you can't really do anything about it. In the future, with the new FieldCache API that may go in, you could plug in a custom implementation that makes trade-offs, e.g. a sparse array of some kind. The docid is currently the index into a dense array, but with a custom impl you may be able to use a sparse array object. That's a ways off though.
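To make the trade-off concrete, here is a minimal sketch of the idea (the DocValues, DenseDocValues and SparseDocValues names below are hypothetical illustrations, not Lucene classes or the proposed FieldCache API): the current cache allocates one slot per document up to maxDoc(), while a sparse implementation would only pay for documents that actually carry the sort field, at the cost of a lookup per comparison.

import java.util.HashMap;
import java.util.Map;

/** Hypothetical per-document value source used by a sort comparator. */
interface DocValues {
    long get(int docId);          // value the comparator sorts by
    boolean exists(int docId);    // false for docs missing the field
}

/** Dense layout: one array slot per document, even for docs without the field. */
class DenseDocValues implements DocValues {
    private final long[] values;          // length == IndexReader.maxDoc()
    private final boolean[] hasValue;

    DenseDocValues(int maxDoc) {
        this.values = new long[maxDoc];
        this.hasValue = new boolean[maxDoc];
    }

    void put(int docId, long value) { values[docId] = value; hasValue[docId] = true; }
    public long get(int docId) { return values[docId]; }
    public boolean exists(int docId) { return hasValue[docId]; }
}

/** Sparse layout: memory proportional to the docs that actually have the field. */
class SparseDocValues implements DocValues {
    private final Map<Integer, Long> values = new HashMap<Integer, Long>();
    private final long missingValue;

    SparseDocValues(long missingValue) { this.missingValue = missingValue; }

    void put(int docId, long value) { values.put(docId, value); }
    public long get(int docId) {
        Long v = values.get(docId);
        return v != null ? v : missingValue;  // docs without the field sort with a sentinel
    }
    public boolean exists(int docId) { return values.containsKey(docId); }
}

A real sparse implementation would of course use a packed structure rather than a boxed HashMap, but the memory characteristics are what matter: the dense form costs maxDoc() slots per sort field regardless of how many docs have a value, which is exactly the behaviour Ganesh is seeing.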
- Mark

On Mon, Jul 20, 2009 at 8:38 AM, Ganesh <emailg...@yahoo.co.in> wrote:
> Any ideas on this?
>
> Regards
> Ganesh
>
> ----- Original Message -----
> From: "Ganesh" <emailg...@yahoo.co.in>
> To: <java-user@lucene.apache.org>
> Sent: Friday, July 17, 2009 2:42 PM
> Subject: Sorting field containing NULL values consumes field cache memory
>
> I am sorting on DateTime with minute resolution. I have 90 million records,
> and sorting consumes nearly 500 MB. 30% of the records are not part of the
> primary result set and do not have the sort field, but field cache memory
> (4 * IndexReader.maxDoc() * (# of different fields actually used to sort))
> is consumed even though those 30% of records are not part of the sort.
>
> I want to avoid loading those 30% of records into the field cache. How
> could I achieve this? Any ideas are greatly appreciated.
>
> Regards
> Ganesh

---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org

--
- Mark
http://www.lucidimagination.com
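For context, the cache-size formula quoted above works out roughly as follows. This is a back-of-the-envelope sketch only; the class name FieldCacheEstimate and the figures are illustrative, and the gap to the observed ~500 MB will depend on how the minute-resolution date field is actually represented in the cache.

public class FieldCacheEstimate {
    public static void main(String[] args) {
        long maxDoc = 90000000L;   // ~90 million documents
        long bytesPerValue = 4;    // one int-sized slot per document
        int sortFields = 1;        // fields actually used to sort

        // 4 * IndexReader.maxDoc() * (# of sort fields)
        long bytes = bytesPerValue * maxDoc * sortFields;
        System.out.printf("~%.0f MB for the dense array alone%n",
                bytes / (1024.0 * 1024.0));
        // Prints roughly 343 MB. An 8-byte-per-doc representation would be
        // about double that, so a figure near 500 MB is plausible once the
        // field's actual value type and per-field overhead are included.
    }
}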