Hi

24GB of RAM for a 100GB index is likely to be plenty.  You don't have a
huge amount of control over what Lucene loads into memory, but take a
look at termInfosIndexDivisor in IndexReader.  And I believe that
omitting field norms (Field.setOmitNorms) may help too, since norms
cost a byte per document per indexed field.  Googling for
"lucene memory usage" or similar will find you more info.

Are you using Lucene sorting?  Sorting on a field can use a lot of
memory, because the field's values are loaded into the FieldCache for
every document - see recent threads on this list.
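For illustration, an untested sketch with made-up index path and field
names - the first sorted search on "date" fills the FieldCache with one
value per document for that field, and the cache lives as long as the
reader does:

import java.io.File;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;

public class SortMemorySketch {
    public static void main(String[] args) throws Exception {
        IndexReader reader = IndexReader.open(
                FSDirectory.open(new File("/path/to/index")), true);
        IndexSearcher searcher = new IndexSearcher(reader);

        // Sorting by "date" caches one long per document; on an index with
        // hundreds of millions of documents that alone can run to gigabytes.
        Sort byDate = new Sort(new SortField("date", SortField.LONG));
        TopDocs hits = searcher.search(
                new TermQuery(new Term("body", "lucene")), null, 10, byDate);
        System.out.println(hits.totalHits + " hits");

        searcher.close();
        reader.close();
    }
}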


How many queries per second?  Loads - but it depends entirely on your
queries, your index and your hardware, so try it out yourself with your
own searches on your own index.  And read
http://wiki.apache.org/lucene-java/ImproveSearchingSpeed
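A very crude way to get a first number - untested, single-threaded, with
placeholder queries; feed it real queries from your logs and add threads
to see where it tops out:

import java.io.File;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.FSDirectory;

public class ThroughputSketch {
    public static void main(String[] args) throws Exception {
        IndexReader reader = IndexReader.open(
                FSDirectory.open(new File("/path/to/index")), true);
        IndexSearcher searcher = new IndexSearcher(reader);

        // Placeholder queries - use real ones pulled from your query logs.
        Query[] queries = {
                new TermQuery(new Term("body", "lucene")),
                new TermQuery(new Term("body", "memory")),
        };

        // Warm the OS file cache and the JIT before timing anything.
        for (int i = 0; i < 100; i++) {
            searcher.search(queries[i % queries.length], 10);
        }

        // Time a batch and report queries per second.
        int n = 1000;
        long start = System.currentTimeMillis();
        for (int i = 0; i < n; i++) {
            searcher.search(queries[i % queries.length], 10);
        }
        long elapsedMs = System.currentTimeMillis() - start;
        System.out.println("~" + (n * 1000.0 / elapsedMs)
                + " queries/sec, single-threaded");

        searcher.close();
        reader.close();
    }
}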

Good luck.


--
Ian.


On Wed, Dec 23, 2009 at 5:27 AM, Shakti Purohit
<shakti_puro...@persistent.co.in> wrote:
> We need to find out what part or percentage of the Lucene index has to be
> in memory for acceptable search response times.  Our index is around 100 GB
> while the available memory is 24 GB.  Since we cannot load the whole index
> into memory, we would like to know the minimum part of the index that must
> be in memory so that response time is not affected.
> Does the index consist of particular files or a hierarchy such that loading
> only that file/information into memory, rather than the whole index, would
> be enough for fast response times?
>
> The other question I have is: how many queries per second can Lucene
> support?  We are interested in finding out the throughput of the system.
>
> Thanks,
> Shakti
>

---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org
