Hi Guys,
We have an index/query server that contains several thousand fairly hefty 
indexes. Each searcher is shared between many 'user threads', and once opened we 
keep the searcher in a cache that is refreshed according to how often it is 
used. Due to memory limitations on the server, we need some kind of LRU 
mechanism to drop unused searchers and make way for newer ones.
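For what it's worth, one minimal way to get LRU eviction on the JVM is a LinkedHashMap in access order. The sketch below is generic rather than Lucene-specific; the class name and the onEvict hook are hypothetical, and a real cache would close the evicted searcher there (once no thread still holds it):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache sketch: LinkedHashMap with accessOrder = true evicts the
// least-recently-used entry once capacity is exceeded. Not Lucene-specific;
// onEvict is a hypothetical hook where a real cache would release the searcher.
class LruSearcherCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruSearcherCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true gives LRU ordering
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        boolean evict = size() > capacity;
        if (evict) {
            onEvict(eldest.getKey(), eldest.getValue());
        }
        return evict;
    }

    // Hook for releasing resources on eviction (e.g. closing a searcher).
    protected void onEvict(K key, V value) { }
}
```

The catch in practice is that you can't close a searcher the moment it is evicted if other threads are still querying through it, so eviction usually has to be paired with some form of reference counting.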
We are seeing load spikes when we get hit by queries that try to open several 
non-cached searchers at (or very nearly at) the same time. This appears to be 
the disks struggling to open all the appropriate files during that period, and 
it takes a little while for the server to return to normal operating levels 
afterwards.
Given that upgrading hardware/memory is not currently an option, we need a way 
to smooth out these spikes, even at the cost of slowing overall query 
performance.

It strikes me that if we could cache all of our searchers on the machine (i.e. 
have all of our indexes 'open for business'), possibly altering kernel 
parameters to cater for the large number of file handles, while caching few 
query results, this might solve the problem without pushing memory usage too 
high. However, the larger number of searchers held in the heap will steal 
space from the OS file cache that Lucene relies on, so is there a recommended 
mechanism for doing this?
So: is there a way to minimize the memory footprint of each searcher, so that 
we can keep more of them cached at the cost of storing less data per searcher?
Any insight would be most appreciated.
Thanks,
Clive
