Can you describe how you change the index? E.g., are you committing
very frequently?

It's odd to get 1000+ files in the index in 10 minutes unless you are
committing very frequently: each commit writes a new segments_N file
plus the files for any newly flushed segments, and with an 80-minute
expiration none of those files can be deleted yet, so they pile up fast.

If so, you may need a smarter deletion policy that stays in touch with
the readers, so it knows precisely which commit points they are still
using and keeps only those (rather than every commit point less than
80 minutes old).
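
Roughly something like this -- an untested sketch, not drop-in code:
the exact IndexDeletionPolicy signatures differ a bit between Lucene
releases (raw List in older versions, generics later), and the
addInUse/removeInUse plumbing for how your searcher nodes report
which commit generation they hold is something you'd build yourself:

import java.io.IOException;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

import org.apache.lucene.index.IndexCommit;
import org.apache.lucene.index.IndexDeletionPolicy;

// Keeps the newest commit, plus any commit a searcher node has told
// us it is still reading.  How the searchers report their commit
// generation (the addInUse/removeInUse calls below) is up to you --
// a small RPC, a shared table, whatever fits your setup.
public class ReaderTrackingDeletionPolicy implements IndexDeletionPolicy {

  // Generations of commit points currently held open by remote readers.
  private final Set<Long> inUse =
      Collections.synchronizedSet(new HashSet<Long>());

  public void addInUse(long gen)    { inUse.add(gen); }
  public void removeInUse(long gen) { inUse.remove(gen); }

  public void onInit(List<? extends IndexCommit> commits) throws IOException {
    onCommit(commits);
  }

  public void onCommit(List<? extends IndexCommit> commits) throws IOException {
    // Commits are sorted oldest first; never delete the newest one.
    IndexCommit newest = commits.get(commits.size() - 1);
    for (IndexCommit commit : commits) {
      if (commit != newest && !inUse.contains(commit.getGeneration())) {
        commit.delete();
      }
    }
  }
}

You'd pass an instance of this to the IndexWriter constructor (or the
config object, in later releases) that accepts an IndexDeletionPolicy,
have each searcher register the generation of the commit its
IndexReader is currently on, and release it once it has reopened on a
newer commit.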

Mike

On Tue, Apr 21, 2009 at 3:01 PM, Harini Raghavan
<harini.ragha...@insideview.com> wrote:
> Hi Everyone,
>
> We are planning to distribute searches on the index and have a single 
> indexing node. We want to mount the index on NFS so that it can be shared by 
> the indexer and searcher nodes. To optimize several of our search workflows, 
> we are caching the IndexSearcher and refreshing it every hour. Also, to 
> improve the performance of some complex workflows, we are caching Lucene 
> document ids. We are trying to set the deletion policy to 
> ExpirationTimeDeletionPolicy to make use of Lucene's NFS support. But 
> since our IndexSearcher and doc id cache are refreshed only every 60 mins, we 
> set the expiration time to 80 mins. But this resulted in too many files being 
> created in the index directory. We got more than 1000 files in the index in 
> less than 10 mins.
>
> This makes me wonder if the expiration time deletion policy fix will work in 
> our scenario. Should we be going for some other solution to use an NFS store?
>
> Thanks,
> Harini
>

---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org
