Thanks for the ideas.
We are testing the suggested methods and changes to see whether they
work with our current setup, and are checking whether the disks are the
bottleneck in this case, but feel free to drop more hints. :)
At the moment we are copying the index at an off-peak hour, but we
would also ...
From: Chun Wei Ho <[EMAIL PROTECTED]>
To: java-user@lucene.apache.org
Sent: Tuesday, April 3, 2007 10:40:16 AM
Subject: Index updates between machines
We are running a search service on the internet using two machines. We
have a crawler machine which crawls the web and merges new documents
found into the Lucene index. We have a searcher machine which allows
users to perform searches on the Lucene index.
Periodically, we would copy the newest version ...
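
A minimal sketch of the searcher side of that periodic copy, swapping queries over to the freshly copied index. The paths are placeholders and it assumes the Lucene 2.x-era IndexSearcher(String) constructor; this is not CW's actual code:

import org.apache.lucene.search.IndexSearcher;

public class SearcherSwap {
    private volatile IndexSearcher current;

    // Call once the off-peak copy of the new index has finished.
    public synchronized void swapTo(String newIndexDir) throws Exception {
        IndexSearcher fresh = new IndexSearcher(newIndexDir); // open the new copy
        IndexSearcher old = current;
        current = fresh;   // new queries now go to the fresh index
        if (old != null) {
            old.close();   // in production, wait for in-flight searches first
        }
    }

    public IndexSearcher get() {
        return current;
    }
}

The swap itself is just a reference change, so searches keep hitting the old copy until the new one is fully in place.
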
Hi CW,
You might find this email from Doug Cutting useful, not NFS but using rsync
and hard links ... besides, NFS without failover introduces a single point of
failure.
http://www.mail-archive.com/lucene-user@jakarta.apache.org/msg12709.html
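
Roughly, the idea is to take a hard-link snapshot of the index on the crawler box and rsync that snapshot over to the searcher box. Something like the sketch below, where the paths, host name and exact commands are made up here; see Doug's mail for the real scripts:

import java.io.IOException;

public class IndexSnapshot {
    private static void run(String... cmd) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IOException("command failed: " + String.join(" ", cmd));
        }
    }

    public static void main(String[] args) throws Exception {
        String index = "/data/index";                      // live index on the crawler box
        String snapshot = "/data/snapshots/" + System.currentTimeMillis();

        // Hard-link snapshot: instant, and unchanged segment files take no extra space.
        // Take it only after the IndexWriter has committed/closed.
        run("cp", "-lr", index, snapshot);

        // rsync only transfers segment files the searcher box doesn't already have.
        run("rsync", "-a", "--delete", snapshot + "/", "searcher:/data/index.new/");
    }
}

Because unchanged segment files are shared via hard links and rsync skips files that are already identical, the off-peak transfer stays small even for a large index.
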
Regards,
Dan
On 4/3/07, Chun Wei Ho <[EMAIL PROTECTED]> wrote: