"larry hughes" <[EMAIL PROTECTED]> wrote:

> I'm pondering long-term maintenance issues with Lucene indexes and
> would like to hear suggestions or recommendations for backing them
> up.  My goal is to have a weekly, or even daily, snapshot of the
> current index to make sure it is recoverable if the index gets
> corrupted.  Reindexing on the fly is not an option since my database
> contains millions of records.  Also, the index is growing so
> fast--already at the 56GB mark--that I'm not sure even a snapshot
> copy can be made quickly enough.  Maybe clustering is better?

One effective way to back up the index is to copy only the new files.
Since Lucene is write-once (as of 2.1), you only need to copy any
files whose names have appeared since your last backup.  You can also
remove now-deleted file names from the backup if you are only
interested in the most recent snapshot.
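
Concretely, such a backup pass is just a diff of the index
directory's listing against the backup directory's listing.  Here's a
minimal sketch (class and method names are hypothetical, plain
java.io, and it assumes both directories exist and every index file
is write-once as described above):

  import java.io.*;
  import java.util.*;

  // Incremental backup sketch: because index files are write-once, a
  // name already present in the backup never changes on disk, so only
  // newly appeared names need copying and vanished names need deleting.
  public class IncrementalBackup {

    public static void backup(File indexDir, File backupDir) throws IOException {
      Set<String> current = new HashSet<String>(Arrays.asList(indexDir.list()));
      Set<String> backedUp = new HashSet<String>(Arrays.asList(backupDir.list()));

      // Copy files that have appeared since the last backup.
      for (String name : current)
        if (!backedUp.contains(name))
          copy(new File(indexDir, name), new File(backupDir, name));

      // Remove files Lucene has since deleted, keeping only the
      // latest snapshot in the backup.
      for (String name : backedUp)
        if (!current.contains(name))
          new File(backupDir, name).delete();
    }

    private static void copy(File src, File dst) throws IOException {
      InputStream in = new FileInputStream(src);
      OutputStream out = new FileOutputStream(dst);
      try {
        byte[] buf = new byte[64*1024];
        int n;
        while ((n = in.read(buf)) != -1)
          out.write(buf, 0, n);
      } finally {
        in.close();
        out.close();
      }
    }
  }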

Normally you must pause indexing while this backup runs (since files
are appearing and/or being deleted underneath it), but with the trunk
version of Lucene it's possible to write a simple index deletion
policy that lets you run a backup slowly in the background without
pausing indexing.

Basically, this deletion policy would "keep alive" the one commit
point that was current when you started your backup.  Even as the
index changes, no segment file referenced by that commit point would
be deleted; your backup would then copy the files that commit point
references.  Once the backup completes, you would allow the commit
point to be deleted.

This would allow you to do "live" backups (backing up while indexing
is still happening).
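
A rough sketch of such a policy is below.  This is an illustration
only, not tested code: BackupDeletionPolicy, snapshot() and release()
are names I'm making up, and it assumes the trunk
IndexDeletionPolicy/IndexCommitPoint API, where onInit/onCommit
receive the list of commit points ordered oldest to newest:

  import java.util.List;
  import org.apache.lucene.index.IndexCommitPoint;
  import org.apache.lucene.index.IndexDeletionPolicy;

  // Sketch of a "keep the backup's commit alive" policy.  By default
  // it deletes all but the newest commit point (Lucene's normal
  // behavior), but while a backup is running it also refuses to
  // delete the commit that was current when the backup started.
  public class BackupDeletionPolicy implements IndexDeletionPolicy {

    private IndexCommitPoint lastCommit;  // newest commit seen so far
    private IndexCommitPoint snapshot;    // commit pinned by a running backup

    public synchronized void onInit(List commits) {
      onCommit(commits);
    }

    public synchronized void onCommit(List commits) {
      lastCommit = (IndexCommitPoint) commits.get(commits.size() - 1);
      for (int i = 0; i < commits.size() - 1; i++) {
        IndexCommitPoint commit = (IndexCommitPoint) commits.get(i);
        if (commit != snapshot)
          commit.delete();  // not the newest and not pinned: safe to remove
      }
    }

    // Call before starting a backup; copy the files referenced by the
    // returned commit, then call release() once the backup completes.
    public synchronized IndexCommitPoint snapshot() {
      snapshot = lastCommit;
      return snapshot;
    }

    public synchronized void release() {
      snapshot = null;
    }
  }

You'd pass an instance to one of the IndexWriter constructors that
accepts a deletion policy.  To learn exactly which files the pinned
commit references, you'd read its segments file; later Lucene
releases expose this directly (IndexCommit.getFileNames()) and also
ship a SnapshotDeletionPolicy implementing essentially this idea.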

Mike
-- 
  Michael McCandless
  [EMAIL PROTECTED]

