> -----Original Message-----
> From: David Bolen [mailto:[EMAIL PROTECTED]]
> Sent: Friday, 23 February 2001 20:33
> To: Nemholt, Jesper Frank
> Cc: [EMAIL PROTECTED]
> Subject: RE: Backing up *alot* of files
>
>
> Nemholt, Jesper Frank [[EMAIL PROTECTED]] writes:
>
> > Now the big question: How long will the next run take (most likely,
> > only a few files have changed)?
>
> You'll need the same basic startup time (and memory) to identify the
> file list, but at that point it should be quite fast at skipping to
> only the files that need to be transferred (providing you let it
> identify such files by size and timestamp - the default operation).
That was also what I was hoping, but what if I add -c for checksums? I
suppose it then needs to read and checksum every file on both source and
destination, doesn't it? (As far as I can see, this will take at least
the 10 hours, maybe more.)
I don't think checksumming is a necessity here, but when dealing with
files that include production Oracle database files, it _is_ nice to
play safe...
We plan to let the DBAs fire up the databases on the backup and check
everything. If they say OK, the most important files are OK.
> However, I'm not sure I follow what you are currently running - are
> you using rsync to sort of "bootstrap" your backup repository? If
> that's the case, then it can be more efficient to just transfer the
> files via a standard copy mechanism (you don't have any of the
> overhead of rsync at all) or use rsync with the -W (whole file, no
> incremental computations) option that very first time.
Yes, that would probably have saved the first hour, but I did the first
run with rsync and the normal options anyway, just to see if anything
went wrong when using rsync on 2 million files.
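If I follow you correctly, the idea for the very first copy would be
something like this (again placeholder paths, sketched from how I read
your suggestion), switching back to the normal options afterwards:

    # Initial bootstrap: -W sends changed/missing files whole, skipping
    # the rolling-checksum delta computation that buys nothing when the
    # destination is still empty
    rsync -aW /u01/oradata/ backuphost:/backup/oradata/

    # Subsequent runs: drop -W so unchanged files are skipped by
    # size/timestamp and changed files transfer only their differences
    rsync -a /u01/oradata/ backuphost:/backup/oradata/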
--
Un saludo / Venlig hilsen / Regards
Jesper Frank Nemholt
Unix System Manager
Compaq Computer Corporation
Phone : +34 699 419 171
E-Mail: [EMAIL PROTECTED]