On Mon, Jan 07, 2002 at 04:11:57PM +0800, Patrick Hsieh wrote:
> > > But when I ssh from debianclient to backupserver, it gives me a password
> > > prompt, so I enter the password, then rsync begins.
> >
> > and ?
> >
> Thanks for your patience.
> My question is, since I expect automated rsync b
> On Mon, Jan 07, 2002 at 03:03:12PM +0800, Patrick Hsieh wrote:
> > > - obviously this doesn't preclude a bad guy checking out
> > > backup-server:backups/otherhostname (use ssh keys, and invoke cmd="cd
> > > backups/hostname; rsync with whatever daemon options" will limit that)
> > Now
On Mon, Jan 07, 2002 at 03:03:12PM +0800, Patrick Hsieh wrote:
> > - obviously this doesn't preclude a bad guy checking out
> > backup-server:backups/otherhostname (use ssh keys, and invoke cmd="cd
> > backups/hostname; rsync with whatever daemon options" will limit that)
> Now I know how
> On Tue, Jan 01, 2002 at 08:39:39AM -0500, Keith Elder wrote:
> > This brings up a question. How do you rsync something but keep the
> > ownership and permissions the same. I am pulling data off site nightly
> > and that works, but the permissions are all screwed up.
>
> rsync -avxrP --delete $FILESYSTEMS
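Note that -a already implies -r plus the permission/owner/group/time-preserving options (-rlptgoD), so the extra -r is redundant; the key point is that ownership can only be restored when the receiving rsync runs as root. As a sketch of the nightly pull (paths and hostnames are hypothetical, not from this thread):

```
# /etc/cron.d/mirror-home (illustrative): -a keeps perms/owners/times,
# -x stays on one filesystem, -P resumes partial transfers,
# --delete removes files that vanished from the source.
30 2 * * *  root  rsync -avxP --delete -e ssh /home backup@backupserver:backups/
```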
> > 3) Add this to authorized_keys for the above account, specifying the
> > command that logins with this key are allowed to run. See command="" in
> > sshd(1).
>
> I can't find the document about this section, can you show me
> some reference or examples? Many thanks.
man sshd, down the bottom.
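The command= option is described in the sshd man page (the AUTHORIZED_KEYS FILE FORMAT section in current OpenSSH). Following the suggestion earlier in this thread, an entry in ~/.ssh/authorized_keys might look like this sketch — the key is truncated and "whatever daemon options" is the thread's own placeholder:

```
command="cd backups/hostname && rsync <whatever daemon options>",no-port-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAAB3Nza... backup@client
```

Whoever authenticates with that key can then only run the forced command, which limits what a compromised client key can reach.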
> 3) Add this to authorized_keys for the above account, specifying the
> command that logins with this key are allowed to run. See command="" in
> sshd(1).
I can't find the document about this section, can you show me
some reference or examples? Many thanks.
--
Patrick Hsieh <[EMAIL PROTECTED]>
[cc: trimmed to something a little more sane]
On Wed, Jan 02, 2002 at 04:21:33PM -0500, [EMAIL PROTECTED] wrote:
> We're pulling **from** a read-only rsyncd. It has to run as root because we
> require the right archive, permissions, etc I'm confused; is that much
> different from running an
On Tue, Jan 01, 2002 at 02:28:28PM +0800, Jason Lim wrote:
> Hi all,
>
> What do you think would be the best way to duplicate a HD to another
> (similar sized) HD?
>
I've been using tar on my system. Works great; no downtime, and all
permissions are maintained.
--
Nick Jennings
On Wed, Jan 02, 2002 at 10:17:38AM -0800, Ted Deppner wrote:
> > The [modules] in rsyncd.conf provide a nice way to package what you want to
> > back up. You can also specify what ip addresses connect to rsyncd. So in
> > theory only the backup machine can connect to the rsyncd daemons; we've se
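A minimal rsyncd.conf along those lines might look like the following sketch (the address, path and module name are examples only):

```
# /etc/rsyncd.conf (illustrative)
[home]
    path = /home
    read only = yes
    hosts allow = 192.0.2.10    # only the backup machine
    hosts deny = *
    uid = root                  # needed to read everything and keep owners
    gid = root
```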
On Wed, Jan 02, 2002 at 09:19:11AM -0500, [EMAIL PROTECTED] wrote:
> Automation with keys stored on machines is better than doing it manually
> and forgetting to back up. :-)
Agreed. Like exercise, the kind you do is better than the kind you
don't.
> It **does** provide a path by which someone
ssh-agent does help here. Have the cron job which is doing the backup
look to see if there's an ssh agent running as its user (presumably
'backup', maybe root) and if not send mail to somebody's pager,
complaining about the missing agent. If the agent is running, the
cron job can reconnect to it
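A minimal sketch of that pre-flight check (the mail address and rsync line are placeholders):

```shell
#!/bin/sh
# If no usable ssh-agent socket is present, page somebody and skip the
# run rather than hang on a passphrase prompt under cron.
agent_ok() {
    [ -n "$SSH_AUTH_SOCK" ] && [ -S "$SSH_AUTH_SOCK" ]
}

if agent_ok; then
    echo "agent present: running backup"
    # rsync -az -e ssh /home backup@backupserver:backups/
else
    echo "no ssh-agent: paging the admin, skipping backup"
    # mail -s "backup skipped: no agent" pager@example.com < /dev/null
fi
```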
On Wed, Jan 02, 2002 at 03:35:43PM +0800, Patrick Hsieh wrote:
> OK. My problem is, if I use rsync+ssh with blank passphrase among
> servers to automate rsync+ssh backup procedure without password prompt,
> then the cracker will not need to send any password as well as
> passphrase when ssh login o
Hello All
I am not sure that I understand what the original poster wishes to
achieve, nor have I followed the lengthy discussions that ensued.
But, a thread with the above subject line would not be complete
without a mention of "mirrordir".
Someone wrote:
> > Sigh... and I was hoping for a si
> OK. My problem is, if I use rsync+ssh with blank passphrase among servers
> to automate rsync+ssh backup procedure without password prompt, then the
> cracker will not need to send any password as well as passphrase when ssh
> login onto another server, right?
No, password and rsa/dsa authenti
Hello Ted,
Your mail is very informative to me.
I wonder how to define cmd to run automatically in authorized_hosts?
I thought there's nothing but pub keys in authorized_hosts file.
And, do I need ssh-agent in this case? Do I need to leave passphrase
blank?
Thank you for your patience and kindne
On Wed, Jan 02, 2002 at 03:15:20PM +0800, Patrick Hsieh wrote:
> I've read some doc. using ssh-keygen to generate key pairs, appending
> the public keys to ~/.ssh/authorized_hosts on another host to prevent
> ssh authentication prompt. Is it very risky? Chances are a cracker could
> compromise one
OK. My problem is, if I use rsync+ssh with blank passphrase among
servers to automate rsync+ssh backup procedure without password prompt,
then the cracker will not need to send any password as well as
passphrase when ssh login onto another server, right?
Is there a good way to automate rsync+ssh p
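The usual answer is a dedicated key pair; a sketch (hostnames are placeholders, and the empty passphrase is exactly the trade-off being debated here):

```shell
#!/bin/sh
# Generate a dedicated backup key with an empty passphrase (-N "").
set -e
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$keydir/backup_key"

# On a real pair of hosts, the public half would then be appended to
# ~backup/.ssh/authorized_keys on the server, e.g.:
#   ssh-copy-id -i "$keydir/backup_key.pub" backup@backupserver
ls "$keydir"
```

Combining this with a command= restriction in authorized_keys limits what the passphrase-less key can do if the client is compromised.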
> I am sorry I could be kind of off-topic. But I want to know how to
> cross-site rsync without authentication, say ssh auth.,?
That's the best way.
> I've read some doc. using ssh-keygen to generate key pairs, appending the
> public keys to ~/.ssh/authorized_hosts on another host to prevent ss
Hello!
It was already sort of pointed out by other people that your
situation can probably be handled more easily by dividing it into two tasks:
- fast recovery from data damage
- prevention of changes made by hackers/viruses
each of which is better handled by an individual approach.
While the former h
> Sigh... and I was hoping for a simple solution like cp /mnt/disk1/*
> /mnt/disk2/ :-/
This is the point at which we have one of those "Brady Bunch Moments", when
everyone stands around chuckling at what they've learned, and the credits
roll.
- Jeff
--
"And that's what it sounds like if
On Wed, 2 Jan 2002 00:55, Jason Lim wrote:
> Not really... i think of it as helping to cure the disease and helping to
> clean up the problem, not eliminating both because it is impossible to
> cure the disease completely. Unfortunately if you work with a medium to
> large number of various equipme
On Wed, 2 Jan 2002 02:58, Jason Lim wrote:
> You are thinking resource-intensive work, which would require more than a
> basic or low level sysadmin to do. I would not trust a low level sysadmin
> to start performing restoration work on a system. At least if we catch it
> within 12 hours or 24 hour
> On Wed, 2 Jan 2002 00:44, Jason Lim wrote:
> > > > The idea being that if there is a virus, a cracker, or hardware
> > > > malfunction
> > >
> > > And if you discover this within 12 hours... Most times you won't.
> >
> > We've got file integrity checkers running on all the servers, and they run
> > You might say "tape backup"... but keep in mind that it doesn't offer a
> > "plug n play" solution if a server goes down. With the above method, a
> > dead server could be brought to life in a minute or so (literally)
> > rather
> > than half an hour... an hour... or more.
>
> It occurs to me
On Wed, 2 Jan 2002 00:44, Jason Lim wrote:
> > > The idea being that if there is a virus, a cracker, or hardware
> > > malfunction
> >
> > And if you discover this within 12 hours... Most times you won't.
>
> We've got file integrity checkers running on all the servers, and they run
> very often (
On Tuesday, January 1, 2002, at 05:55 PM, Jason Lim wrote:
> You might say "tape backup"... but keep in mind that it doesn't offer a
> "plug n play" solution if a server goes down. With the above method, a
> dead server could be brought to life in a minute or so (literally) rather
> than half an hour... a
On Wed, 2 Jan 2002 00:32, Jason Lim wrote:
> > It's called RAID-1.
>
> I dunno... whenever I think of "RAID" I always think of live mirrors that
> operate constantly and not a "once in a while" mirror operation just to
> perform a backup (when talking about RAID-1). Am I mistaken in this
> thinking
> > Except that I've pointed out already that we're specifically NOT looking
> > at a live RAID solution. This is a backup drive that is supposed to be
> > synced every 12 hours or 24 hours.
>
> Sorry, but I don't see any benefit to having maximum 12 hour old data when
> you could have 0. The hardwa
> > It's called RAID-1.
>
> I dunno... whenever I think of "RAID" I always think of live mirrors that
> operate constantly
That's what they do post-sync.
> and not a "once in a while" mirror operation just to
> perform a backup (when talking about RAID-1). Am I mistaken in this
> thinking?
Th
> > Except that I've pointed out already that we're specifically NOT looking
> > at a live RAID solution. This is a backup drive that is supposed to be
> > synced every 12 hours or 24 hours.
> >
> > The idea being that if there is a virus, a cracker, or hardware
> > malfunction
>
> And if you discov
> Except that I've pointed out already that we're specifically NOT looking
> at a live RAID solution. This is a backup drive that is supposed to be
> synced every 12 hours or 24 hours.
Sorry, but I don't see any benefit to having maximum 12 hour old data when
you could have 0. The hardware soluti
> > I know of a few hardware solutions that do something like this, but would
> > like to do this in hardware. They claim to perform a "mirror" of one HD to
> > another HD while the system is live and in use.
>
> It's called RAID-1.
I dunno... whenever I think of "RAID" I always think of live mirr
On Tue, 1 Jan 2002 23:40, Jason Lim wrote:
> Except that I've pointed out already that we're specifically NOT looking
> at a live RAID solution. This is a backup drive that is supposed to be
> synced every 12 hours or 24 hours.
>
> The idea being that if there is a virus, a cracker, or hardware
> ma
On Tue, 1 Jan 2002 22:49, Jason Lim wrote:
> Right now one of the things we are testing is:
> 1) mount up the "backup" hard disk
> 2) cp -a /home/* /mnt/backup/home/
> 3) umount "backup" hard disk
>
> The way we do it right now is:
> 1) a backup server with a few 60Gb HDs
> 2) use "dump" to cp the
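Steps 1-3 above can be sketched as a small function; note that cp -a /home/* misses top-level dot-files, so copying "$src/." is safer. The mount/umount lines are commented out so the demo below runs on throwaway directories:

```shell
#!/bin/sh
# Mirror one tree into another, preserving permissions/owners/times.
set -e
mirror() {
    src=$1; dst=$2
    # mount /mnt/backup        # step 1 on a real system
    mkdir -p "$dst"
    cp -a "$src/." "$dst/"     # step 2; "/." also catches dot-files
    # umount /mnt/backup       # step 3
}

# self-contained demo on temporary directories
demo_src=$(mktemp -d)
demo_dst=$(mktemp -d)/home
echo data > "$demo_src/.profile"
chmod 640 "$demo_src/.profile"
mirror "$demo_src" "$demo_dst"
```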
> > For example, http://www.arcoide.com/ . To quote the function we're looking
> > at " the DupliDisk2 automatically switches to the remaining drive and
> > alerts the user that a drive has failed. Then, depending on the model, the
> > user can hot-swap out the failed drive and re-mirror in the bac
> For example, http://www.arcoide.com/ . To quote the function we're looking
> at " the DupliDisk2 automatically switches to the remaining drive and
> alerts the user that a drive has failed. Then, depending on the model, the
> user can hot-swap out the failed drive and re-mirror in the backgroun
> On Tue, 1 Jan 2002 07:28, Jason Lim wrote:
> > What do you think would be the best way to duplicate a HD to another
> > (similar sized) HD?
> >
> > I'm thinking that a live RAID solution isn't the best option, as (for
> > example) if crackers got in and fiddled with the system, all the HDs woul
On Tue, 1 Jan 2002 21:06, Jeff Waugh wrote:
>
>
> > I've just done some tests on that with 33G partitions of 46G IDE drives.
> > The drives are on different IDE buses, and the CPU is an Athlon 800.
> >
> > So it seems to me that page size is probably a good buffer size to use.
>
> Cool! Nothing li
> I've just done some tests on that with 33G partitions of 46G IDE drives.
> The drives are on different IDE buses, and the CPU is an Athlon 800.
>
> So it seems to me that page size is probably a good buffer size to use.
Cool! Nothing like Real Proper Testing to prove a point. ;)
I'm surprise
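The conclusion above (page size, i.e. 4096 bytes on x86, is a good buffer size) can be tried on ordinary files; on a real system the source and destination would just be device paths like /dev/hda and /dev/hdc instead. A toy-sized sketch:

```shell
#!/bin/sh
# Copy with dd using a page-sized buffer, then verify the copy.
set -e
src=$(mktemp)
dst=$(mktemp)
head -c 65536 /dev/urandom > "$src"   # stand-in for the source disk

dd if="$src" of="$dst" bs=4096 2>/dev/null

cmp -s "$src" "$dst" && echo "copies identical"
```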
On Tue, 1 Jan 2002 07:28, Jason Lim wrote:
> What do you think would be the best way to duplicate a HD to another
> (similar sized) HD?
>
> I'm thinking that a live RAID solution isn't the best option, as (for
> example) if crackers got in and fiddled with the system, all the HDs would
> end up hav
On Tue, 1 Jan 2002 09:13, Jeff Waugh wrote:
>
>
> > What do you think would be the best way to duplicate a HD to another
> > (similar sized) HD?
>
> dd, using a large buffer size for reasonable performance
I've just done some tests on that with 33G partitions of 46G IDE drives. The
drives are o
At 8:39 Uhr -0500 01.01.2002, Keith Elder wrote:
>This brings up a question. How do you rsync something but keep the
>ownership and permissions the same. I am pulling data off site nightly
>and that works, but the permissions are all screwed up.
I'm using
rsync -aHx --numeric-ids
and then protect th
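Here -H additionally preserves hard links, and --numeric-ids transfers raw uid/gid numbers instead of mapping user/group names, so ownership survives even when the two machines' passwd files differ. As a cron sketch (paths and hostname are hypothetical):

```
# Illustrative nightly pull; must run as root on the receiving side
# so that arbitrary owners can be set.
15 3 * * *  root  rsync -aHx --numeric-ids -e ssh root@fileserver:/home/ /backups/home/
```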
On Wednesday, January 2, 2002, at 12:39 AM, Keith Elder wrote:
> This brings up a question. How do you rsync something but keep the
> ownership and permissions the same. I am pulling data off site nightly
> and that works, but the permissions are all screwed up.
I use rsync -avz as root
You may want
Use cpbk or even better rsync (cpbk is problematic with large
filesystems because it takes much memory to hold the tree info -
rsync does the same with less memory needs). They (allow to) only
copy the changed parts of the fs and keep old versions of altered
files.
chj.
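The "keep old versions of altered files" part corresponds to rsync's --backup/--backup-dir options; an illustrative invocation (paths are examples only):

```
# Files that the sync would overwrite or delete are moved into a dated
# side directory instead of being lost.
rsync -a --delete --backup --backup-dir=/backups/old/$(date +%F) \
      /home/ /backups/current/
```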
> What do you think would be the best way to duplicate a HD to another
> (similar sized) HD?
dd, using a large buffer size for reasonable performance
- Jeff
--
"Linux continues to have almost as much soul as James Brown." - Forrest
Cook, LWN