Hi Albert,

On Tue, 2006-12-05 at 14:16 +0100, Albert Shih wrote:
> Is it possible to configure the server, the high-level RAID array, and
> the pool of my old RAID array so that:
>
> 1/ When the server reads/writes, it does so from the high-level RAID.
> 2/ The server makes a copy of all data from the high-level RAID to the
>    pool of my old array «when it has the time». I want this to be
>    automatic - I don't want to do it with something like rsync.
Using zfs send/recv, you can have one system running ZFS send copies to
other systems running ZFS. It's analogous to using rsync, but should be
a bit quicker.

I don't know of an existing automated way to do this send/recv only when
the sending zpool isn't busy, for some given definition of "busy".
(You're the 2nd person I've heard from in recent days who's asked for
this - Theo asked a similar question at
http://blogs.sun.com/timf/entry/zfs_automatic_snapshot_service_logging#comments
- I wonder if it's a useful RFE for the ZFS automatic snapshot service?)

Here's what I'm thinking: if you know what times the system is likely to
be idle, you can use a cron job to send/receive the data between systems
- would that be sufficient?

Remember that you can send/recv incremental snapshots as well, so every
10 minutes you could take a snapshot of your data and decide whether to
send/recv it (which would reduce the amount of I/O you need to do). If
the system is "busy", you just remember which snapshot you last sent and
record that somewhere. As soon as the system is idle, take another
snapshot and do an incremental send of the difference between it and
your recorded snapshot. This probably isn't elegant, but I think it
would do the job. (There's a rough sketch of what such a cron job could
look like after my sig.)

> What I want to do is make an NFS server with the new high-level RAID
> array holding the primary data, but also use my old low-level RAID
> array for backup (in case I lose the high-level RAID array), and only
> for backup.

Sounds like you really want I/O throttling of send/recv operations as
against "normal" pool operations - I don't know enough to suggest how
this could be implemented, except via brutal pstop/prun hacks on the
"zfs send" process whenever your pool exceeds some given I/O threshold
(a very rough illustration of that hack is below too).

cheers,
			tim

-- 
Tim Foster, Sun Microsystems Inc, Solaris Engineering Ops
http://blogs.sun.com/timf
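P.S. here's the sort of thing I had in mind for that cron job - a rough
sketch only, so the dataset names (tank/data locally, oldtank/data on
the old array), the host name "backup" and the state file are all made
up, and you'd plug in whatever "am I busy?" test suits your setup:

    #!/bin/sh
    # Rough sketch: dataset names, remote host and state file are
    # made up -- substitute your own.
    SRC=tank/data                       # dataset on the new fast pool
    DST=oldtank/data                    # dataset on the old array
    REMOTE=backup                       # host attached to the old array
    STATE=/var/tmp/last-sent-snapshot   # last snapshot we sent

    # put your "busy" test here; if the pool is busy just bail out,
    # the next run's incremental send will pick up the changes anyway
    # is_busy && exit 0

    NOW=backup-`date '+%Y%m%d%H%M'`
    zfs snapshot $SRC@$NOW || exit 1

    if [ -f $STATE ]; then
            # incremental send: just the delta since the last sent snapshot
            zfs send -i $SRC@`cat $STATE` $SRC@$NOW | \
                    ssh $REMOTE zfs receive $DST
    else
            # first run: send the whole dataset
            zfs send $SRC@$NOW | ssh $REMOTE zfs receive $DST
    fi

    # only move the marker forward if the send/recv actually worked
    [ $? -eq 0 ] && echo $NOW > $STATE

Run that from crontab every 10 minutes and it only ever ships the
changes since the last successful send.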
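P.P.S. the pstop/prun hack could be as crude as the loop below - again
only an illustration: the pool name, the 5-second sampling interval and
the 500 write-ops threshold are guesses, and awk'ing zpool iostat like
this falls over as soon as it prints abbreviated values like "1.2K", so
treat it as a starting point at best:

    #!/bin/sh
    POOL=tank
    THRESHOLD=500

    # newest process whose arguments match "zfs send"
    PID=`pgrep -fn "zfs send"` || exit 0

    while kill -0 $PID 2>/dev/null; do
            # the last line of a two-sample zpool iostat run is the
            # 5-second sample; field 5 is write operations per second
            OPS=`zpool iostat $POOL 5 2 | tail -1 | awk '{print $5}'`
            if [ "$OPS" -gt $THRESHOLD ]; then
                    pstop $PID      # pool looks busy: suspend the send
            else
                    prun $PID       # quiet again: let the send carry on
            fi
    done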