Vincent Fox wrote:
> So the problem with the zfs send/receive approach is: what if your
> network glitches out during the transfer?

zfs doesn't know.  It depends on how the pipe tolerates breakage.
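
A minimal sketch of the usual pipeline (dataset, snapshot, and host names
here are hypothetical).  If the ssh connection drops mid-stream, the
receive fails and the partially received data is discarded, so the whole
stream has to be resent:

    # hypothetical names throughout
    zfs send tank/home@snap1 | ssh backuphost zfs receive backup/home
    # a dropped connection shows up as a non-zero exit from the pipeline
    echo "exit status: $?"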

> We have these once a day due to some as-yet-undiagnosed switch problem: a
> chop-out of 50 seconds or so, which is enough to trip all our IPMP setups
> and to abort SSH transfers in progress.

See in.mpathd(1M) for details on the failure-detection algorithm and how
to tune FAILURE_DETECTION_TIME.
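
As a sketch, in /etc/default/mpathd (60000 ms is just an example value,
chosen to ride out a ~50-second outage):

    # /etc/default/mpathd -- read by in.mpathd
    # default failure detection time is 10000 ms; raising it above the
    # ~50-second glitch keeps IPMP from failing over during the outage
    FAILURE_DETECTION_TIME=60000

in.mpathd re-reads the file on SIGHUP, so something like
"pkill -HUP in.mpathd" applies the change.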

> Just saying you need error-checking to account for these.  The transfers
> in my testing seemed fairly slow.  I was doing a full send and receive,
> not incremental, of some 400 GB, and it took over 24 hours, at which
> point I lost the connection and gave up on the idea.  Once you were down
> to just incrementals it probably wouldn't be so bad.
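
For what it's worth, one way to get that error-checking is to stage the
stream in a file and verify it before receiving, so a dropped connection
only costs a retry of the copy step.  A rough sketch, with hypothetical
dataset and path names:

    # sender: capture the stream and a checksum of it
    zfs send tank/home@snap1 > /export/stage/home-snap1.zfs
    digest -a md5 /export/stage/home-snap1.zfs > /export/stage/home-snap1.md5

    # copy both files however you like (scp, rsync, ...) and simply
    # rerun the copy if the network drops partway through

    # receiver: verify the stream before applying it
    [ "$(digest -a md5 home-snap1.zfs)" = "$(cat home-snap1.md5)" ] && \
        zfs receive backup/home < home-snap1.zfs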

The next time this happens, could you collect iostat(1M) data from the
sending host?  I prefer something like "iostat -xnzT d 10".  I'd like to
see whether we are actually read-bound on the sending host, which should
be the normal case.
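
For reference, the options in that invocation:

    # -x    extended per-device statistics
    # -n    descriptive device names (cXtYdZ form)
    # -z    omit devices with all-zero activity
    # -T d  prefix each report with a date/time stamp
    # 10    repeat every 10 seconds until interrupted
    iostat -xnzT d 10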
  -- richard