Thomas Maier-Komor wrote:
> First start the receive side, then the sender side:
>
> receiver> mbuffer -s 128k -m 200M -I sender:8000 | zfs receive filesystem
>
> sender> zfs send pool/filesystem | mbuffer -s 128k -m 200M -O receiver:8000
>
> Of course, you should adjust the hostnames accordingly, and set the
> mbuffer buffer size to a value that fits your needs (option -m).
>
> BTW: I've just released a new version of mbuffer which defaults to TCP
> buffer size of 1M, which can be adjusted with option --tcpbuffer.
>   

In my experimentation (using my own buffer program), it's the receive-side 
buffering you need. The buffer needs to be large enough to hold about 
5 seconds' worth of data. How much data per second you get depends on 
which part of your system is the limiting factor. In my case, with 
7200 RPM drives (not striped) and a 1Gbit network, the limiting factor 
is the drives, which can easily deliver 50MBytes/sec, so a buffer size 
of 250MBytes works well. With striped disks or 10,000 or 15,000 RPM 
disks, the 1Gbit network might become the limiting factor (at around 
100MBytes/sec).
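
To make that concrete, here's roughly what I'd run with those numbers -- 
the hostnames, snapshot name, and the small sender-side buffer are just 
placeholders borrowed from Thomas's example, not something I've tuned:

receiver> mbuffer -s 128k -m 250M -I sender:8000 | zfs receive pool/filesystem

sender> zfs send pool/filesystem@snap | mbuffer -s 128k -m 10M -O receiver:8000

i.e. 50MBytes/sec x 5 seconds = 250MBytes on the receive side; in my 
tests the sender-side buffer size hardly mattered.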

If the speeds of the disks and the network are miles apart in either 
direction (e.g. if I had used 100Mbit or 10Gbit ethernet), then the 
buffer is considerably less effective -- it's most important when the 
disks and network are delivering the same order of magnitude of performance.
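
For a rough sense of scale (line rates divided by 8, ignoring protocol 
overhead), against my 50MBytes/sec disks:

  100Mbit  ~   12 MBytes/sec  -- buffer just fills and stays full; the network is always the bottleneck
  1Gbit    ~  125 MBytes/sec  -- same ballpark as the disks, so the buffer can soak up stalls on either side
  10Gbit   ~ 1250 MBytes/sec  -- buffer stays nearly empty; the disks are always the bottleneck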

Note that this may not help much with incrementals, as there are long 
periods when zfs send is only pushing data at 1MByte/sec, and no amount 
of network buffering will make a scrap of difference to that. (I presume 
this is when it's looking for changed data to send and is skipping over 
stuff that hasn't changed?)

-- 
Andrew