On Sat, Dec 12, 2009 at 11:39 AM, Brent Jones <br...@servuhome.net> wrote:
> On Sat, Dec 12, 2009 at 7:55 AM, Bob Friesenhahn
> <bfrie...@simple.dallas.tx.us> wrote:
>> On Sat, 12 Dec 2009, Brent Jones wrote:
>>
>>> I've noticed some extreme performance penalties simply by using snv_128
>>
>> Does the 'zpool scrub' rate seem similar to before?  Do you notice any read
>> performance problems?  What happens if you send to /dev/null rather than via
>> ssh?
>>
>> Bob
>> --
>> Bob Friesenhahn
>> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
>> GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
>>
>
> Scrubs on both systems seem to take about the same amount of time (16
> hours on a 48TB pool, with about 20TB used).
>
> I'll test sending to /dev/null tonight.
>
> --
> Brent Jones
> br...@servuhome.net
>

I tested send performance to /dev/null: a 500GB filesystem completed
in just a few minutes.
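For anyone reproducing this, the test was along these lines (pool and
snapshot names here are placeholders; ptime is just one way to time it):

   # pure local send, no network or ssh involved
   ptime zfs send tank/fs@snap > /dev/null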

The two servers are linked over GigE fiber (between two cities).

Iperf output:

[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-60.0 sec  2.06 GBytes    295 Mbits/sec
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-60.0 sec  2.38 GBytes    341 Mbits/sec

It's usually a bit faster, but some other traffic shares that pipe.
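That came from a plain TCP test between the two boxes, roughly this
(hostname is a placeholder):

   # on the receiving host
   iperf -s
   # on the sending host, 60 second run
   iperf -c recv-host -t 60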


Looking at the network traffic between these two hosts during the
send, I do see a lot of it (usually about 100-150Mbit/sec). So traffic
is flowing, but a 100MB send has taken over 10 minutes and still
hasn't completed. At 100Mbit/sec it should take roughly 10 seconds,
not 10 minutes.
There is a little disk activity, maybe 1MB/sec on average and about 30
IOPS. So it seems the hosts are exchanging a lot of data about the
snapshot, but not actually replicating any data for a very long time.
SSH CPU usage is minimal, just a few percent (arcfour, though I tried
other ciphers with no difference).
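Next time I'll try wedging something between send and ssh to see where
the pipeline stalls. Neither pv nor mbuffer is in the base install, and
the dataset/host names below are placeholders:

   # watch live throughput of the stream before it enters ssh
   zfs send tank/fs@snap | pv | ssh recv-host zfs recv -d tank2

   # or take ssh out of the picture entirely with mbuffer:
   # receiver
   mbuffer -s 128k -m 512M -I 9090 | zfs recv -d tank2
   # sender
   zfs send tank/fs@snap | mbuffer -s 128k -m 512M -O recv-host:9090

A 'zpool iostat 1' on both ends during the run should show whether the
disks or the stream itself are the bottleneck.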

Odd behavior, to be sure, and it looks very similar to how snapshot
replication behaved back in build 101, before the significant speed
improvements to snapshot replication landed. I wonder if this is a
major regression caused by changes in the newer ZFS versions, maybe to
accommodate de-dupe?

Sadly, I can't roll back since I already upgraded my pool. I may try
upgrading to snv_129, but my IPS repository doesn't seem to offer the
newer version yet.
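(If anyone has the dev repo working: I'm assuming it's something along
these lines, though the publisher URL may differ for your setup:

   pfexec pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
   pfexec pkg refresh
   pfexec pkg image-update
)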


-- 
Brent Jones
br...@servuhome.net