On Wed, Dec 16, 2009 at 8:19 AM, Brandon High <bh...@freaks.com> wrote:
> On Wed, Dec 16, 2009 at 8:05 AM, Bob Friesenhahn
> <bfrie...@simple.dallas.tx.us> wrote:
>> In his case 'zfs send' to /dev/null was still quite fast and the network
>> was also quite fast (when tested with benchmark software). The implication
>> is that ssh network transfer performance may have dropped with the update.
>
> zfs send appears to be fast still, but receive is slow.
>
> I tried a pipe from the send to the receive, as well as using mbuffer
> with a 100MB buffer; both wrote at ~12 MB/s.
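For reference, the kind of pipelines being compared there would look roughly like the following. The pool, dataset, host and port names are just placeholders, and the mbuffer options are my assumptions, not necessarily the invocations that were actually used:

    # straight pipe over ssh
    zfs send tank/fs@snap | ssh recv-host zfs recv -F backup/fs

    # same stream, but through mbuffer with a 100MB buffer
    # receiver:
    mbuffer -I 9090 -m 100M | zfs recv -F backup/fs
    # sender:
    zfs send tank/fs@snap | mbuffer -O recv-host:9090 -m 100M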
I did a little bit of testing today. I'm sending from a snv_129 system, using a 2.31GB filesystem to test.

The sender has 8GB of DDR2-800 memory and an Athlon X2 4850e CPU. It's using 8x WD Green 5400rpm 1TB drives on a PCI-X controller, in a raidz2.

The receiver has 2GB of DDR2-533 memory and an Atom 330 CPU. It's using 2 Hitachi 7200rpm 1TB drives in a non-redundant zpool. I destroyed and recreated the zpool on the receiver between tests.

Doing a send to /dev/null completes in under a second, since the entire dataset can be cached. (Rough command lines for the tests below are sketched in the P.S.)

Sending across the network to a snv_118 system via netcat, then to /dev/null took 45.496s and 40.384s.
Sending across the network to a snv_118 system via netcat, then to /tank/test took 45.496s and 40.384s.
Sending across the network via netcat and recv'ing on a snv_118 system took 101s and 97s.

I rebooted the receiver to a snv_128a BE and did the same tests.

Sending across the network to a snv_128a system via netcat, then to /dev/null took 43.067s.
Sending across the network via netcat and recv'ing on a snv_128a system took 98s with dedup=off.
Sending across the network via netcat and recv'ing on a snv_128a system took 121s with dedup=on.
Sending across the network via netcat and recv'ing on a snv_128a system took 134s with dedup=verify.

It looks like the receive times didn't change much for a small dataset. The change from fletcher4 to sha256 when enabling dedup is probably responsible for the slowdown.

I suspect that the dataset is too small to run into the performance problems I was seeing. I'll try later with a larger filesystem and see what the numbers look like.

-B

--
Brandon High : bh...@freaks.com
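P.S. For anyone who wants to repeat this, the tests above correspond to command lines roughly like the ones below. The pool, dataset, snapshot, file and port names are placeholders, and nc's listen syntax varies between builds, so treat this as a sketch rather than the exact invocations I ran:

    # local send speed (the dataset fits in cache after the first run)
    time zfs send tank/test@snap > /dev/null

    # netcat across the network, receiver throws the stream away
    # receiver:  nc -l 9090 > /dev/null
    # sender:    time zfs send tank/test@snap | nc recv-host 9090

    # netcat across the network, receiver writes the stream to a file on the pool
    # receiver:  nc -l 9090 > /tank/test/stream
    # sender:    time zfs send tank/test@snap | nc recv-host 9090

    # netcat across the network into an actual zfs recv
    # receiver:  nc -l 9090 | zfs recv -F tank/test
    # sender:    time zfs send tank/test@snap | nc recv-host 9090

    # dedup setting toggled on the receiving pool before each recv test (snv_128a only)
    zfs set dedup=off tank
    zfs set dedup=on tank
    zfs set dedup=verify tank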