> I'm willing to accept slower writes with compression enabled, par for
> the course. Local writes, even with compression enabled, can still
> exceed 500MB/sec, with moderate to high CPU usage.
> These problems seem to have manifested after snv_128, and seemingly
> only affect ZFS receive speeds. Local pool performance is still very
> fast.
Now we're getting somewhere. ;-)

You've tested the source disk (result: fast). You've tested the destination disk without zfs receive (result: fast). Now the only two ingredients left are: ssh performance, or zfs receive performance.

So, to conclusively identify, prove, and measure that zfs receive is the problem, how about this:

    zfs send somefilesystem | ssh somehost 'cat > /dev/null'

If that goes slow, then ssh is the culprit. If that goes fast, and then you change "cat > /dev/null" to "zfs receive" and it goes slow, you've scientifically shown that zfs receive is the bottleneck.

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
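To spell out the whole isolation procedure, here is a sketch of the three-step test. The dataset name tank/fs@snap, the remote host somehost, and the destination dataset tank/copy are all hypothetical placeholders; substitute your own. Piping through dd gives a throughput number on stderr so the steps can be compared:

    # Step 1: zfs send alone, stream discarded locally.
    # Isolates send-side performance; no ssh, no receive involved.
    zfs send tank/fs@snap | dd of=/dev/null bs=1M

    # Step 2: add ssh to the pipeline, still discarding on the far end.
    # Slow here but fast in step 1 => ssh is the culprit.
    zfs send tank/fs@snap | ssh somehost 'cat > /dev/null'

    # Step 3: the full pipeline with zfs receive on the far end.
    # Slow here but fast in step 2 => zfs receive is the culprit.
    zfs send tank/fs@snap | ssh somehost 'zfs receive -F tank/copy'

Each step adds exactly one component to the pipeline, so the first step whose throughput drops points at the guilty component.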