Ok, I'm going to reply to my own question here. After a few hours of thinking, I believe I know what is going on.
I am seeing the initial high network throughput as the 4GB of RAM in the server fills up with data. During that phase I am bound by the speed of the source drive, which tops out at about 40 MB/s -- just what I see as the copy starts. Eventually, the network speed settles down to the write speed of the local pool.

Copying files locally (on and off the pool) shows that the sustained write speeds are, in fact, about 17-20 MB/s. So this brings up a new question: are these speeds typical? For reference, my pool is built from six 1TB drives configured as RAIDZ2, driven by an ICH9(R) in AHCI mode. I am aware that RAIDZ2 performance will always be less than the speed of the individual disks, but this is a bit more than I was expecting: individually, these drives benchmark at around 60-70 MB/s, so I am looking at a fairly substantial penalty for the reliability of RAIDZ2.

I'll CC this message to the CIFS and Networking lists to keep anyone else from wasting time writing a reply, as the appropriate place for this thread is now confirmed to be zfs-discuss.

-g.
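
P.S. For anyone who wants a repeatable number instead of eyeballing a file copy, here is a rough sketch (untested, and the file path and sizes are just placeholders for my setup) of a small Python script that times a sustained sequential write. The idea is simply to write more data than the machine has RAM, so the ARC cannot absorb it all:

  import os
  import time

  # Placeholders: point TEST_FILE at a file on the pool, and make TOTAL_MB
  # comfortably larger than RAM (4GB in my case) so caching can't hide the
  # real disk speed. Zeros are only a fair test if compression is off on
  # the dataset; otherwise use random data.
  TEST_FILE = "/tank/throughput_test.bin"
  CHUNK = b"\0" * (1 << 20)   # 1 MiB per write
  TOTAL_MB = 8192             # 8 GiB total

  start = time.time()
  f = open(TEST_FILE, "wb")
  for _ in range(TOTAL_MB):
      f.write(CHUNK)
  f.flush()
  os.fsync(f.fileno())        # force the data out to the disks before stopping the timer
  f.close()
  elapsed = time.time() - start

  print("wrote %d MiB in %.1f s => %.1f MB/s" % (TOTAL_MB, elapsed, TOTAL_MB / elapsed))
  os.remove(TEST_FILE)

The fsync() at the end matters -- without it, some of the data may still be sitting in RAM when the timer stops, which would inflate the result.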