This is something I've run into as well across various installs very similar to the one described (PE2950 backed by an MD1000). I find that overall write performance over NFS is absolutely horrible on both 2008.11 and 2009.06. Worse, while iSCSI under 2008.11 is just fine with near wire speeds in most cases, under 2009.06 I can't even format a VMFS volume from ESX without hitting a timeout. Throughput over the iSCSI connection hovers around 64K/s with 1 operation per second.

I'm downgrading my new server back to 2008.11 until I can find a way to ensure decent performance, since this is really a showstopper. In the meantime I've completely given up on NFS as a primary data store; it's strictly for templates, ISO images, and the like, which I copy up via scp since that's literally 10 times faster than NFS.

I have an OpenSolaris 2008.11 server with an MD1000 configured as 7 mirror vdevs. Networking is four GbE ports split into two trunked connections.
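
For context, the pool layout is essentially this (pool name and device names below are placeholders, not the actual controller/target IDs):

# 7 two-way mirror vdevs on the MD1000 (cXtYdZ names are placeholders)
zpool create tank \
    mirror c2t0d0 c2t1d0 \
    mirror c2t2d0 c2t3d0 \
    mirror c2t4d0 c2t5d0 \
    mirror c2t6d0 c2t7d0 \
    mirror c2t8d0 c2t9d0 \
    mirror c2t10d0 c2t11d0 \
    mirror c2t12d0 c2t13d0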

Locally I get 460 MB/s write and 1 GB/s read, so raw disk performance is not a problem. Over iSCSI I get wire speed in both directions on the GbE from ESX and other clients. Over NFS, however, write performance is limited to about 2 MB/s, while read performance is close to wire speed.
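
(Local sequential throughput of that order is easy to check with a plain dd run, something along these lines; the path is just a placeholder for a filesystem on the pool:)

# rough sequential write test (path is a placeholder)
dd if=/dev/zero of=/tank/test/bigfile bs=1M count=10000
# rough sequential read test of the same file
dd if=/tank/test/bigfile of=/dev/null bs=1M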

I'm using a pretty vanilla configuration, with only atime=off and sharenfs=anon=0 set.
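
In other words, just something like this on the shared dataset (tank/nfs here stands in for the real dataset name):

zfs set atime=off tank/nfs
zfs set sharenfs=anon=0 tank/nfs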

I've looked at various tuning guides for NFS with and without ZFS but I haven't found anything that seems to address this type of issue.

Does anyone have tuning tips for this issue, other than adding an SSD as a write log or disabling the ZIL (both sketched below)? From James' experience even those seem to have limited impact.
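
For reference, those two workarounds would look roughly like this (the device name is a placeholder; zil_disable is the old OpenSolaris-era /etc/system tunable, applies system-wide, and needs a reboot):

# add an SSD as a dedicated log (slog) device -- c3t0d0 is a placeholder
zpool add tank log c3t0d0

# disable the ZIL entirely: add this line to /etc/system and reboot
set zfs:zil_disable = 1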

Cheers,

Erik

On 3 Jul 2009, at 08:39, James Lever wrote:

While this was running, I was looking at the output of zpool iostat fastdata 10 to see how it was going and was surprised to see the seemingly low IOPS.

jam...@scalzi:~$ zpool iostat fastdata 10
              capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
fastdata    10.0G  2.02T      0    312    268  3.89M
fastdata    10.0G  2.02T      0    818      0  3.20M
fastdata    10.0G  2.02T      0    811      0  3.17M
fastdata    10.0G  2.02T      0    860      0  3.27M

Strangely, when I added a second SSD as a second slog, it made no difference to the write operations.

I'm not sure where to go from here; these results are appalling (about 3x the time of the old system with 8x 10kRPM spindles), even with two enterprise SSDs as separate log devices.

