On Sep 29, 2009, at 2:03 AM, Bernd Nies wrote:
Hi,
We have a Sun Storage 7410 running the latest release (which is based
on OpenSolaris). The system uses a hybrid storage pool (23 1 TB
SATA disks in RAIDZ2 and one 18 GB SSD as log device). The ZFS volumes
are exported with NFSv3 over TCP. The NFS mount options are:
rw,bg,vers=3,proto=tcp,hard,intr,rsize=32768,wsize=32768,forcedirectio
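For reference, a client mounts the share roughly like this (the server
name and mount point here are placeholders, not our actual paths):

```shell
# Linux NFS client; hostname and directories are placeholders
mount -t nfs \
  -o rw,bg,vers=3,proto=tcp,hard,intr,rsize=32768,wsize=32768,forcedirectio \
  nfsserver:/export/nightlybuild /share/nightlybuild
```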
We compared that system with our NetApp FAS 3140 and noticed a
significant performance drop when multiple hosts write many small
files in parallel (e.g. a CVS checkout).
With a single writing host, the speed is quite similar on
both systems:
NetApp FAS 3140:
be...@linuxhost:~/tmp> time cvs -Q checkout myBigProject
real 0m32.914s
user 0m1.568s
sys 0m3.060s
Sun Storage 7410:
be...@linuxhost:/share/nightlybuild/tmp> time cvs -Q checkout myBigProject
real 0m34.049s
user 0m1.592s
sys 0m3.184s
Doing the same operation from 5 different hosts on the same NFS share,
each in its own directory, we see a slowdown roughly proportional to
the number of writing hosts (about 5x slower), while the same
operation on the NetApp FAS 3140 is less than 2x slower:
NetApp FAS 3140:
be...@linuxhost:~/tmp/1> time cvs -Q checkout myBigProject
real 0m58.120s
user 0m1.452s
sys 0m2.976s
Sun Storage 7410:
be...@linuxhost:/share/nightlybuild/tmp/1> time cvs -Q checkout myBigProject
real 4m32.747s
user 0m2.296s
sys 0m4.224s
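For anyone who wants to approximate the multi-writer test from a
single client, a sketch like the following runs the same command N
times in parallel (this is only an illustration of the method, not
part of the measurements above; our actual test used 5 separate hosts,
each in its own directory):

```shell
#!/bin/sh
# Run a command N times concurrently and wait for all instances.
run_parallel() {
  n=$1
  shift
  i=1
  while [ "$i" -le "$n" ]; do
    "$@" &              # launch one background instance
    i=$((i + 1))
  done
  wait                  # block until every background job finishes
}

# Example workload; with cvs available this would be something like
#   run_parallel 5 cvs -Q checkout myBigProject
run_parallel 5 sh -c 'echo checkout done'
```

Wrapping the `run_parallel` call in `time` gives the batch elapsed
time to compare against the single-writer numbers.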
During nightly software builds we often run into timeouts (the CVS
timeout is set to 60 minutes), which makes this storage unusable
because NFS writes slow down drastically. The same happens when we run
VMware machines from an ESX server on an NFS pool, and with Oracle
databases on NFS. NetApp and Oracle recommend using NFS as central
storage, but we wanted a less expensive system because it holds only
development and testing data, not highly critical production data.
Still, the slowdown with more than one writing NFS client is
unacceptable.
What might the bottleneck be here? Any ideas? The ZFS log device? Is
more than one ZFS log device required for parallel performance? As
many as there are NFS clients?
bingo! One should suffice.
BTW, not fair comparing a machine with an NVRAM cache to one
without... add an SSD for the log to even things out.
-- richard
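For completeness, attaching a dedicated log device to an existing pool
is a single command; the device name below is a placeholder, not one
of the disks listed later in this thread:

```shell
# Add a dedicated ZIL (log) device to pool-0.
# cXtYd0 is a placeholder for the SSD's actual device name.
zpool add pool-0 log cXtYd0
```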
Best regards,
Bernd
nfsserver# zpool status
  pool: pool-0
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Wed Sep 23 04:27:21 2009
config:

        NAME                                         STATE     READ WRITE CKSUM
        pool-0                                       ONLINE       0     0     0
          raidz2                                     ONLINE       0     0     0
            c3t5000C50014ED4D01d0                    ONLINE       0     0     0
            c3t5000C50014F4EC09d0                    ONLINE       0     0     0
            c3t5000C50014F4EE46d0                    ONLINE       0     0     0
            c3t5000C50014F4F50Ed0                    ONLINE       0     0     0
            c3t5000C50014F4FB64d0                    ONLINE       0     0     0
            c3t5000C50014F50A7Cd0                    ONLINE       0     0     0
            c3t5000C50014F50F57d0                    ONLINE       0     0     0
            c3t5000C50014F52A59d0                    ONLINE       0     0     0
            c3t5000C50014F52D83d0                    ONLINE       0     0     0
            c3t5000C50014F52E0Cd0                    ONLINE       0     0     0
            c3t5000C50014F52F9Bd0                    ONLINE       0     0     0
          raidz2                                     ONLINE       0     0     0
            c3t5000C50014F54EB1d0                    ONLINE       0     0     0  254K resilvered
            c3t5000C50014F54FC9d0                    ONLINE       0     0     0  264K resilvered
            c3t5000C50014F512E3d0                    ONLINE       0     0     0  264K resilvered
            c3t5000C50014F515C9d0                    ONLINE       0     0     0  262K resilvered
            c3t5000C50014F549EAd0                    ONLINE       0     0     0  262K resilvered
            c3t5000C50014F553EBd0                    ONLINE       0     0     0  262K resilvered
            c3t5000C50014F5072Cd0                    ONLINE       0     0     0  279K resilvered
            c3t5000C50014F5192Bd0                    ONLINE       0     0     0  4.60M resilvered
            c3t5000C50014F5494Bd0                    ONLINE       0     0     0  258K resilvered
            c3t5000C50014F5500Bd0                    ONLINE       0     0     0  264K resilvered
            c3t5000C50014F51865d0                    ONLINE       0     0     0  248K resilvered
        logs
          c3tATASTECZEUSIOPS018GBYTESSTM0000D905Cd0  ONLINE       0     0     0
        spares
          c3t5000C50014F53925d0                      AVAIL

errors: No known data errors

  pool: system
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        system        ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t1d0s0  ONLINE       0     0     0
            c0t0d0s0  ONLINE       0     0     0

errors: No known data errors
nfsserver# echo | format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <DEFAULT cyl 60798 alt 2 hd 255 sec 63>
/p...@1,0/pci10de,c...@5,1/d...@0,0
1. c0t1d0 <DEFAULT cyl 60798 alt 2 hd 255 sec 63>
/p...@1,0/pci10de,c...@5,1/d...@1,0
2. c3t5000C50014ED4D01d0 <ATA-SEAGATE ST31000N-SU0E-931.51GB>
/scsi_vhci/d...@g5000c50014ed4d01
3. c3t5000C50014F4EC09d0 <ATA-SEAGATE ST31000N-SU0E-931.51GB>
/scsi_vhci/d...@g5000c50014f4ec09
4. c3t5000C50014F4EE46d0 <ATA-SEAGATE ST31000N-SU0E-931.51GB>
/scsi_vhci/d...@g5000c50014f4ee46
5. c3t5000C50014F4F50Ed0 <ATA-SEAGATE ST31000N-SU0E-931.51GB>
/scsi_vhci/d...@g5000c50014f4f50e
6. c3t5000C50014F4FB64d0 <ATA-SEAGATE ST31000N-SU0E-931.51GB>
/scsi_vhci/d...@g5000c50014f4fb64
7. c3t5000C50014F50A7Cd0 <ATA-SEAGATE ST31000N-SU0E-931.51GB>
/scsi_vhci/d...@g5000c50014f50a7c
8. c3t5000C50014F50F57d0 <ATA-SEAGATE ST31000N-SU0E-931.51GB>
/scsi_vhci/d...@g5000c50014f50f57
9. c3t5000C50014F52A59d0 <ATA-SEAGATE ST31000N-SU0E-931.51GB>
/scsi_vhci/d...@g5000c50014f52a59
10. c3t5000C50014F52D83d0 <ATA-SEAGATE ST31000N-SU0E-931.51GB>
/scsi_vhci/d...@g5000c50014f52d83
11. c3t5000C50014F52E0Cd0 <ATA-SEAGATE ST31000N-SU0E-931.51GB>
/scsi_vhci/d...@g5000c50014f52e0c
12. c3t5000C50014F52F9Bd0 <ATA-SEAGATE ST31000N-SU0E-931.51GB>
/scsi_vhci/d...@g5000c50014f52f9b
13. c3t5000C50014F54EB1d0 <ATA-SEAGATE ST31000N-SU0E-931.51GB>
/scsi_vhci/d...@g5000c50014f54eb1
14. c3t5000C50014F54FC9d0 <ATA-SEAGATE ST31000N-SU0E-931.51GB>
/scsi_vhci/d...@g5000c50014f54fc9
15. c3t5000C50014F512E3d0 <ATA-SEAGATE ST31000N-SU0E-931.51GB>
/scsi_vhci/d...@g5000c50014f512e3
16. c3t5000C50014F515C9d0 <ATA-SEAGATE ST31000N-SU0E-931.51GB>
/scsi_vhci/d...@g5000c50014f515c9
17. c3t5000C50014F549EAd0 <ATA-SEAGATE ST31000N-SU0E-931.51GB>
/scsi_vhci/d...@g5000c50014f549ea
18. c3t5000C50014F553EBd0 <ATA-SEAGATE ST31000N-SU0E-931.51GB>
/scsi_vhci/d...@g5000c50014f553eb
19. c3t5000C50014F5072Cd0 <ATA-SEAGATE ST31000N-SU0E-931.51GB>
/scsi_vhci/d...@g5000c50014f5072c
20. c3t5000C50014F5192Bd0 <ATA-SEAGATE ST31000N-SU0E-931.51GB>
/scsi_vhci/d...@g5000c50014f5192b
21. c3t5000C50014F5494Bd0 <ATA-SEAGATE ST31000N-SU0E-931.51GB>
/scsi_vhci/d...@g5000c50014f5494b
22. c3t5000C50014F5500Bd0 <ATA-SEAGATE ST31000N-SU0E-931.51GB>
/scsi_vhci/d...@g5000c50014f5500b
23. c3t5000C50014F51865d0 <ATA-SEAGATE ST31000N-SU0E-931.51GB>
/scsi_vhci/d...@g5000c50014f51865
24. c3t5000C50014F53925d0 <ATA-SEAGATE ST31000N-SU0E-931.51GB>
/scsi_vhci/d...@g5000c50014f53925
25. c3tATASTECZEUSIOPS018GBYTESSTM0000D905Cd0 <ATA-STEC ZeusIOPS-0430-17.00GB>
/scsi_vhci/d...@gatasteczeusiops018gbytesstm0000d905c
Specify disk (enter its number):
--
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss