Hi.

bash-3.00# uname -a
SunOS nfs-14-2.srv 5.10 Generic_125101-03 i86pc i386 i86pc
I created a first zpool (a stripe of 85 disks) and did some simple stress testing - everything seemed almost all right (~700MB/s sequential reads, ~430MB/s sequential writes). Then I destroyed the pool and put an SVM stripe on top of the same disks, taking advantage of the fact that ZFS had already put an EFI label on them, so s0 covers almost the entire disk. On top of the SVM volume I put ZFS, wrote some files with dd, then ran zpool scrub and:

bash-3.00# zpool status test
  pool: test
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub completed with 66 errors on Fri Mar 23 12:52:36 2007
config:

        NAME                STATE     READ WRITE CKSUM
        test                ONLINE       0     0   134
          /dev/md/dsk/d100  ONLINE       0     0   134

errors: 66 data errors, use '-v' for a list
bash-3.00#

The disks are from a Clariion CX3-40 with 15K FC drives, using MPxIO (2x 4Gb links). I was changing the cache watermarks on the array at the time, so now I wonder - is it the array, or SVM+ZFS? I'm a little suspicious about SVM, as I can get only ~80MB/s on average, with short bursts up to ~380MB/s (no matter whether it's ZFS, UFS or the raw device directly), which is much, much less than ZFS alone (and on an x4500 I can get ~2GB/s reads with SVM). There are no errors in the logs and metastat is clean. Of course fmdump -e reports errors from ZFS, but that's expected.

So I destroyed the zpool, created it again, dd'ed from /dev/zero to a file in the pool, and then read the file back - and right away I got CKSUM errors, so it seems repeatable (no watermark fiddling this time). Later I destroyed the pool and the SVM device, created a new pool directly on the same disks, ran the same dd, and this time there were no CKSUM errors and much better performance.
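For clarity, the whole sequence was roughly the following. This is only a sketch - the device names, disk count shown, file size and mount point are illustrative placeholders, not the real CX3-40 LUNs:

```shell
# 1. Plain ZFS stripe straight across the LUNs -- no CKSUM errors here.
#    (Device names are placeholders; the real pool used 85 LUNs.)
zpool create test c6t0d0 c6t1d0 c6t2d0          # ... 85 disks in total
zpool destroy test

# 2. SVM stripe over the s0 slices left behind by the EFI labels,
#    then ZFS on top of the metadevice.
metainit d100 1 85 /dev/dsk/c6t0d0s0 /dev/dsk/c6t1d0s0 -i 256b   # ... all 85 slices
zpool create test /dev/md/dsk/d100

# 3. Write a file, read it back, scrub -- CKSUM errors appear.
dd if=/dev/zero of=/test/bigfile bs=1024k count=10240
dd if=/test/bigfile of=/dev/null bs=1024k
zpool scrub test
zpool status -v test
```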
bash-3.00# metastat -p d100
d100 1 85 /dev/dsk/c6t6006016062231B003CBA35791CD9DB11d0s0 /dev/dsk/c6t6006016062231B0004D599691CD9DB11d0s0 /dev/dsk/c6t6006016062231B00BC373C571CD9DB11d0s0 /dev/dsk/c6t6006016062231B0032CCFE481CD9DB11d0s0 /dev/dsk/c6t6006016062231B0096CB093A1CD9DB11d0s0 /dev/dsk/c6t6006016062231B00D40FEB261CD9DB11d0s0 /dev/dsk/c6t6006016062231B00DC759B171CD9DB11d0s0 /dev/dsk/c6t6006016062231B00D68713071CD9DB11d0s0 /dev/dsk/c6t6006016062231B00CE8F64F71BD9DB11d0s0 /dev/dsk/c6t6006016062231B009005C0E61BD9DB11d0s0 /dev/dsk/c6t6006016062231B00CABCE6D81BD9DB11d0s0 /dev/dsk/c6t6006016062231B00F2B124C91BD9DB11d0s0 /dev/dsk/c6t6006016062231B0004FE5CBA1BD9DB11d0s0 /dev/dsk/c6t6006016062231B0034CFFBAB1BD9DB11d0s0 /dev/dsk/c6t6006016062231B00DCB4349F1BD9DB11d0s0 /dev/dsk/c6t6006016062231B0024C093921BD9DB11d0s0 /dev/dsk/c6t6006016062231B0090F561871BD9DB11d0s0 /dev/dsk/c6t6006016062231B000EB2C0751BD9DB11d0s0 /dev/dsk/c6t6006016062231B008CF5B2671BD9DB11d0s0 /dev/dsk/c6t6006016062231B002A6ED0561BD9DB11d0s0 /dev/dsk/c6t6006016062231B00441DFD4C1BD9DB11d0s0 /dev/dsk/c6t6006016062231B001CF022401BD9DB11d0s0 /dev/dsk/c6t6006016062231B00449925351BD9DB11d0s0 /dev/dsk/c6t6006016062231B00A01632271BD9DB11d0s0 /dev/dsk/c6t6006016062231B00F2344A1C1BD9DB11d0s0 /dev/dsk/c6t6006016062231B0048C112121BD9DB11d0s0 /dev/dsk/c6t6006016062231B004CE643031BD9DB11d0s0 /dev/dsk/c6t6006016062231B004E2E7FF61AD9DB11d0s0 /dev/dsk/c6t6006016062231B008CADB8EB1AD9DB11d0s0 /dev/dsk/c6t6006016062231B00C8C868DF1AD9DB11d0s0 /dev/dsk/c6t6006016062231B009CD37BCF1AD9DB11d0s0 /dev/dsk/c6t6006016062231B00E84C8BC31AD9DB11d0s0 /dev/dsk/c6t6006016062231B0086796DB71AD9DB11d0s0 /dev/dsk/c6t6006016062231B00B2098DA91AD9DB11d0s0 /dev/dsk/c6t6006016062231B00124185971AD9DB11d0s0 /dev/dsk/c6t6006016062231B003E7742871AD9DB11d0s0 /dev/dsk/c6t6006016062231B003C7EFE7A1AD9DB11d0s0 /dev/dsk/c6t6006016062231B00D48C6B711AD9DB11d0s0 /dev/dsk/c6t6006016062231B001C98CA641AD9DB11d0s0 /dev/dsk/c6t6006016062231B0054BE36541AD9DB11d0s0
/dev/dsk/c6t6006016062231B009A650C461AD9DB11d0s0 /dev/dsk/c6t6006016062231B005CBC5D3B1AD9DB11d0s0 /dev/dsk/c6t6006016062231B00201DD62F1AD9DB11d0s0 /dev/dsk/c6t6006016062231B00703483111AD9DB11d0s0 /dev/dsk/c6t6006016062231B00941573031AD9DB11d0s0 /dev/dsk/c6t6006016062231B00862C80F719D9DB11d0s0 /dev/dsk/c6t6006016062231B007E15C7ED19D9DB11d0s0 /dev/dsk/c6t6006016062231B00A07323E419D9DB11d0s0 /dev/dsk/c6t6006016062231B0096F8E0D819D9DB11d0s0 /dev/dsk/c6t6006016062231B00AAD5D3CC19D9DB11d0s0 /dev/dsk/c6t6006016062231B00008FCDC319D9DB11d0s0 /dev/dsk/c6t6006016062231B0000CDE1B719D9DB11d0s0 /dev/dsk/c6t6006016062231B00BC24C8A919D9DB11d0s0 /dev/dsk/c6t6006016062231B008834709E19D9DB11d0s0 /dev/dsk/c6t6006016062231B00BC73BF9019D9DB11d0s0 /dev/dsk/c6t6006016062231B0026B0497919D9DB11d0s0 /dev/dsk/c6t6006016062231B0012E7F56319D9DB11d0s0 /dev/dsk/c6t6006016062231B00BA53C25A19D9DB11d0s0 /dev/dsk/c6t6006016062231B0052622F5119D9DB11d0s0 /dev/dsk/c6t6006016062231B008832394619D9DB11d0s0 /dev/dsk/c6t6006016062231B006AEBE63919D9DB11d0s0 /dev/dsk/c6t6006016062231B002052892F19D9DB11d0s0 /dev/dsk/c6t6006016062231B00B833C52419D9DB11d0s0 /dev/dsk/c6t6006016062231B000A25AC1819D9DB11d0s0 /dev/dsk/c6t6006016062231B00AAB5170E19D9DB11d0s0 /dev/dsk/c6t6006016062231B00D0C5B0D018D9DB11d0s0 /dev/dsk/c6t6006016062231B00B6DEE7BD18D9DB11d0s0 /dev/dsk/c6t6006016062231B000A3458B318D9DB11d0s0 /dev/dsk/c6t6006016062231B0064A73DA618D9DB11d0s0 /dev/dsk/c6t6006016062231B000465DA9C18D9DB11d0s0 /dev/dsk/c6t6006016062231B00E45F9C8C18D9DB11d0s0 /dev/dsk/c6t6006016062231B003651838018D9DB11d0s0 /dev/dsk/c6t6006016062231B00D6E1EE7518D9DB11d0s0 /dev/dsk/c6t6006016062231B00148E596C18D9DB11d0s0 /dev/dsk/c6t6006016062231B0070BF2A6318D9DB11d0s0 /dev/dsk/c6t6006016062231B00A4D1485418D9DB11d0s0 /dev/dsk/c6t6006016062231B00E839171618D9DB11d0s0 /dev/dsk/c6t6006016062231B008666F90918D9DB11d0s0 /dev/dsk/c6t6006016062231B005A25D1FE17D9DB11d0s0 /dev/dsk/c6t6006016062231B00E2F7A4DA17D9DB11d0s0 
/dev/dsk/c6t6006016062231B00DCEA12D017D9DB11d0s0 /dev/dsk/c6t6006016062231B003C3032C517D9DB11d0s0 /dev/dsk/c6t6006016062231B00C497C0AB17D9DB11d0s0 /dev/dsk/c6t6006016062231B001A70C49C17D9DB11d0s0 /dev/dsk/c6t6006016062231B000ABBBE8517D9DB11d0s0 -i 256b
bash-3.00#

This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
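One detail from the metastat output worth noting: `-i 256b` is an interleave of 256 disk blocks of 512 bytes each, i.e. 128KB per disk, which happens to match the default ZFS recordsize. A quick check of the arithmetic:

```shell
# SVM interleave of 256 512-byte blocks, expressed in bytes:
echo $((256 * 512))   # 131072 bytes = 128KB, the default ZFS recordsize
```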