So this is the interesting data, right?
1. 3510, RAID-10 using 24 disks from two enclosures, random
optimization, 32KB stripe width, write-back, one LUN
1.1 filebench/varmail for 60s
a. ZFS on top of LUN, atime=off
IO Summary: 490054 ops 8101.6 ops/s, (1246/1247 r/
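As a quick sanity check on the varmail figures quoted above, the reported rate implies an elapsed time just over the nominal 60 s (filebench reports ops/s over the actual measured run time, which typically overshoots the requested duration slightly):

```python
# Figures quoted in the thread: 490054 total ops at 8101.6 ops/s
total_ops = 490054
ops_per_sec = 8101.6

# Implied elapsed time of the run
elapsed = total_ops / ops_per_sec
print(round(elapsed, 2))  # ~60.49 s, i.e. the requested 60 s plus a small overshoot
```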
Hello zfs-discuss,
The server is a v440 running Solaris 10U2 + patches. Each test was repeated at least
two times and two results are posted. The server is connected via a dual-ported FC
card with MPxIO, using FC-AL (DAS).
1. 3510, RAID-10 using 24 disks from two enclosures, random
   optimization, 32KB stripe width, write-back, one LUN
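For reference, a filebench/varmail run like the one described would be driven by something along these lines (interactive filebench syntax; the $dir value is a placeholder for the filesystem under test, not taken from the thread):

```
filebench> load varmail
filebench> set $dir=/tank/test
filebench> run 60
```

The varmail personality simulates a mail-server workload (many small create/append/delete/fsync operations), which is why it is a useful stress test for ZFS metadata and synchronous-write behavior.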