Hi all,

I want to replace a bunch of Apple Xserves with Xraids and HFS+ (brr) with Sun
X4200s with SAS JBODs and ZFS. The application will be the Helios UB+ fileserver
suite.
I installed the latest Solaris 10 on an X4200 with 8 GB of RAM and two Sun SAS
controllers, attached two SAS JBODs with 8 SATA HDDs each, and created a ZFS
pool as a RAID 10 by doing something like the following:

[i]zpool create zfs_raid10_16_disks mirror c3t0d0 c4t0d0 mirror c3t1d0 c4t1d0 
mirror c3t2d0 c4t2d0 mirror c3t3d0 c4t3d0 mirror c3t4d0 c4t4d0 mirror c3t5d0 
c4t5d0 mirror c3t6d0 c4t6d0 mirror c3t7d0 c4t7d0[/i]
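
For completeness, the resulting layout can be double-checked afterwards; zpool
status should list eight two-way mirror vdevs, all ONLINE:

[i]# zpool status zfs_raid10_16_disks
# zpool list zfs_raid10_16_disks[/i]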

Then I set atime=off and ran the following filebench tests:
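
(On ZFS, atime is a dataset property rather than a "noatime" mount option:)

[i]# zfs set atime=off zfs_raid10_16_disks
# zfs get atime zfs_raid10_16_disks[/i]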

[i]
[EMAIL PROTECTED] # ./filebench
filebench> load fileserver
12746: 7.445: FileServer Version 1.14 2005/06/21 21:18:52 personality 
successfully loaded
12746: 7.445: Usage: set $dir=<dir>
12746: 7.445:        set $filesize=<size>    defaults to 131072
12746: 7.445:        set $nfiles=<value>     defaults to 1000
12746: 7.445:        set $nthreads=<value>   defaults to 100
12746: 7.445:        set $meaniosize=<value> defaults to 16384
12746: 7.445:        set $meandirwidth=<size> defaults to 20
12746: 7.445: (sets mean dir width and dir depth is calculated as log (width, 
nfiles)
12746: 7.445:
12746: 7.445:        run runtime (e.g. run 60)
12746: 7.445: syntax error, token expected on line 43
filebench> set $dir=/zfs_raid10_16_disks/test
filebench> run 60
12746: 47.198: Fileset bigfileset: 1000 files, avg dir = 20.0, avg depth = 2.3, 
mbytes=122
12746: 47.218: Removed any existing fileset bigfileset in 1 seconds
12746: 47.218: Creating fileset bigfileset...
12746: 60.222: Preallocated 1000 of 1000 of fileset bigfileset in 14 seconds
12746: 60.222: Creating/pre-allocating files
12746: 60.222: Starting 1 filereader instances
12751: 61.228: Starting 100 filereaderthread threads
12746: 64.228: Running...
12746: 65.238: Run took 1 seconds...
12746: 65.266: Per-Operation Breakdown
statfile1                 988ops/s   0.0mb/s      0.0ms/op       22us/op-cpu
deletefile1               991ops/s   0.0mb/s      0.0ms/op       48us/op-cpu
closefile2                997ops/s   0.0mb/s      0.0ms/op        4us/op-cpu
readfile1                 997ops/s 139.8mb/s      0.2ms/op      175us/op-cpu
openfile2                 997ops/s   0.0mb/s      0.0ms/op       28us/op-cpu
closefile1               1081ops/s   0.0mb/s      0.0ms/op        6us/op-cpu
appendfilerand1           982ops/s  14.9mb/s      0.1ms/op       91us/op-cpu
openfile1                 982ops/s   0.0mb/s      0.0ms/op       27us/op-cpu

12746: 65.266:
IO Summary:       8088 ops 8017.4 ops/s, (997/982 r/w) 155.6mb/s,    508us 
cpu/op,   0.2ms
12746: 65.266: Shutting down processes
filebench>[/i]
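
One caveat on my own numbers: the fileset is only ~122 MB (1000 files at the
131072-byte default), which fits easily into the 8 GB of RAM, so this run is
largely measuring the ARC rather than the disks. Something like the following
(values illustrative) would push the working set past RAM:

[i]filebench> load fileserver
filebench> set $dir=/zfs_raid10_16_disks/test
filebench> set $nfiles=100000
filebench> run 60[/i]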

I expected to see some higher numbers, really...
A simple "time mkfile 16g lala" gave me something like 280 MB/s.
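
For a rough sanity check on what numbers one might expect, here is a
back-of-envelope sketch; the ~70 MB/s per-spindle streaming rate is an
assumption for 7200 rpm SATA disks of this class, not something I measured:

```python
# Back-of-envelope throughput ceilings for a pool of 8 two-way mirror vdevs.
# ASSUMPTION: ~70 MB/s sequential throughput per SATA spindle.
disk_mbps = 70
mirrors = 8
disks = mirrors * 2

# Sequential writes: each block goes to both sides of one mirror,
# so only one disk's worth of bandwidth per vdev counts toward the ceiling.
write_ceiling = mirrors * disk_mbps

# Sequential reads: ZFS can read from either side of a mirror,
# so in the best case all 16 spindles contribute.
read_ceiling = disks * disk_mbps

print(f"write ceiling ~{write_ceiling} MB/s, read ceiling ~{read_ceiling} MB/s")
```

Against ceilings in that range, ~280 MB/s from a single-threaded mkfile leaves
some headroom but is not wildly off.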

Would anyone comment on this?

TIA,
Tom
 
 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
