Hello zfs-discuss,

Server: x4500, 2x Opteron 285 (dual-core), 16GB RAM, 48x 500GB disks

filebench/randomread script, filesize=256GB
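
For reference, a run looks roughly like this with the stock randomread.f
personality (the mountpoint is made up and I'm quoting the parameter
names from memory, so treat them as assumptions):

   filebench> load randomread
   filebench> set $dir=/tank/fb       # hypothetical dataset mountpoint
   filebench> set $filesize=256g      # one 256GB file, well beyond 16GB RAM
   filebench> set $nthreads=8         # varied below: 1, 4, 8, 128
   filebench> run 60                  # run length is arbitrary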

2 disks for the system, 2 disks as hot spares, atime set to off for the
pool, cache_bshift set to 13 (2^13 = 8K), recordsize left at the default.
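
The tuning above translates to something like this (pool name is a
placeholder, and I'm quoting the vdev cache tunable name from memory,
so take it as an assumption):

   # disable access-time updates for the whole pool
   zfs set atime=off tank

   # /etc/system: read 2^13 = 8K per vdev cache I/O instead of the 64K default
   # (assumed tunable name; needs a reboot to take effect)
   set zfs:zfs_vdev_cache_bshift = 13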

Pool notation: "4x raid-z (5 disks) + 4x raid-z (6 disks)" means a single
pool created with 4 raid-z1 groups of 5 disks each plus another 4
raid-z1 groups of 6 disks each.
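
In zpool(1M) terms that first layout would be created roughly like this
(device names are just placeholders for 44 of the Thumper's disks; the
remaining 4 are the system disks and spares):

   zpool create tank \
      raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 \
      raidz c0t5d0 c0t6d0 c0t7d0 c1t0d0 c1t1d0 \
      raidz c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
      raidz c1t7d0 c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
      raidz c2t4d0 c2t5d0 c2t6d0 c2t7d0 c3t0d0 c3t1d0 \
      raidz c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 \
      raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 \
      raidz c4t6d0 c4t7d0 c5t0d0 c5t1d0 c5t2d0 c5t3d0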


1. pool: 4x raid-z (5 disks) + 4x raid-z (6 disks)
   (36 disks of usable space)

   a. nthreads = 1     ~60 ops
   b. nthreads = 4     ~250 ops
   c. nthreads = 8     ~520 ops
   d. nthreads = 128   ~1340 ops

      1340/8 = 167 ops
   
2. pool: 2x raid-z2 (10 disks) + 2x raid-z2 (12 disks)
   (36 disks of usable space)

   a. nthreads = 1      ~50 ops
   b. nthreads = 4      ~190 ops
   c. nthreads = 8      ~360 ops
   d. nthreads = 128    ~720 ops

      720/4 = 180 ops
      
3. pool: 2x raid-z2 (22 disks)
   (40 disks of usable space)
   
   a. nthreads = 1      ~40 ops
   b. nthreads = 4      ~120 ops
   c. nthreads = 8      ~160 ops
   d. nthreads = 128    ~345 ops

      345/2 = 172 ops
   
4. pool: 4x raid-z (11 disks)
   (40 disks of usable space)
   
   a. nthreads = 1      ~50 ops
   b. nthreads = 4      ~190 ops
   c. nthreads = 8      ~340 ops
   d. nthreads = 128    ~710 ops

      710/4 = 177 ops
   
5. pool: 4x raid-z2 (11 disks)
   (36 disks of usable space)

   a. nthreads = 1       ~55 ops
   b. nthreads = 4       ~200 ops
   c. nthreads = 8       ~350 ops
   d. nthreads = 128     ~760 ops

      760/4 = 190 ops

6. pool: 22x mirror (2 disks)
   (22 disks of usable space)

   a. nthreads = 1       ~75 ops
   b. nthreads = 4       ~320 ops
   c. nthreads = 8       ~670 ops
   d. nthreads = 128     ~3900 ops

      3900/22 = 177 ops
      3900/44 = 88 ops  (it's a read test, so both halves of every mirror serve reads)



Well, random reads really do tend to deliver only about 1-2x the IOPS of
a single disk per raid-z[12] group - since every block is spread across
all the disks in a group, each random read keeps the whole group busy.
For some workloads that's really bad. For some workloads I would
definitely prefer much better random read performance in terms of IO/s
and would trade write performance for it. Maybe something like
raid-y[12], behaving more like classical raid-[56]? That way the user
would have a choice: excellent write performance at the cost of random
reads, or just the opposite.

Right now RAID-Z1 and RAID-Z2 are just plain horrible in terms of IOPS
in random-read environments where the cache hit ratio is marginal.
This is especially painful on the x4500.

-- 
Best regards,
 Robert                          mailto:[EMAIL PROTECTED]
                                     http://milek.blogspot.com
