On Wed, Feb 28, 2007 at 11:45:35AM +0100, Roch - PAE wrote:

>  > >  http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6460622

Any estimate on when we'll see a [feature] fix for U3?
Should I open a call to perhaps raise the priority of the fix?

> The bug applies to checksum as well, although the fix now in
> the gate only addresses compression.
> There is a per-pool limit on throughput due to checksum.
> Multiple pools may help.

Yep. However, splitting the disks into several pools also limits the I/O
for a single task to at most #disksOfPool * IOperDisk / 2 (mirrored
writes go to both halves). So multiple pools would make sense to me if
one has a lot of tasks and is able to force them onto a "dedicated"
pool...
So my conclusion is: the more pools, the more aggregate "bandwidth" (if
one is able to distribute the work properly over all disks), but the
less bandwidth for a single task :((
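A quick sanity check of that formula against the test numbers further
below (the ~30 MB/s effective per-disk write rate is only an assumption
that roughly matches my averages, not a datasheet value):

   # mirrored writes hit both disks of a vdev, so usable write
   # bandwidth is roughly #disks * perDiskRate / 2
   echo $(( 24 * 30 / 2 ))   # -> 360 MB/s ceiling for the 24-disk pool
   echo $(( 44 * 30 / 2 ))   # -> 660 MB/s aggregate over both pools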

>  > > This performance feature was fixed in Nevada last week.
>  > > Workaround is to create multiple pools with fewer disks.
>  > 
>  > Does this make sense for mirrors only as well ?

> Yep.

OK, since I can't get more than ~1 GB/s out of the box anyway (only one
PCI-X slot is left for a 10 Gbps NIC), I decided to split into 12 two-way
mirrors + 10 two-way mirrors + 2 shared spares (2m*12 + 2m*10 + s*2, see
below). But this does not raise the single-task write perf limit: it
already dropped to an average of ~345 MB/s :(((((((

>  > >  http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6415647
>  > > 
>  > > is degrading the perf a bit (guesstimate of anywhere up to
>  > > 10-20%).

I would guess even up to 45% ...

> Check out iostat 1 and you will see the '0s': not good.

Yes - I even saw 5 consecutive 0s ... :(
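One way to make the stalls visible (the awk post-filter is just a
convenience here; watching the kw/s column by eye works as well):

   # one-second samples; print any device whose kw/s column is 0
   # for that second - those are the '0s' meant above
   iostat -xn 1 | awk '$11 ~ /^c[0-9]/ && $4 == 0 { print $11 }'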

Here is the layout (incl. "test" results) I actually have in mind for
production:

   1.  pool for big files (sources tarballs, multimedia, iso images):
   ------------------------------------------------------------------
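   # -n: dry run - print the configuration zpool would create,
   # without actually creating anything (same for pool2 below)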
   zpool create -n pool1 \
   mirror c0t0d0 c1t0d0                        mirror c6t0d0 c7t0d0 \
                         mirror c4t1d0 c5t1d0                       \
   mirror c0t2d0 c1t2d0                        mirror c6t2d0 c7t2d0 \
                         mirror c4t3d0 c5t3d0                       \
   mirror c0t4d0 c1t4d0                        mirror c6t4d0 c7t4d0 \
                         mirror c4t5d0 c5t5d0                       \
   mirror c0t6d0 c1t6d0                        mirror c6t6d0 c7t6d0 \
                         mirror c4t7d0 c5t7d0                       \
   spare c4t0d0 c4t4d0

   (2x 256G) write (min/max/avg in MB/s): 0 674 343.7


   2. pool for mixed stuff (homes, apps):
   --------------------------------------
   zpool create -n pool2 \
                                                                    \
   mirror c0t1d0 c1t1d0                        mirror c6t1d0 c7t1d0 \
                         mirror c4t2d0 c5t2d0                       \
   mirror c0t3d0 c1t3d0                        mirror c6t3d0 c7t3d0 \
                                                                    \
   mirror c0t5d0 c1t5d0                        mirror c6t5d0 c7t5d0 \
                         mirror c4t6d0 c5t6d0                       \
   mirror c0t7d0 c1t7d0                        mirror c6t7d0 c7t7d0 \
   spare c4t0d0 c4t4d0

   (2x 256G) write (min/max/avg in MB/s): 0 600 386.0

  1. + 2. (2x 256G) write (min/max/avg in MB/s): 0 1440 637.9

  1. + 2. (4x 128G) write (min/max/avg in MB/s): 3.5 1268 709.5 (381+328.5)
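(Per-pool write bandwidth like the above can be sampled with zpool
iostat; deriving min/max/avg from the one-second samples is a trivial
script, so only the sampling step is shown:)

   # sample the write bandwidth of both pools once per second
   zpool iostat pool1 pool2 1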

Regards,
jel.
-- 
Otto-von-Guericke University     http://www.cs.uni-magdeburg.de/
Department of Computer Science   Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany         Tel: +49 391 67 12768