Constantin Gonzalez wrote:

Hi,

my ZFS pool for my home server is a bit unusual:

  pool: pelotillehue
 state: ONLINE
 scrub: scrub completed with 0 errors on Mon Aug 21 06:10:13 2006
config:

       NAME        STATE     READ WRITE CKSUM
       pelotillehue  ONLINE       0     0     0
         mirror    ONLINE       0     0     0
           c0d1s5  ONLINE       0     0     0
           c1d0s5  ONLINE       0     0     0
         raidz1    ONLINE       0     0     0
           c0d0s3  ONLINE       0     0     0
           c0d1s3  ONLINE       0     0     0
           c1d0s3  ONLINE       0     0     0
           c1d1s3  ONLINE       0     0     0
         raidz1    ONLINE       0     0     0
           c0d1s4  ONLINE       0     0     0
           c1d0s4  ONLINE       0     0     0
           c1d1s4  ONLINE       0     0     0

The reason is simple: I have 4 differently-sized disks (80, 80, 200 and 250 GB.
It's a home server, so I crammed whatever I could find elsewhere into that box
:) ) and my goal was to create the biggest pool possible while retaining some
level of redundancy.

The above config therefore groups the biggest slices that can be created on all
four disks into the 4-disk RAID-Z vdev, then the biggest slices that can be
created on the remaining three disks into the 3-disk RAID-Z, and the two large
slices that remain are mirrored. It's like playing Tetris with disk slices...
But the pool can tolerate 1 broken disk and it gave me maximum storage capacity,
so be it.
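
As a rough back-of-the-envelope check (the exact slice sizes depend on how each
disk is labeled, so the rule below is only approximate):

   # usable capacity per vdev, with S = size of its smallest slice:
   #   raidz1 of N slices   ->  roughly (N-1) * S
   #   2-way mirror         ->  roughly S
   # the pool's usable space is about the sum over the three vdevs
   zpool list pelotillehue   # total pool size (parity space included)
   zfs list pelotillehue     # space actually available to datasets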

This means that we have one pool with 3 vdevs that access up to 3 different
slices on the same physical disk.

Question: Does ZFS consider the underlying physical disks when load-balancing,
or does it only load-balance across vdevs, thereby potentially hitting a single
physical disk with up to 3 parallel requests at once?

ZFS only does dynamic striping across the (top-level) vdevs.
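
You can see this from the outside: zpool iostat reports traffic per top-level
vdev (and per slice), while iostat reports traffic per physical disk, so a disk
that serves several vdevs shows up as the sum of those streams. Something like:

   zpool iostat -v pelotillehue 5   # per-vdev and per-slice I/O statistics
   iostat -xn 5                     # per-physical-disk I/O statistics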

I understand why you set up your pool that way, but ZFS really likes whole disks instead of slices.
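
For example (pool name and devices below are placeholders, not your layout):
when ZFS gets a whole disk rather than a slice, it puts its own EFI label on it
and can enable the disk's write cache, which it won't do for a slice it shares
with other consumers.

   # whole-disk vdevs: no sN suffix on the device names
   zpool create tank mirror c1d0 c1d1

   # slice vdevs (c0d1s5, c1d0s3, ...) work too, but ZFS leaves the
   # write cache alone because it doesn't own the whole disk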

Trying to detect that the devices are really slices, and that other vdevs share the same physical disks, seems overly complicated for the gain achieved.

eric
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
