I have a question about using mixed vdevs in the same zpool and what the 
community's opinion is on the matter.  Here is my setup:

I have four 1TB drives and two 500GB drives.  When I first set up ZFS I was 
under the impression that it does not care much about how you add devices to 
the pool and assumes you have thought things through.  But when I tried to 
create a pool (called group) with the four 1TB disks in a raidz and the two 
500GB disks in a mirror, ZFS complained and said that if I wanted to do it I 
had to add -f (which I assume stands for force).  So was ZFS trying to stop 
me from doing something generally considered bad?
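For reference, this is roughly what I ran (reconstructed from memory, so 
take the exact invocation as an approximation):

  zpool create group raidz c7t0d0 c7t1d0 c8t0d0 c8t1d0 mirror c10d0 c10d1

which, as far as I remember, complained about the mismatched replication 
levels and told me to use -f, so I repeated it as:

  zpool create -f group raidz c7t0d0 c7t1d0 c8t0d0 c8t1d0 mirror c10d0 c10d1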

Some other questions I have; let's assume that this setup isn't that bad (or 
it is that bad, and these questions will show why):

If one 500GB disk (c10dX) in the mirror dies and I choose not to replace it, 
would I be able to migrate the files on the surviving mirror disk over to the 
drives in the raidz configuration, assuming there is space?  Would ZFS inform 
me which files are affected, as it does in other situations?
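(By "other situations" I mean something like the per-file list of permanent 
errors that

  zpool status -v group

prints when it knows which files are damaged.)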

In this configuration, how does Solaris/ZFS determine which vdev to place a 
given write operation's worth of data into?
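(I realize I could watch where writes actually land with something like

  zpool iostat -v group 5

but I am asking about the allocation policy itself, not just the observed 
result.)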

Are there any situations where data would, for some reason, not be protected 
against a single-disk failure?

Would this configuration survive a two-disk failure if the disks are in 
separate vdevs?


jsm...@corax:~# zpool status group
  pool: group
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        group       ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c7t0d0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
            c8t0d0  ONLINE       0     0     0
            c8t1d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c10d0   ONLINE       0     0     0
            c10d1   ONLINE       0     0     0

errors: No known data errors
jsm...@corax:~# zfs list group
NAME    USED  AVAIL  REFER  MOUNTPOINT
group  94.4K  3.12T  23.7K  /group


This isn't for a production environment in some datacenter, but nevertheless 
I would like to keep the data reasonably secure while maximizing total 
storage space.