Hi all :)

I've been wanting to make the switch from XFS over RAID5 to ZFS/RAIDZ2 for some 
time now, ever since I first read about ZFS. Absolutely amazing beast!

I've built my own little hobby server at home, with a boatload of disks of 
different sizes that I've combined into a RAID5 array on Linux using mdadm in 
two layers: the first layer is linear ("JBOD") mdadm devices that pool smaller 
disks together to match the size of the largest disks, and on top of that sits 
a RAID5 layer that joins everything into one big block device.

A simplified example:
  - a 2TB disk (raw device)
  - a 2TB JBOD mdadm device built from two 1TB raw devices
  - a 2TB JBOD mdadm device built from four 500GB raw devices
These three 2TB devices (a mix of physical and logical) then form the final 
RAID5 mdadm device.
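In mdadm terms, the current setup looks roughly like this (device names made 
up, of course):

  # linear ("JBOD") array from the two 1TB disks
  mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sdb /dev/sdc
  # linear array from the four 500GB disks
  mdadm --create /dev/md1 --level=linear --raid-devices=4 /dev/sdd /dev/sde /dev/sdf /dev/sdg
  # RAID5 over the raw 2TB disk plus the two 2TB linear arrays
  mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sda /dev/md0 /dev/md1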

So, migrating to ZFS, I first looked at doing the same thing logically, minus 
the intermediate JBOD layer; that is, I thought it'd be nice if ZFS could 
handle that part itself, i.e. build intermediate vdevs from the smaller disks 
and then use those inside the final vdev. As I found out, though, nesting 
vdevs like that isn't possible.

I've come down to two choices:
1) Use SVM to create the intermediate logical 2TB devices from the smaller raw 
devices, then create a RAIDZ2 vdev from the mix of physical and logical 
devices and build the zpool on that.
2) Divide all disks larger than 500GB into 500GB slices, then create 4 
individual RAIDZ2 vdevs directly from those slices (plus the 500GB disks as 
whole devices) and combine them into the final zpool, eliminating the need for 
SVM and maintaining portability between Linux and Solaris based systems (a 
rough sketch of the command follows below).
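To be concrete, option 2 would end up as something like this (made-up 
Solaris-style device names: c1t0d0 is the 2TB disk sliced in four, c1t1d0 and 
c1t2d0 are the 1TB disks sliced in two, and c1t3d0 through c1t6d0 are the 
500GB disks; I haven't tried this yet, so take the exact invocation as a 
sketch):

  zpool create tank \
      raidz2 c1t0d0s0 c1t1d0s0 c1t3d0s0 \
      raidz2 c1t0d0s1 c1t1d0s1 c1t4d0s0 \
      raidz2 c1t0d0s2 c1t2d0s0 c1t5d0s0 \
      raidz2 c1t0d0s3 c1t2d0s1 c1t6d0s0

(The simplified example only yields three devices per vdev, which would make 
for rather pointless RAIDZ2 vdevs with one data device each; the real boatload 
has more disks, but the shape of the command is the same.)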

I really prefer the second choice. I do realize this isn't best practice, but 
looking at the commonly mentioned drawbacks: I really don't mind the extra 
maintenance (it's my hobby ;) ), I can live with ZFS not enabling the disks' 
write cache when it's given slices instead of whole disks, and the warning 
about UFS and ZFS living on the same drive wouldn't apply here anyway, since 
all the slices would be ZFS.

What I'm concerned about, however, is that with this setup there'd be 4 RAIDZ2 
vdevs, and the 2TB disk would be part of all of them, each 1TB disk would be 
part of two of them, while each 500GB disk would only be part of one.

The final question, then (sorry for the long-winded buildup ;) ), is: when ZFS 
pools together these four vdevs, will it be able to detect that they partly 
live on the same disks, and act accordingly? By "accordingly" I mean this: if 
ZFS simply reasons "hey, there are four vdevs here, better distribute reads 
and writes across them as much as possible to maximize throughput and response 
time", that is exactly right whenever the vdevs sit on separate hardware. But 
here the opposite is the case: all four vdevs are (partly) on the one 2TB 
drive, so that same strategy would make the 2TB drive suffer heavy head 
thrashing as ZFS distributes accesses to four slices of the same disk 
simultaneously.

In this particular case, the better approach would be to compound the four 
vdevs "JBOD style" (fill them one after another) rather than "RAID style" 
(stripe across all of them).

Does anyone have enough insight into the inner workings of ZFS to help me 
answer this question?

Thanks in advance,
Daniel :)