Tim Cook wrote:


On Mon, Nov 16, 2009 at 12:09 PM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us <mailto:bfrie...@simple.dallas.tx.us>> wrote:

    On Sun, 15 Nov 2009, Tim Cook wrote:


        Once again I question why you're wasting your time with
        raid-z. You might as well just stripe across all the drives.
        You're taking a performance penalty for a setup that
        essentially has 0 redundancy.  You lose a 500gb drive, you
        lose everything.


    Why do you say that this user will lose everything?  The two
    concatenated/striped devices on the local system are no different
    than if they were concatenated on SAN array and made available as
    one LUN. If one of those two drives fails, then it would have the
    same effect as if one larger drive failed.

    Bob


Can I blame it on too many beers? I was thinking that losing half of one drive, rather than an entire vdev, would just cause "weirdness" in the pool rather than a clean failure. I suppose without experimentation there's no way to really know; in theory, though, ZFS should be able to handle it.

--Tim

Back to the original question: the "concat using SVM" method works OK if the disks you have are all integer multiples of each other (that is, it worked here because he had two 500GB drives to make a 1TB device out of). It certainly seems the best method, both for performance and maximum disk space, that I can think of. However, it won't work well in other cases, e.g. a couple of 250GB drives plus a couple of 1.5TB drives.
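For reference, the SVM concat itself is only a couple of commands. A rough sketch, assuming two hypothetical 500GB slices at c2t0d0s0 and c2t1d0s0 (your controller/target numbers will differ):

```
# SVM needs state database replicas before any metadevice is created
metadb -a -f c2t0d0s7 c2t1d0s7

# d10 = a concat of 2 stripes, each 1 slice wide (i.e. the two 500GB slices end-to-end)
metainit d10 2 1 c2t0d0s0 1 c2t1d0s0

# The resulting ~1TB metadevice can then stand in for a 1TB drive in the raidz:
# zpool create tank raidz c0t0d0 c0t1d0 /dev/md/dsk/d10
```

Note the "2 1 ... 1 ..." form makes a concat; "metainit d10 1 2 c2t0d0s0 c2t1d0s0" would instead make a stripe, which also works here but has different failure/growth characteristics.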

In cases of serious mismatch between the drive sizes, especially when there's no good way to concat smaller drives into a metadevice big enough to match the others, I'd recommend going with multiple zpools, slicing up the bigger drives so they can be RAIDZ'd with the smaller ones "natively".

E.g.

let's say you have 3 250GB drives and 3 1.5TB drives. You could partition the 1.5TB drives into 250GB and 1.25TB partitions, then RAIDZ the 3 250GB drives together with the three 250GB partitions as one zpool, and the three 1.25TB partitions as another zpool.
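A quick back-of-the-envelope check of the usable space, assuming the hypothetical sizes above and the usual single-parity raidz approximation of (n-1) x smallest member:

```shell
# smalltank: 6-way raidz of 250GB members (3 whole drives + 3 slices)
small_gb=$(( (6 - 1) * 250 ))
# largetank: 3-way raidz of 1.25TB (1250GB) slices
large_gb=$(( (3 - 1) * 1250 ))
echo "smalltank ~${small_gb}GB usable, largetank ~${large_gb}GB usable"
# prints: smalltank ~1250GB usable, largetank ~2500GB usable
```

So nearly all of the raw capacity ends up usable, with only one member's worth of parity per pool.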

You'll have some I/O contention if you write to both zpools at once, but it's the best way I can think of to maximize space while still maximizing performance for single-pool I/O.

I think it would be a serious performance mistake to combine the two pools as vdevs in a single pool, though it's perfectly possible.

I.e.
(preferred)
zpool create smalltank raidz c0t0d0 c0t1d0 c0t2d0 c1t0d0s0 c1t1d0s0 c1t2d0s0
zpool create largetank raidz c1t0d0s1 c1t1d0s1 c1t2d0s1

instead of

zpool create supertank raidz c0t0d0 c0t1d0 c0t2d0 c1t0d0s0 c1t1d0s0 c1t2d0s0 raidz c1t0d0s1 c1t1d0s1 c1t2d0s1



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
