On 08/10/10 09:12 PM, Andrew Gabriel wrote:
Phil Harman wrote:
On 10 Aug 2010, at 08:49, Ian Collins <i...@ianshome.com> wrote:
On 08/10/10 06:21 PM, Terry Hull wrote:
I want to build a server with 16 x 1TB drives, arranged as two 8-drive
RAIDZ2 vdevs striped together. However, I would like the capability of
adding additional stripes of 2TB drives in the future. Will this be a
problem? I thought I read it is best to keep
the stripes the same width and was planning to do that, but I was
wondering about using drives of different sizes. These drives would
all be in a single pool.
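For concreteness, the layout being described would be built, and later
extended, with something along these lines (pool and device names are
invented):

    # two 8-drive RAIDZ2 vdevs striped into one pool
    zpool create tank \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0

    # later, add another stripe built from 2TB drives
    zpool add tank \
        raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0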
It would work, but you run the risk of the smaller drives becoming
full and all new writes going to the bigger vdev. So while usable,
performance would suffer.
Almost by definition, the 1TB drives are likely to be getting full
when the new drives are added (presumably because of running out of
space).
Performance can only be said to suffer relative to a new pool built
entirely with drives of the same size. Even if he added 8x 2TB drives
in a RAIDZ3 config it is hard to predict what the performance gap
will be (on the one hand: RAIDZ3 vs RAIDZ2, on the other: an empty
group vs an almost full, presumably fragmented, group).
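Either way, the skew is easy to watch for once it starts: zpool iostat
with -v reports allocation and I/O per vdev, so you can see whether
writes are piling onto the newer, emptier group. A rough sketch (pool
name invented):

    # per-vdev capacity and I/O, sampled every 10 seconds
    zpool iostat -v tank 10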
One option would be to add 2TB drives as 5 drive raidz3 vdevs. That
way your vdevs would be approximately the same size and you would
have the optimum redundancy for the 2TB drives.
I think you meant 6, but I don't see a good reason for matching the
group sizes. I'm for RAIDZ3, but I don't see much logic in mixing
groups of 6+2 x 1TB and 3+3 x 2TB in the same pool (in one group I
appear to care most about maximising space, in the other I'm
maximising availability).
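Ignoring metadata and other overheads, the rough usable-space arithmetic
behind the 5-drive vs 6-drive question is:

    8 x 1TB RAIDZ2: (8 - 2) x 1TB ~= 6TB usable
    6 x 2TB RAIDZ3: (6 - 3) x 2TB ~= 6TB usable
    5 x 2TB RAIDZ3: (5 - 3) x 2TB ~= 4TB usable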
Another option - use the new 2TB drives to swap out the existing 1TB
drives.
If you can find another use for the swapped-out drives, this works
well, and avoids ending up with a sprawl of lower-capacity drives as
your pool grows in size. This is what I do at home. The freed-up
drives get used in other systems and for off-site backups. Over the
last 4 years, I've upgraded from 1/4TB to 1/2TB, and am now on 1TB drives.
I have been doing the same.
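For anyone following along, the swap-out route is just repeated
resilvers: replace one drive at a time, and once every member of a vdev
is the larger size the extra capacity becomes available (on builds that
have the autoexpand property it grows automatically). A sketch, with
invented names:

    # let the vdev grow once all its members have been replaced
    zpool set autoexpand=on tank

    # replace one 1TB drive with a 2TB drive, then wait for the resilver
    zpool replace tank c1t0d0 c4t0d0
    zpool status tank     # repeat for the remaining drives once resilvered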
The reason I mentioned performance (and I did mean 6 drives!) is that,
in order to get some space on a budget, I replaced one mirror in a stripe
with bigger drives. The others soon became nearly full and most of the
IO went to the bigger pair, so I lost nearly all the benefit of the
stripe. I have also grown stripes and seen similar issues, and I had to
remove and replace large chunks of data to even things out.
I really think mixing vdev sizes is a bad idea.
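For what it's worth, the only way I know to even things out after the
fact is to rewrite the data so it gets reallocated across all vdevs,
e.g. send/receive a dataset to a new name and destroy the original.
This temporarily needs space for two copies, and the dataset names here
are invented:

    zfs snapshot tank/data@rebalance
    zfs send tank/data@rebalance | zfs receive tank/data_new
    # check the copy, then:
    zfs destroy -r tank/data
    zfs rename tank/data_new tank/data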
--
Ian.
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss