Thanks for the response, Richard. Forgive my ignorance, but the following
questions come to mind as I read your response.

I would then have to create 80 RAIDz(6+1) Volumes, and the process of
creating those Volumes could be scripted (see the rough sketch after the
questions below). But:

1) I would then have to create 80 mount points, one to mount each of these
Volumes (?)

2) I would have no load balancing across mount points, and I would have to
direct files to specific mount points using an algorithm of my own design

3) A file landing on any one mount point would be constrained to the I/O
of the underlying disks, which would represent only 1/80th of the
potential I/O available

4) Expansion of the architecture, by adding another single disk array,
would be difficult and would probably require some form of data
migration (?). For 800 TB of data that would be unacceptable.
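
Roughly the kind of scripted layout I am picturing is sketched below --
the device names are made up, and it assumes one pool (and therefore one
mount point) per RAIDz(6+1) group, which is exactly what worries me:

    #!/bin/sh
    # Rough sketch only -- hypothetical device names c<array>t<vdisk>d<lun>.
    # One pool per RAIDz(6+1) group: 20 virtual disks x 4 LUNs = 80 pools,
    # each taking one LUN from all 7 arrays and each with its own mount point.
    t=0
    while [ $t -lt 20 ]; do
        d=0
        while [ $d -lt 4 ]; do
            zpool create -m /data/vol_${t}_${d} vol_${t}_${d} raidz \
                c1t${t}d${d} c2t${t}d${d} c3t${t}d${d} c4t${t}d${d} \
                c5t${t}d${d} c6t${t}d${d} c7t${t}d${d}
            d=`expr $d + 1`
        done
        t=`expr $t + 1`
    done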



Ted Oatway
Sun Microsystems
206.276.0769 Mobile


-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, September 05, 2006 5:50 PM
To: Oatway, Ted
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Need input on implementing a ZFS layout

Oatway, Ted wrote:
> IHAC that has 560+ LUNs that will be assigned to ZFS Pools and some
> level of protection. The LUNs are provided by seven Sun StorageTek
> FLX380s. Each FLX380 is configured with 20 Virtual Disks. Each Virtual
> Disk presents four Volumes/LUNs.  (4 Volumes x 20 Virtual Disks x 7
> Disk Arrays = 560 LUNs in total)
> 
> We want to protect against all possible scenarios including the loss
> of a Virtual Disk (which would take out four Volumes) and the loss of
> a FLX380 (which would take out 80 Volumes).

This means that your maximum number of columns is N, where N is the number
of whole devices, any one of which you must be able to lose before data
availability is compromised.  In this case, that number is 7 (FLX380s).

> Today the customer has taken some number of LUNs from each of the
> arrays and put them into one ZFS Pool. They then create R5(15+1)
> RAIDz virtual disks (??), manually selecting LUNs to try and get the
> required level of redundancy.

Because your limit is 7, a single-parity solution like RAID-Z would
dictate that the maximum group size should be RAID-Z (6+1).  Incidentally,
you will be happier with 6+1 than with 15+1 in most cases.
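
For example, a minimal sketch (device names are made up) where each
top-level RAID-Z vdev takes one LUN from each of the seven arrays, and
all of the vdevs live in a single pool:

    # Sketch only -- device names are hypothetical.
    # Each raidz group spans all seven FLX380s (6 data + 1 parity).
    zpool create tank \
        raidz c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c6t0d0 c7t0d0 \
        raidz c1t0d1 c2t0d1 c3t0d1 c4t0d1 c5t0d1 c6t0d1 c7t0d1
    # ...continue for the remaining LUNs (80 groups in all).  ZFS
    # stripes across every top-level vdev, so there is one pool, one
    # namespace, and no manual load balancing.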

For 2-way mirrors, you would want to go with rotating pairs, mirroring
half of one FLX380 array against half of another.
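
For example (again a sketch only, with made-up device names), the pairs
might rotate around the arrays like this:

    # Sketch only -- device names are hypothetical.
    # Each mirror pairs a LUN on one array with a LUN on the next, so
    # losing a whole FLX380 degrades some mirrors but breaks none.
    zpool create tank \
        mirror c1t0d0 c2t0d0 \
        mirror c2t0d1 c3t0d0 \
        mirror c3t0d1 c4t0d0
    # ...and so on around all seven arrays, with the last pairs
    # mirroring array 7 back against array 1.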

For RAID-Z2, dual parity, you would implement RAID-Z2(5+2).  In general,
RAID-Z2 would give you the best data availability and data-loss protection
along with relatively good available space.  Caveat: I can't say when
RAID-Z2 will be available in non-Express Solaris versions; I have zero
involvement with Solaris release schedules.
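
A sketch of what that could look like (hypothetical device names, and it
assumes a Solaris build that already has the raidz2 vdev type):

    # Sketch only -- device names are hypothetical.
    # Each raidz2 group takes one LUN from each array (5 data + 2 parity),
    # so any two LUNs in a group -- or any two whole FLX380s -- can be
    # lost without losing data.
    zpool create tank \
        raidz2 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c6t0d0 c7t0d0 \
        raidz2 c1t0d1 c2t0d1 c3t0d1 c4t0d1 c5t0d1 c6t0d1 c7t0d1
    # ...continue for the remaining LUNs.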

More constraints below...

> The issues are:
> 
> 1) This is a management nightmare doing it this way

automate

> 2) It is way too easy to make a mistake and have a RAIDz group
> that is not configured properly

automate

NB: this isn't as difficult to change later with ZFS as it is with some
other LVMs.  As long as the top-level requirements follow a consistent
design, changing the lower-level implementation can be done online later.
Worry about the top-level vdevs, which will be dictated by the number of
FLX380s as shown above.
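
As a sketch of what "automate" could mean here (the c<array>t<vdisk>d<lun>
device naming is purely hypothetical), a small script can build the vdev
list so that every group takes exactly one LUN from each array, removing
the chance of mis-selecting LUNs by hand:

    #!/bin/sh
    # Sketch only -- hypothetical device naming c<array>t<vdisk>d<lun>.
    # Build one "raidz <7 LUNs>" clause per virtual-disk/LUN position,
    # then create a single pool from all 80 groups at once.
    vdevs=""
    t=0
    while [ $t -lt 20 ]; do            # 20 Virtual Disks per array
        d=0
        while [ $d -lt 4 ]; do         # 4 LUNs per Virtual Disk
            vdevs="$vdevs raidz"
            a=1
            while [ $a -le 7 ]; do     # one LUN from each of 7 arrays
                vdevs="$vdevs c${a}t${t}d${d}"
                a=`expr $a + 1`
            done
            d=`expr $d + 1`
        done
        t=`expr $t + 1`
    done
    zpool create tank $vdevs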

> 3) It would be extremely difficult to scale this type of
> architecture if we later added a single FLX380 (6540) to the mix

The only (easy) way to scale by adding a single array, and still retain
the same availability characteristics, is to use mirrors.
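
For example (sketch only, with hypothetical device names where c8* is the
new array, and assuming a spare LUN can be found on an existing array to
pair against), capacity can be added online like this:

    # Sketch only -- c8* is the hypothetical new FLX380.
    # Each new mirror pairs a LUN on the new array with a LUN on an
    # existing array, so no pair lives entirely on one array.
    zpool add tank mirror c8t0d0 c1t0d2
    zpool add tank mirror c8t0d1 c2t0d2
    # Existing pairs can also be re-arranged online with "zpool attach"
    # and "zpool detach" if the rotation needs to change.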

To go further down this line of thought would require the customer
to articulate how they would rank the following requirements:
        + space
        + availability
        + performance
because you will need to trade these off.
  -- richard
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
