On Thu, 21 Apr 2011 18:41:25 EDT erik quanstrom <quans...@labs.coraid.com>  
wrote:
> > IIRC companies such as Panasas separate file names and other
> > metadata from file storage. One way to get a single FS
> > namespace that spans multiple disks or nodes for increasing
> > data redundancy, file size beyond the largest disk size,
> > throughput (and yes, complexity).
> 
> that certainly does seem like the hard way to do things.
> why should the structure of the data depend on where it's
> located?  certainly ken's fs doesn't change the format of
> the worm if you concatenate several devices for the worm
> or use just one.

?

It all boils down to having to cope with individual units'
limits and failures.

If a file needs to be larger than the capacity of the largest
disk, you stripe data across multiple disks.  To handle disk
failures you use mirroring or parity across multiple disks.
To increase performance beyond what a single controller can
do, you add multiple disk controllers.  When you want higher
capacity and throughput than is possible on a single node, you
use a set of nodes, and stripe data across them. To handle a
single node failure you mirror data across multiple nodes. To
support more lookups and metadata operations, you store
metadata on separate nodes from file data, since lookups and
metadata updates have a different access pattern from file
data access. To handle more concurrent access you add more
network bandwidth and balance it across nodes.
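The block-placement arithmetic behind striping and parity can be
sketched roughly as below. This is a toy illustration of the general
idea, not the layout of any particular system; the disk counts and
the parity-rotation scheme are assumptions:

```python
# Toy sketch (hypothetical layout, not any real FS):
# map a logical block number to a (disk, physical block) pair.

def stripe(lblock: int, ndisks: int) -> tuple[int, int]:
    """RAID-0-style striping: logical block i lives on disk
    i % ndisks, at physical block i // ndisks on that disk."""
    return lblock % ndisks, lblock // ndisks

def raid5_layout(lblock: int, ndisks: int) -> tuple[int, int, int]:
    """RAID-5-style layout: each stripe holds ndisks-1 data blocks
    plus one parity block, with the parity disk rotated per stripe
    so no single disk absorbs all parity writes."""
    stripe_no, pos = divmod(lblock, ndisks - 1)
    parity_disk = stripe_no % ndisks
    # data disks fill the slots after the parity disk, wrapping around
    data_disk = (parity_disk + 1 + pos) % ndisks
    return data_disk, parity_disk, stripe_no
```

Mirroring across nodes is the same mapping applied twice with
independent targets; losing one disk then costs a parity
reconstruction (or a mirror read) rather than the file.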

From an administrative point of view a single global namespace
is much easier to manage. One should be able to add or replace
individual units (disks, nodes, network capacity) quickly as
and when needed without taking the FS down, to reduce
administrative costs and avoid any downtime. Then you have
to worry about backups (on- and offsite). In such a complex
system, the concept of a single `volume' doesn't work well.
In any case, users don't care about what data layout is used
as long as the system can grow to fill their needs.
