Larry Lui wrote:
> My definition of a "unified namespace" is to present the end user with one 
> logical mount point composed of the aggregate of all the thumpers.  As a 
> very simple example: 6 thumpers (17TB each), and I want the end user to 
> see one mount point that is 102TB large.
>
> I agree with you that there is some ugliness here.  That's why I'm hoping 
> to get some better suggestions on how to accomplish this.  I looked at 
> Lustre but it seems to be Linux-only.
>   

WIP, see http://wiki.lustre.org/index.php?title=Lustre_OSS/MDS_with_ZFS_DMU
But I'm not convinced this is what you are after, either.

There are a number of people exporting ZFS over iSCSI to hosts running ZFS and
subsequently exporting NFS.  While this may not give the best possible
performance, it should work OK.  I'd suggest working out an expansion plan
first, since it will determine your logical volume size.  AFAIK, little
performance characterization has been done for this setup, and there are
zillions of possible permutations.  Of course, backups will be challenging
until ADM arrives.
    http://opensolaris.org/os/project/adm
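
As a rough sketch of that iSCSI-backed approach (all zvol names, target
addresses, and device names below are made up for illustration; the real
device names will come from iscsiadm/format on your front end):

```shell
# On each thumper: carve the local pool into zvols and share them via iSCSI.
zfs create -V 2T tank/lun0
zfs set shareiscsi=on tank/lun0

# On the Solaris front end: discover the targets, one address per thumper.
iscsiadm modify discovery --sendtargets enable
iscsiadm add discovery-address 192.168.1.101

# Build a raidz2 pool across the imported LUNs
# (c2t*d0 names are placeholders for the iSCSI devices).
zpool create bigpool raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

# Share the resulting pool over NFS as the single mount point.
zfs set sharenfs=on bigpool
```
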
 -- richard

> Thanks for your input.
>
> Richard Elling wrote:
>   
>> Larry Lui wrote:
>>     
>>> Hello,
>>> I have a situation here at the office I would like some advice on.
>>>
>>> I have 6 Sun Fire X4500 (Thumper) servers whose storage I want to 
>>> aggregate into a unified namespace for my client machines.  My plan was to 
>>> export the zpools from each thumper as an iSCSI target to a Solaris 
>>> machine and create a RAIDZ zpool from the iSCSI targets.  I think this 
>>> is what they call RAID plaiding (RAID on RAID).  This Solaris frontend 
>>> machine would then share out this zpool via NFS or CIFS.
>>>   
>>>       
>> What is your operating definition of "unified namespace"?  In my mind,
>> I've been providing a unified namespace for 20+ years -- it is a process
>> rather than a product.  For example, here at Sun, no matter where I login,
>> I get my home directory.
>>
>>     
>>> My question is: what is the best solution for this?  The problem I'm 
>>> facing is how to add additional thumpers, since you cannot expand a 
>>> RAIDZ array.
>>>   
>>>       
>> Don't think of a thumper as a whole disk.  Then expanding a raidz2
>> (preferred) can be accomplished quite easily.  For example, something
>> like 6 thumpers, each providing N iSCSI volumes.  You can add another
>> thumper, move the data around, and end up with 7 thumpers providing
>> data -- online, no downtime.  This will take a good long while to do
>> because you are moving TBytes between thumpers, but it can be done.
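
Concretely, that expansion might be sketched as follows (device names are
hypothetical placeholders for the per-thumper iSCSI LUNs): build the pool
from several raidz2 vdevs rather than one, then grow it online by adding a
vdev from the new thumper, or migrate a LUN with zpool replace.

```shell
# Pool built from two raidz2 vdevs of per-thumper LUNs (placeholder names):
zpool create bigpool \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
    raidz2 c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0

# Grow the pool online by adding another raidz2 vdev when thumper 7 arrives:
zpool add bigpool raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0

# Or migrate data off one LUN by replacing it with one on the new thumper:
zpool replace bigpool c2t0d0 c3t6d0
```
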
>>
>> IMHO, there is some ugliness here.  You might see if pNFS, QFS, or Lustre
>> would better suit the requirements at the "unified namespace" level.
>> -- richard
>>
>>     
>
>   

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
