Hi all,

I've uploaded it to GitHub - https://github.com/waipeng/nfsceph. The
standard disclaimer applies. :)

Actually, #3 is a novel idea - I had not thought of it. Comparing the two
off the top of my head, though, #3 will have:

1) more overhead (because of the additional VM)

2) no way to grow once you reach the hard limit of 14 TB; and if you have
multiple such machines, fragmentation becomes a problem

3) the risk that corruption of the 14 TB partition wipes out all your
shares

4) harder HA. Although I have not worked HA into NFSCEPH yet, it should be
doable by running DRBD over the NFS data directory's backing device, or
with any of the other techniques people use for redundant NFS servers.
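For what it's worth, the DRBD idea could be sketched roughly as below. All
of the names here (resource name, hostnames, addresses, device paths,
export path) are hypothetical, just for illustration - DRBD replicates the
gateway's local backing device, and whichever node is primary exports the
directory as usual:

  # /etc/drbd.d/nfsdata.res (hypothetical resource definition)
  resource nfsdata {
    protocol C;                    # synchronous replication
    device    /dev/drbd0;
    disk      /dev/vg0/nfsdata;    # local backing LV (assumed path)
    meta-disk internal;
    on nfs-gw1 { address 10.0.0.1:7789; }
    on nfs-gw2 { address 10.0.0.2:7789; }
  }

  # /etc/exports on the active node (assumed export path and subnet)
  /export/nfsdata  10.0.0.0/24(rw,sync,no_subtree_check)

Failover would then be a matter of promoting the secondary (drbdadm
primary nfsdata), mounting the device, and restarting the NFS server -
typically driven by Pacemaker or a similar cluster manager rather than by
hand.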

- WP


On Fri, Nov 15, 2013 at 10:26 PM, Gautam Saxena <gsax...@i-a-inc.com> wrote:

> Yip,
>
> I went to the link. Where can the script (nfsceph) be downloaded? How's
> the robustness and performance of this technique? (That is, is there any
> reason to believe that it would be more/less robust and/or performant than
> option #3 mentioned in the original thread?)
>
>
> On Fri, Nov 15, 2013 at 1:57 AM, YIP Wai Peng <yi...@comp.nus.edu.sg> wrote:
>
>> On Fri, Nov 15, 2013 at 12:08 AM, Gautam Saxena <gsax...@i-a-inc.com> wrote:
>>
>>>
>>> 1) nfs over rbd (
>>> http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/)
>>>
>>
>> We are now running this - basically an intermediate/gateway node that
>> mounts Ceph RBD images and exports them over NFS.
>> http://waipeng.wordpress.com/2013/11/12/nfsceph/
>>
>> - WP
>>
>
>
>
> --
> *Gautam Saxena *
> President & CEO
> Integrated Analysis Inc.
>
> Making Sense of Data.™
> Biomarker Discovery Software | Bioinformatics Services | Data Warehouse
> Consulting | Data Migration Consulting
> www.i-a-inc.com  <http://www.i-a-inc.com/>
> gsax...@i-a-inc.com
> (301) 760-3077  office
> (240) 479-4272  direct
> (301) 560-3463  fax
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
