GFS2 itself can be mounted as an NFS share on the client side; you don't even
need to run NFS underneath.

??????

I have a colleague who told me the same thing, but I showed him that's not true.

If you have a link about this, I would appreciate it.

Thanks
Emmanuel


2013/6/27 Igor Cicimov <icici...@gmail.com>

> GFS2 itself can be mounted as an NFS share on the client side; you don't even
> need to run NFS underneath.
> On 27/06/2013 7:18 AM, "Joel Wirāmu Pauling" <j...@aenertia.net> wrote:
>
>> I successfully run NFSv4 and DRBD in clustered mode.
>>
>> The main thing to do, config-wise, for NFS is to pin the RPC services to
>> specific port numbers (rather than dynamic ones) at startup.
>> Also switch the transport to UDP rather than TCP (this solves session
>> issues during failover); your clients all need to explicitly ensure they
>> are mounting with the udp option.
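On Debian, pinning the RPC ports and forcing UDP mounts might look roughly like the sketch below; the thread doesn't give concrete values, so the port numbers, hostname, and export path here are only placeholders (check your distribution's defaults files for the exact variable names):

```shell
# /etc/modprobe.d/nfs-ports.conf -- pin lockd's ports (example values)
# options lockd nlm_udpport=32768 nlm_tcpport=32768

# /etc/default/nfs-common -- pin statd's ports (example values)
# STATDOPTS="--port 32765 --outgoing-port 32766"

# /etc/default/nfs-kernel-server -- pin mountd's port (example value)
# RPCMOUNTDOPTS="--port 32767"

# On each client, force NFSv3 over UDP when mounting
# (server name and paths are placeholders):
mount -t nfs -o udp,vers=3 nfs-vip.example.com:/export /mnt/export
```

Pinning the ports lets you open identical firewall rules on both cluster nodes, so clients reach the same service endpoints after a failover.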
>>
>> You also need to keep the RPC state files on a clustered
>> filesystem mounted on both nodes (I use GFS2 for this
>> purpose as it's easier).
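A minimal sketch of that setup, assuming a hypothetical GFS2 device and mount point (both names are made up for illustration), is to mount the shared filesystem on both nodes and point statd's state directory at it:

```shell
# Mount the shared GFS2 filesystem on both nodes
# (device and mount point are placeholders)
mount -t gfs2 /dev/vg0/nfsstate /srv/nfs-state

# /etc/default/nfs-common on both nodes -- keep statd's lock-recovery
# state on the shared storage so the surviving node can take over
# STATDOPTS="--state-directory-path /srv/nfs-state/statd"
```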
>>
>> I have heard great things about ceph instead of drbd but haven't tried
>> it myself yet.
>>
>> On 27 June 2013 09:06, Stan Hoeppner <s...@hardwarefreak.com> wrote:
>> > On 6/26/2013 2:54 PM, David Parker wrote:
>> >
>> >> As you both pointed out, it
>> >> would be easier and safer to use a clustered filesystem instead of NFS
>> for
>> >> this project.  I'll check out GlusterFS, it looks like a great option.
>> >
>> > It may be worth clarifying that GlusterFS is not a cluster
>> > filesystem.  It is a distributed filesystem.  There is a significant
>> > difference between clustered and distributed.
>> >
>> > A distributed filesystem such as Gluster is applicable to your needs, as
>> > you can add/remove clients in an ad hoc manner without issue.  A cluster
>> > filesystem is probably not suitable, because you simply can't connect
>> > new nodes in a willy-nilly fashion.  None of OCFS, GFS, GPFS, CXFS, etc.
>> > handle this very well, if at all.  Cluster filesystems require hardware
>> > fencing between nodes, and one doesn't set up hardware fencing willy-nilly.
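For contrast, attaching a new GlusterFS client really is just a mount, with no fencing configuration involved (the server name and volume name below are placeholders):

```shell
# Any new client can attach ad hoc; nothing cluster-wide needs reconfiguring
mount -t glusterfs gluster1.example.com:/myvolume /mnt/gluster
```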
>> >
>> > --
>> > Stan
>> >
>> >
>> > --
>> > To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
>> > with a subject of "unsubscribe". Trouble? Contact
>> listmas...@lists.debian.org
>> > Archive: http://lists.debian.org/51cb57d6.20...@hardwarefreak.com
>> >
>>
>>
>> --
>> To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
>> with a subject of "unsubscribe". Trouble? Contact
>> listmas...@lists.debian.org
>> Archive:
>> http://lists.debian.org/CAKiAkGQZ0K0oZpy=W0G6D8KFgPZapsL90=EPvBygFRb=one...@mail.gmail.com
>>
>>


-- 
this is my life and I live it for as long as God wills
