Yes, they are.  I created one volume shared by all the webservers, so
essentially it is acting like a NAS using NFS.  All servers see the same data.
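For reference, a shared kernel mount like the one described above looks roughly like the following on each webserver. The monitor hostnames, mount point, and keyring path are hypothetical placeholders, not details from this thread:

```shell
# Mount CephFS with the kernel client (ceph.ko); run the same command on
# every webserver so they all see the same volume.
# mon1/mon2/mon3 and /etc/ceph/admin.secret are placeholder values.
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret

# Equivalent /etc/fstab line to remount the shared volume at boot:
# mon1:6789,mon2:6789,mon3:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime  0  2
```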

On Tue, Jan 17, 2017 at 12:26 PM, Kingsley Tart <c...@dogwind.com> wrote:

> Hi,
>
> Are these all sharing the same volume?
>
> Cheers,
> Kingsley.
>
> On Tue, 2017-01-17 at 12:19 -0500, Alex Evonosky wrote:
> > For what it's worth, I have been using CephFS shared between six
> > servers (all kernel mounted) with no issues.  Running three monitors
> > and two metadata servers (one as a standby).  This has been running great.
> >
> > On Tue, Jan 17, 2017 at 12:14 PM, Kingsley Tart <c...@dogwind.com>
> > wrote:
> >         On Tue, 2017-01-17 at 13:49 +0100, Loris Cuoghi wrote:
> >         > I think you're confusing CephFS kernel client and RBD kernel
> >         client.
> >         >
> >         > The Linux kernel contains both:
> >         >
> >         > * a module ceph.ko for accessing a CephFS
> >         > * a module rbd.ko for accessing an RBD (Rados Block Device)
> >         >
> >         > You can mount a CephFS using the kernel driver [0], or
> >         using a
> >         > userspace helper for FUSE [1].
> >         >
> >         > [0] http://docs.ceph.com/docs/master/cephfs/kernel/
> >         > [1] http://docs.ceph.com/docs/master/cephfs/fuse/
> >
> >         Hi,
> >
> >         Thanks for your reply.
> >
> >         I specifically didn't want a block device because I would like
> >         to mount
> >         the same volume on multiple machines to share the files, like
> >         you would
> >         with NFS. This is why I thought ceph-fuse would be what I
> >         needed.
> >
> >         --
> >         Cheers,
> >         Kingsley.
> >
> >         _______________________________________________
> >         ceph-users mailing list
> >         ceph-users@lists.ceph.com
> >         http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> >
>
>
