At this point in the conversation, based on what's already been said, I
have 2 recommendations.

If you haven't already, read through the architecture documentation for
Ceph. It will give you a good idea of which capabilities exist and which
don't.

If after reading the architecture documentation, you are still unsure,
don't invest in Ceph. It's a great platform for many people, but it isn't
for every team or problem.

On Mon, May 21, 2018, 9:56 AM Up Safe <upands...@gmail.com> wrote:

> Active-passive doesn't sound like what I want.
> But maybe I misunderstand.
>
> Does RBD mirror replicate both ways?
> And how do I do it with NFS?
>
> Thanks
>
> On Mon, May 21, 2018, 17:42 Paul Emmerich <paul.emmer...@croit.io> wrote:
>
>> For active/passive and async replication with a POSIX filesystem:
>> Maybe two Ceph clusters with RBD mirror and re-exporting the RBD(s) via
>> NFS?
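>>
>> A rough sketch of what that could look like (illustrative only - pool,
>> image and cluster names are placeholders, and this assumes one-way,
>> journal-based mirroring with the rbd-mirror daemon on the receiving site):
>>
>>     # on both clusters: journaling is required for mirroring
>>     rbd feature enable data/vol1 journaling
>>     rbd mirror pool enable data pool
>>
>>     # register each cluster as a peer of the other
>>     rbd --cluster site-b mirror pool peer add data client.site-a@site-a
>>     rbd --cluster site-a mirror pool peer add data client.site-b@site-b
>>
>>     # run the rbd-mirror daemon on the receiving site
>>     # (the systemd unit instance depends on how your keyrings are set up)
>>     systemctl enable --now ceph-rbd-mirror@admin
>>
>>     # on the active site: map the primary image (krbd doesn't support the
>>     # journaling feature, so use rbd-nbd) and re-export it over NFS
>>     rbd-nbd map data/vol1                  # -> /dev/nbd0
>>     mount /dev/nbd0 /export/vol1           # image already has a filesystem
>>     echo '/export/vol1 10.0.0.0/24(rw,sync,no_root_squash)' >> /etc/exports
>>     exportfs -ra
>>
>> Only the primary side of a mirrored image is writable, which is why this
>> stays active/passive.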
>>
>>
>> Paul
>>
>> 2018-05-21 16:33 GMT+02:00 Up Safe <upands...@gmail.com>:
>>
>>> I'll explain.
>>> Right now we have 2 sites (racks), with several dozen servers at each
>>> accessing a NAS (let's call it a NAS, although it's an IBM v7000 Unified
>>> that serves the files via NFS).
>>>
>>> The biggest problem is that it works active-passive, i.e. we always
>>> access one of the storage systems for read/write,
>>> and the other one is replicated once every few hours, so it's more for
>>> backup needs.
>>>
>>> In this setup, once the power goes down at our main site, we're stuck
>>> with files that are somewhat (several hours) out of date,
>>> and we need to remount all of the servers and so on.
>>>
>>> Multi-site Ceph was supposed to solve this problem for us. This way
>>> we would have only local mounts, i.e.
>>> each server would only access the filesystem in its own site.
>>> And if one of the sites goes down - no pain.
>>>
>>> The files are rather small, mostly PDFs and XMLs of 50-300 KB.
>>> The total size is about 25 TB right now.
>>>
>>> We're a low-budget company, so your advice about development is not going
>>> to happen, as we have neither the skills nor the resources for it.
>>> Plus, I want to make this transparent for the devs and everyone - just
>>> an infrastructure replacement that will buy me all of the Ceph benefits and
>>> allow the company to survive power outages or storage crashes.
>>>
>>>
>>>
>>> On Mon, May 21, 2018 at 5:12 PM, David Turner <drakonst...@gmail.com>
>>> wrote:
>>>
>>>> Not a lot of people use object storage multi-site.  I doubt anyone is
>>>> using it the way you're planning to.  In theory it would work, but even if
>>>> somebody has this setup running, it's almost impossible to tell whether it
>>>> would work for your needs and use case.  You really should try it out for
>>>> yourself to see whether it meets your needs.  And if you feel so inclined,
>>>> report back here with how it worked.
>>>>
>>>> If you're asking for advice, why do you need a networked POSIX
>>>> filesystem?  Unless you are using proprietary software with that
>>>> requirement, it's generally lazy coding that requires a mounted filesystem
>>>> like this; you should aim to use object storage directly, without any sort
>>>> of NFS layer.  It's a little more work for the developers, but it is
>>>> drastically simpler to support and manage.
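>>>>
>>>> For example (just a sketch - the endpoint, bucket and file names are
>>>> placeholders), an application talking to RGW directly over S3 is simply:
>>>>
>>>>     # point an S3 client at the RGW endpoint with a user's keys
>>>>     # (credentials come from 'radosgw-admin user create')
>>>>     s3cmd --configure
>>>>
>>>>     # store and fetch documents directly; no mounted filesystem involved
>>>>     s3cmd put invoice.pdf s3://documents/2018/invoice.pdf
>>>>     s3cmd get s3://documents/2018/invoice.pdf /tmp/invoice.pdf
>>>>
>>>> With RGW multi-site, it is these objects that get replicated between zones.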
>>>>
>>>> On Mon, May 21, 2018 at 10:06 AM Up Safe <upands...@gmail.com> wrote:
>>>>
>>>>> guys,
>>>>> please tell me if I'm headed in the right direction.
>>>>> If Ceph object storage can be set up in a multi-site configuration,
>>>>> and I add Ganesha (which, to my understanding, is an "adapter"
>>>>> that serves S3 objects to clients via NFS) -
>>>>> won't this work as active-active?
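>>>>>
>>>>> From what I've read, the Ganesha side would look roughly like this (just
>>>>> a sketch - the user and key values are placeholders), which is why I
>>>>> thought it might work:
>>>>>
>>>>>     # /etc/ganesha/ganesha.conf - export RGW buckets over NFS (FSAL RGW)
>>>>>     EXPORT {
>>>>>         Export_ID = 1;
>>>>>         Path = "/";
>>>>>         Pseudo = "/rgw";
>>>>>         Access_Type = RW;
>>>>>         FSAL {
>>>>>             Name = RGW;
>>>>>             User_Id = "nfs-user";
>>>>>             Access_Key_Id = "ACCESS_KEY";
>>>>>             Secret_Access_Key = "SECRET_KEY";
>>>>>         }
>>>>>     }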
>>>>>
>>>>>
>>>>> Thanks
>>>>>
>>>>> On Mon, May 21, 2018 at 11:48 AM, Up Safe <upands...@gmail.com> wrote:
>>>>>
>>>>>> OK, thanks.
>>>>>> But it seems to me that having pool replicas spread across sites is a
>>>>>> bit too risky performance-wise.
>>>>>> How about Ganesha? Will it work with CephFS in a multi-site setup?
>>>>>>
>>>>>> I was previously reading about RGW with Ganesha, and it was full of
>>>>>> limitations.
>>>>>> With CephFS there is only one, and that one I can live with.
>>>>>>
>>>>>> Will it work?
>>>>>>
>>>>>>
>>>>>> On Mon, May 21, 2018 at 10:57 AM, Adrian Saul <
>>>>>> adrian.s...@tpgtelecom.com.au> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> We run CephFS in a limited fashion in a stretched cluster of about
>>>>>>> 40km with redundant 10G fibre between sites – link latency is in the
>>>>>>> order of 1-2ms.  Performance is reasonable for our usage but is
>>>>>>> noticeably slower than comparable local ceph based RBD shares.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Essentially we just set up the Ceph pools behind CephFS to have
>>>>>>> replicas on each site.  To export it we are simply using Linux kernel
>>>>>>> NFS, and it gets exported from 4 hosts that act as CephFS clients.  Those
>>>>>>> 4 hosts are then put in a DNS record that resolves to all 4 IPs, and we
>>>>>>> then use automount to do automatic mounting and host failover on the NFS
>>>>>>> clients.  Automount takes care of finding the quickest available NFS
>>>>>>> server.
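>>>>>>>
>>>>>>> Roughly, the moving parts look like this (simplified and illustrative -
>>>>>>> the site, host and path names are placeholders): a CRUSH rule that puts
>>>>>>> copies on both sites, plus the NFS export and the autofs map:
>>>>>>>
>>>>>>>     # CRUSH rule (decompiled syntax), assuming 'site-a' and 'site-b'
>>>>>>>     # buckets exist in the hierarchy: two copies per site, pool size 4
>>>>>>>     rule cephfs_stretch {
>>>>>>>         id 1
>>>>>>>         type replicated
>>>>>>>         min_size 2
>>>>>>>         max_size 4
>>>>>>>         step take site-a
>>>>>>>         step chooseleaf firstn 2 type host
>>>>>>>         step emit
>>>>>>>         step take site-b
>>>>>>>         step chooseleaf firstn 2 type host
>>>>>>>         step emit
>>>>>>>     }
>>>>>>>     ceph osd pool set cephfs_data size 4
>>>>>>>     ceph osd pool set cephfs_data crush_rule cephfs_stretch
>>>>>>>
>>>>>>>     # /etc/exports on each NFS head (CephFS kernel-mounted at /cephfs)
>>>>>>>     /cephfs  10.0.0.0/16(rw,sync,no_root_squash)
>>>>>>>
>>>>>>>     # autofs map on the clients: list all four heads and automount
>>>>>>>     # picks the closest one that responds
>>>>>>>     shared  -fstype=nfs  nfs1,nfs2,nfs3,nfs4:/cephfs
>>>>>>>
>>>>>>> Every write then has to be acknowledged across the inter-site link,
>>>>>>> which is where the extra latency comes from.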
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> I stress this is a limited setup that we use for some fairly light
>>>>>>> duty, but we are looking to move things like user home directories onto
>>>>>>> this.  YMMV.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On
>>>>>>> Behalf Of *Up Safe
>>>>>>> *Sent:* Monday, 21 May 2018 5:36 PM
>>>>>>> *To:* David Turner <drakonst...@gmail.com>
>>>>>>> *Cc:* ceph-users <ceph-users@lists.ceph.com>
>>>>>>> *Subject:* Re: [ceph-users] multi site with cephfs
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> can you be a bit more specific?
>>>>>>>
>>>>>>> I need to understand whether this is doable at all.
>>>>>>>
>>>>>>> Other options would be using Ganesha, but I understand it's very
>>>>>>> limited on NFS;
>>>>>>>
>>>>>>> or starting to look at Gluster.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Basically, I need the multi-site option, i.e. active-active
>>>>>>> read-write.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Wed, May 16, 2018 at 5:50 PM, David Turner <drakonst...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> Object storage multi-site is very specific to using object storage.
>>>>>>> It uses the RGW APIs to sync S3 uploads between the sites.  For CephFS
>>>>>>> you might be able to do a sync of the RADOS pools, but I don't think
>>>>>>> that's actually a thing yet.  RBD mirror is also a layer on top of
>>>>>>> things to sync between sites.  Basically I think you need to do
>>>>>>> something on top of the filesystem, as opposed to within Ceph, to sync
>>>>>>> it between sites.
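>>>>>>>
>>>>>>> The crude version of "on top of the filesystem" is just a periodic
>>>>>>> one-way copy, something like this (illustrative only - the paths and
>>>>>>> host name are placeholders):
>>>>>>>
>>>>>>>     # cron job on the active site: push the CephFS tree to the other site
>>>>>>>     rsync -a --delete /mnt/cephfs/ backup-site:/mnt/cephfs/
>>>>>>>
>>>>>>> which is asynchronous and one-way, not the two-way sync you're after.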
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Wed, May 16, 2018 at 9:51 AM Up Safe <upands...@gmail.com> wrote:
>>>>>>>
>>>>>>> But this is not the question here.
>>>>>>>
>>>>>>> The question is whether I can configure multi-site for CephFS.
>>>>>>>
>>>>>>> Will I be able to do so by following the guide for setting up
>>>>>>> multi-site for object storage?
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Wed, May 16, 2018, 16:45 John Hearns <hear...@googlemail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> The answer given at the seminar yesterday was that a practical limit
>>>>>>> was around 60km.
>>>>>>>
>>>>>>> I don't think 100km is that much longer.  I defer to the experts
>>>>>>> here.
>>>>>>>
>>>>>>>
>>>>>>> On 16 May 2018 at 15:24, Up Safe <upands...@gmail.com> wrote:
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> About 100 km.
>>>>>>>
>>>>>>> I have a 2-4ms latency between them.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Leon
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Wed, May 16, 2018, 16:13 John Hearns <hear...@googlemail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> Leon,
>>>>>>>
>>>>>>> I was at a Lenovo/SuSE seminar yesterday and asked a similar
>>>>>>> question regarding separated sites.
>>>>>>>
>>>>>>> How far apart are these two geographical locations?   It does matter.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 16 May 2018 at 15:07, Up Safe <upands...@gmail.com> wrote:
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I'm trying to build a multi-site setup.
>>>>>>>
>>>>>>> But the only guides I've found on the net were about building it
>>>>>>> with object storage or RBD.
>>>>>>>
>>>>>>> What I need is CephFS.
>>>>>>>
>>>>>>> I.e. I need to have two synced file stores at two geographical
>>>>>>> locations.
>>>>>>>
>>>>>>> Is this possible?
>>>>>>>
>>>>>>> Also, if I understand correctly - cephfs is just a component on top
>>>>>>> of the object storage.
>>>>>>>
>>>>>>> Following this logic - it should be possible, right?
>>>>>>>
>>>>>>> Or am I totally off here?
>>>>>>>
>>>>>>> Thanks,
>>>>>>>
>>>>>>> Leon
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>
>>>
>>>
>>
>>
>> --
>> --
>> Paul Emmerich
>>
>> Looking for help with your Ceph cluster? Contact us at https://croit.io
>>
>> croit GmbH
>> Freseniusstr. 31h
>> 81247 München
>> www.croit.io
>> Tel: +49 89 1896585 90
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
