On Thu, Oct 4, 2018 at 11:15 AM Vikas Rana <vikasra...@gmail.com> wrote:
>
> Bummer.
>
> Our OSDs are on a 10G private network and the MONs are on a 1G public
> network. I believe this is the reference architecture mentioned everywhere
> for separating MON and OSD traffic.
>
> I believe the requirement that rbd-mirror on the secondary site be able to
> reach the private OSD IPs on the primary was never mentioned anywhere, or
> maybe I missed it.
>
> Looks like if rbd-mirror needs to be used, we have to use one network for
> both the MONs and the OSDs. No private addressing  :-)
>
> Thanks a lot for your help. I wouldn't have gotten this information without
> your help.

Sorry about that, I thought it was documented but it turns out that
was only in the Red Hat block mirroring documentation. I've opened a
PR to explicitly state that the "rbd-mirror" daemon requires full
cluster connectivity [1].
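
For reference, a minimal ceph.conf sketch of the usual split (the subnets
here are placeholders taken from this thread, not a recommendation): the
OSDs can keep a private cluster network for replication traffic, but their
public addresses have to be routable from any site running "rbd-mirror":

  [global]
  # routable; used by MONs, OSDs, and clients (incl. a remote rbd-mirror)
  public_network  = 165.x.x.0/24
  # private 10G; OSD-to-OSD replication and heartbeat traffic only
  cluster_network = 192.168.4.0/24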

> -Vikas
>
>
> On Thu, Oct 4, 2018 at 10:37 AM Jason Dillaman <jdill...@redhat.com> wrote:
>>
>> On Thu, Oct 4, 2018 at 10:27 AM Vikas Rana <vikasra...@gmail.com> wrote:
>> >
>> > On the primary site, we have OSDs running on 192.168.4.x addresses.
>> >
>> > Similarly, on the secondary site, we have OSDs running on 192.168.4.x
>> > addresses. 192.168.3.x is the old MON network on both sites, which was
>> > non-routable. So we renamed the MONs on the primary site to 165.x.x and
>> > the MONs on the secondary site to 165.x.y. Now the primary and secondary
>> > can see each other.
>> >
>> >
>> > Do the OSD daemons on the primary and secondary have to talk to each
>> > other? We have the same non-routed networks for the OSDs.
>>
>> The secondary site needs to be able to communicate with all MON and
>> OSD daemons in the primary site.
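>>
>> A quick reachability sketch you can run from a secondary-site host
>> (addresses are placeholders; the real OSD addresses are in the output
>> of "ceph osd dump"):
>>
>>   nc -zv 165.x.x.201 6789     # a primary MON (default v1 port)
>>   nc -zv 192.168.4.11 6800    # a primary OSD public address -- this
>>                               # fails if the OSD network isn't routable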
>>
>> > Thanks,
>> > -Vikas
>> >
>> > On Thu, Oct 4, 2018 at 10:13 AM Jason Dillaman <jdill...@redhat.com> wrote:
>> >>
>> >> On Thu, Oct 4, 2018 at 10:10 AM Vikas Rana <vikasra...@gmail.com> wrote:
>> >> >
>> >> > Thanks, Jason, for the great suggestions.
>> >> >
>> >> > But somehow "rbd mirror pool status" is not working from secondary to
>> >> > primary. Here's the status from both sides. The cluster name is "ceph"
>> >> > on the primary side and "cephdr" on the remote site; "mirrordr" is the
>> >> > user on the DR side and "mirrorprod" on the primary/prod side.
>> >> >
>> >> > # rbd mirror pool info nfs
>> >> > Mode: image
>> >> > Peers:
>> >> >   UUID                                 NAME   CLIENT
>> >> >   3ccd7a67-2343-44bf-960b-02d9b1258371 cephdr client.mirrordr.
>> >> >
>> >> > # rbd --cluster cephdr mirror pool info nfs
>> >> > Mode: image
>> >> > Peers:
>> >> >   UUID                                 NAME CLIENT
>> >> >   e6b9ba05-48de-462c-ad5f-0b51d0ee733f ceph client.mirrorprod
>> >> >
>> >> >
>> >> > From the primary site, when I query the remote site, it looks good.
>> >> > # rbd --cluster cephdr --id mirrordr mirror pool status nfs
>> >> > health: OK
>> >> > images: 0 total
>> >> >
>> >> > But when I query from the secondary site to the primary side, I'm
>> >> > getting this error:
>> >> > # rbd  --cluster ceph --id mirrorprod mirror pool status nfs
>> >> > 2018-10-03 10:21:06.645903 7f27a44ed700  0 -- 165.x.x.202:0/1310074448 
>> >> > >> 192.168.3.21:6804/3835 pipe(0x55ed47daf480 sd=4 :0 s=1 pgs=0 cs=0 
>> >> > l=1 c=0x55ed47db0740).fault
>> >> >
>> >> >
>> >> > We were using 192.168.3.x for the MON network before we renamed it to
>> >> > use the 165 addresses, since they're routable. Why is it trying to
>> >> > connect to a 192.x address instead of the 165.x.y address?
>> >>
>> >> Are your OSDs on that 192.168.3.x subnet? What daemons are running on
>> >> 192.168.3.21?
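>> >>
>> >> A sketch of how to check (assuming you can run these against the
>> >> primary cluster, and on the host itself for the second command):
>> >>
>> >>   ceph osd dump | grep 192.168.3.21   # which OSDs list this address
>> >>   ss -tlnp | grep 6804                # run on 192.168.3.21 itself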
>> >>
>> >> > I can run "ceph -s" from both sides and they can see each other. Only
>> >> > the rbd command is having this issue.
>> >> >
>> >> > Thanks,
>> >> > -Vikas
>> >> >
>> >> >
>> >> >
>> >> >
>> >> > On Tue, Oct 2, 2018 at 5:14 PM Jason Dillaman <jdill...@redhat.com> 
>> >> > wrote:
>> >> >>
>> >> >> On Tue, Oct 2, 2018 at 4:47 PM Vikas Rana <vikasra...@gmail.com> wrote:
>> >> >> >
>> >> >> > Hi,
>> >> >> >
>> >> >> > We have a 3-node Ceph cluster at the primary site. We created an RBD
>> >> >> > image, and the image has about 100TB of data.
>> >> >> >
>> >> >> > Now we have installed another 3-node cluster at a secondary site. We
>> >> >> > want to replicate the image at the primary site to this new cluster
>> >> >> > at the secondary site.
>> >> >> >
>> >> >> > As per the documentation, we enabled journaling on the primary site.
>> >> >> > We followed the whole procedure and peering looks good, but the
>> >> >> > image is not copying. The status always shows "down".
>> >> >>
>> >> >> Do you have an "rbd-mirror" daemon running on the secondary site? Are
>> >> >> you running "rbd mirror pool status" against the primary site or the
>> >> >> secondary site? The mirroring status is only available on the sites
>> >> >> running the "rbd-mirror" daemon ("down" means that the cluster you are
>> >> >> connected to doesn't have the daemon running).
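>> >> >>
>> >> >> A sketch of what to check on the secondary site (the systemd
>> >> >> instance name "admin" is just a placeholder for however the
>> >> >> daemon was deployed):
>> >> >>
>> >> >>   systemctl status ceph-rbd-mirror@admin
>> >> >>   rbd --cluster cephdr mirror pool status nfs --verbose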
>> >> >>
>> >> >> > So my question is: is it possible to replicate an image which
>> >> >> > already has some data, before journaling was enabled?
>> >> >>
>> >> >> Indeed -- it will perform a full image sync to the secondary site.
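>> >> >>
>> >> >> For an image that already has data, a minimal sketch ("nfs/dirvol"
>> >> >> is a placeholder image spec):
>> >> >>
>> >> >>   rbd feature enable nfs/dirvol exclusive-lock   # if not already on
>> >> >>   rbd feature enable nfs/dirvol journaling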
>> >> >>
>> >> >> > We are using image mirroring instead of pool mirroring. Do we need
>> >> >> > to create the RBD image on the secondary site? As per the
>> >> >> > documentation, it's not required.
>> >> >>
>> >> >> The only difference between the two modes is whether or not you need
>> >> >> to run "rbd mirror image enable".
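>> >> >>
>> >> >> That is, with the pool in image mode, something like this sketch
>> >> >> (the image spec is a placeholder):
>> >> >>
>> >> >>   rbd mirror image enable nfs/dirvol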
>> >> >> > Is there any other option to copy the image to the remote site?
>> >> >>
>> >> >> No other procedure should be required.
>> >> >>
>> >> >> > Thanks,
>> >> >> > -Vikas
>> >> >>
>> >> >>
>> >> >>
>> >> >> --
>> >> >> Jason
>> >>
>> >>
>> >>
>> >> --
>> >> Jason
>>
>>
>>
>> --
>> Jason


[1] https://github.com/ceph/ceph/pull/24433

-- 
Jason
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
