Dear Jason
  Let me explain my setup first.
  The DR centre is 300 km away from the primary site.
  Site-A - OSD 0 - 1 TB; Mon - 10.236.248.XX/24
  Site-B - OSD 0 - 1 TB; Mon - 10.236.228.XX/27 - rbd-mirror daemon running
  All ports are open and there is no firewall; connectivity between the two sites is fine.

  In my initial setup I used common L2 connectivity between the two sites and hit the same error as now.
  I have since changed the configuration to L3, but I still get the same error.
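  As a sanity check, reachability from the Site-B rbd-mirror node to the remote
  cluster can be probed roughly as below (a minimal sketch: 6789 is the default
  Ceph monitor port, 6800-7300 the default OSD port range, and <site-a-osd-host>
  is a placeholder for our actual Site-A OSD node):

    # From the Site-B node running rbd-mirror, probe the Site-A mon port
    nc -zv 10.236.247.XX 6789
    # OSDs listen in the 6800-7300 range by default; probe the Site-A OSD host too
    nc -zv <site-a-osd-host> 6800-7300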

root@meghdootctr:~# rbd mirror image status volumes/meghdoot
meghdoot:
  global_id:   52d9e812-75fe-4a54-8e19-0897d9204af9
  state:       up+syncing
  description: bootstrapping, IMAGE_COPY/COPY_OBJECT 0%
  last_update: 2019-08-26 17:00:21
Please point out where I have made a mistake or what is wrong with my configuration.
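
For reference, these are the standard commands used here to inspect the
mirroring state (a sketch; 'volumes' is our pool name, and on the cluster where
the rbd-mirror daemon runs, 'pool info' should list the remote cluster as a peer):

  # On Site-B, where the rbd-mirror daemon runs
  rbd mirror pool info volumes               # Peers should list the remote cluster
  rbd mirror pool status volumes --verbose   # per-image state, as shown above
  # Overall cluster health on either side
  ceph -s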

Site-A ceph.conf:

[global]
fsid = 494971c1-75e7-4866-b9fb-e98cb8171473
mon_initial_members = clouddr
mon_host = 10.236.247.XX
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 10.236.247.0/24
osd pool default size = 1
mon_allow_pool_delete = true
rbd default features = 125

Site-B ceph.conf:

[global]
fsid = 494971c1-75e7-4866-b9fb-e98cb8171473
mon_initial_members = meghdootctr
mon_host = 10.236.228.XX
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 10.236.228.64/27
osd pool default size = 1
mon_allow_pool_delete = true
rbd default features = 125
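
(For context on the last line: 125 is the feature bitmask layering(1) +
exclusive-lock(4) + object-map(8) + fast-diff(16) + deep-flatten(32) +
journaling(64) = 125; journaling is the feature that journal-based RBD
mirroring requires.)

For mirroring to work, the rbd-mirror node at Site-B needs both clusters'
configs and keyrings under /etc/ceph, with the primary registered as a pool
peer. A sketch of the standard procedure follows; the file names
site-a.conf/site-b.conf, the client.site-a user, and the pool name 'volumes'
are placeholders for our actual names:

  # On the Site-B node running rbd-mirror:
  ls /etc/ceph
  #   site-a.conf  site-a.client.site-a.keyring
  #   site-b.conf  site-b.client.site-b.keyring
  # Register Site-A as a mirroring peer of the pool, from Site-B
  rbd --cluster site-b mirror pool peer add volumes client.site-a@site-a
  # Verify: Peers should now list site-a
  rbd --cluster site-b mirror pool info volumes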

Regards
V.A.Prabha

On August 20, 2019 at 7:00 PM Jason Dillaman <jdill...@redhat.com> wrote:

>  On Tue, Aug 20, 2019 at 9:23 AM V A Prabha <prab...@cdac.in> wrote:
> > I too face the same problem as mentioned by Sat.
> > All the images created at the primary site are in the state: down+unknown.
> > Hence at the secondary site the images are at 0% up+syncing all the time ... no progress.
> > The only error that is continuously logged is:
> > 2019-08-20 18:04:38.556908 7f7d4cba3700 -1 rbd::mirror::InstanceWatcher: C_NotifyInstanceRequest: 0x7f7d4000f650 finish: resending after timeout
>  This sounds like your rbd-mirror daemon cannot contact all OSDs. Double-check
> your network connectivity and firewall to ensure that the rbd-mirror daemon can
> connect to *both* Ceph clusters (local and remote).
> 
> > The setup is as follows:
> > One OSD created at the primary site with cluster name [site-a] and one OSD
> > created at the secondary site with cluster name [site-b]; both have the same
> > ceph.conf file.
> > RBD mirror is installed at the secondary site [which is 300 km away from the
> > primary site].
> > We are trying to integrate this with our cloud, but the Cinder volume fails
> > to sync every time.
> > Primary site output:
> > root@clouddr:/etc/ceph# rbd mirror pool status volumesnew --verbose
> > health: WARNING
> > images: 4 total
> >     4 unknown
> > boss123:
> >   global_id:   7285ed6d-46f4-4345-b597-d24911a110f8
> >   state:       down+unknown
> >   description: status not found
> >   last_update:
> > new123:
> >   global_id:   e9f2dd7e-b0ac-4138-bce5-318b40e9119e
> >   state:       down+unknown
> >   description: status not found
> >   last_update:
> > 
> >    root@clouddr:/etc/ceph# rbd mirror pool info volumesnew
> >    Mode: pool
> >    Peers: none
> >    root@clouddr:/etc/ceph# rbd mirror pool status volumesnew
> >    health: WARNING
> >    images: 4 total
> >        4 unknown
> > 
> >    Secondary Site
> >    root@meghdootctr:~# rbd mirror image status volumesnew/boss123
> >    boss123:
> >      global_id:   7285ed6d-46f4-4345-b597-d24911a110f8
> >      state:       up+syncing
> >      description: bootstrapping, IMAGE_COPY/COPY_OBJECT 0%
> >      last_update: 2019-08-20 17:24:18
> >    Please help me identify what I am missing.
> > 
> >    Regards
> >    V.A.Prabha
> 
>  --
>  Jason
> 

