to DR side?
Thanks,
-Vikas
-----Original Message-----
From: Jason Dillaman
Sent: Thursday, November 21, 2019 10:24 AM
To: Vikas Rana
Cc: dillaman ; ceph-users
Subject: Re: [ceph-users] RBD Mirror DR Testing
On Thu, Nov 21, 2019 at 10:16 AM Vikas Rana wrote:
>
> Thanks Jason.
> We ar
'm doing something wrong?
Thanks,
-Vikas
Sent: Thursday, November 21, 2019 9:58 AM
To: Vikas Rana
Cc: ceph-users
Subject: Re: [ceph-users] RBD Mirror DR Testing
On Thu, Nov 21, 2019 at 9:56 AM Jason Dillaman wrote:
>
> On Thu, Nov 21, 2019 at 8:49 AM Vikas Rana wrote:
> >
> > Thanks Jason for such a quick resp
To: Vikas Rana
Cc: ceph-users
Subject: Re: [ceph-users] RBD Mirror DR Testing
On Thu, Nov 21, 2019 at 8:29 AM Vikas Rana wrote:
Hi all,
We have a 200TB RBD image which we are replicating using RBD mirroring.
We want to test the DR copy and make sure that we have a consistent copy in
case the primary site is lost.
We did this previously and promoted the DR copy, which broke the DR copy away
from the primary, and we had to resync t
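Since promoting the non-primary image is what breaks the relationship, a planned
failover test normally runs demote, promote, test, demote, promote back, with a
resync only needed if the promoted DR copy was written to. A rough sketch,
assuming the DR cluster is "cephdr", the primary uses the default name "ceph",
and the image is the nfs/dir_research one shown later in the thread:

# on the primary, once writes have stopped:
rbd --cluster ceph mirror image demote nfs/dir_research
# on the DR side, wait for the replica to catch up, then take over:
rbd --cluster cephdr mirror image status nfs/dir_research
rbd --cluster cephdr mirror image promote nfs/dir_research
# ...run the DR test...
# hand primacy back and, if the DR copy was written to, resync it:
rbd --cluster cephdr mirror image demote nfs/dir_research
rbd --cluster ceph mirror image promote nfs/dir_research
rbd --cluster cephdr mirror image resync nfs/dir_research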
rbd-mirror --cluster=cephdr"
Thanks,
-Vikas
-----Original Message-----
From: Jason Dillaman
Sent: Monday, April 8, 2019 9:30 AM
To: Vikas Rana
Cc: ceph-users
Subject: Re: [ceph-users] Ceph Replication not working
The log appears to be missing all the librbd log messages. The process see
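If the librbd messages are missing, the debug levels were probably left at their
defaults. One way to raise them for rbd-mirror, sketched against the DR cluster's
config file (the section placement is an assumption; restart the daemon afterwards):

# /etc/ceph/cephdr.conf
[client]
    debug rbd = 20
    debug rbd mirror = 20
    debug journaler = 20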
Hi there,
We are trying to set up rbd-mirror replication, and after the setup
everything looks good, but images are not replicating.
Can someone please help?
Thanks,
-Vikas
root@remote:/var/log/ceph# rbd --cluster cephdr mirror pool info nfs
Mode: pool
Peers:
UUID
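When the pool shows a peer but nothing replicates, a few things worth checking
(a sketch; dir_research stands in for the image name and the primary is assumed
to use the default cluster name "ceph"):

rbd --cluster cephdr mirror pool status nfs --verbose   # per-image replay state on the DR side
rbd --cluster ceph info nfs/dir_research                # in pool mode, only images with the journaling feature are mirrored
ceph --cluster cephdr -s                                # an rbd-mirror daemon should show up under services on Luminous+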
Hi there,
We are replicating an RBD image from the Primary to the DR site using RBD mirroring.
On the Primary, we were using 10.2.10.
The DR site is Luminous, and we promoted the DR copy to test the failure.
Everything checked out good.
Now we are trying to restart the replication and we did the demote
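Once both sides have been primary at some point the copies have diverged, and
demoting the DR image alone is not enough; the non-primary side also has to be
told to resync (a sketch using the pool/image names from elsewhere in the thread;
note that a resync re-copies the image from the primary, which matters at this size):

rbd --cluster cephdr mirror image demote nfs/dir_research
rbd --cluster cephdr mirror image resync nfs/dir_research
rbd --cluster cephdr mirror image status nfs/dir_research   # should eventually return to up+replaying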
Hi there,
We are replicating an RBD image from the Primary to the DR site using RBD mirroring.
We were using 10.2.10.
We decided to upgrade the DR site to Luminous; the upgrade went fine and the
mirroring status was also good.
We then promoted the DR copy to test the failure. Everything checked out
good.
The
On Wed, Dec 12, 2018 at 1:08 PM Vikas Rana wrote:
> To give more output. This is XFS FS.
>
> root@vtier-node1:~# rbd-nbd --read-only map testm-pool/test01
> 2018-12-12 13:04:56.674818 7f1c56e29dc0 -1 asok(0x560b19b3bdf0)
> AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to
bd0: can't read superblock
root@vtier-node1:~# mount -o ro,norecovery /dev/nbd0 /mnt
mount: /dev/nbd0: can't read superblock
root@vtier-node1:~# fdisk -l /dev/nbd0
root@vtier-node1:~#
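The admin-socket bind message is normally only a warning; the more likely problem
is that a non-primary image that is still replaying is not a stable device to
mount. A couple of checks before attempting the mount (same pool/image as above,
run against the DR cluster):

rbd info testm-pool/test01                  # "mirroring primary: false" means this is still the replica
rbd mirror image status testm-pool/test01   # up+replaying means the contents are still changing underneath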
Thanks,
-Vikas
On Wed, Dec 12, 2018 at 10:44 AM Vikas Rana wrote:
Hi,
We are using Luminous and copying a 100TB RBD image to the DR site using RBD
Mirror.
Everything seems to work fine.
The question is, can we mount the DR copy as read-only? We can do it on
NetApp, and we are trying to figure out if we can somehow mount it RO on the DR
site; then we can do backups at
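One way to get a stable read-only view on the DR side, without promoting anything,
is to mount a replicated snapshot rather than the live non-primary image. A sketch
(the snapshot name "backup1" is made up, the pool/image names are the ones used
elsewhere in the thread, and the primary is assumed to use the default cluster name):

rbd --cluster ceph snap create nfs/dir_research@backup1    # taken on the primary; it replicates through the journal
rbd --cluster cephdr snap ls nfs/dir_research              # wait until backup1 appears on the DR side
rbd-nbd --read-only map nfs/dir_research@backup1           # map the snapshot, not the image
mount -o ro,norecovery,nouuid /dev/nbd0 /mnt               # norecovery/nouuid for an XFS filesystem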
Hi There,
We are replicating a 100TB RBD image to DR site. Replication works fine.
rbd --cluster cephdr mirror pool status nfs --verbose
health: OK
images: 1 total
    1 replaying

dir_research:
  global_id:   11e9cbb9-ce83-4e5e-a7fb-472af866ca2d
  state:       up+replaying
  description:
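For a bit more per-image detail, the image-level status prints the same state plus
a description that normally carries master_position/mirror_position and
entries_behind_master (same pool/image as above):

rbd --cluster cephdr mirror image status nfs/dir_research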
Hi there,
While upgrading from Jewel to Luminous, all packages were upgraded, but while
adding an MGR with the cluster name CEPHDR, it fails. It works with the default
cluster name CEPH.
root@vtier-P-node1:~# sudo su - ceph-deploy
ceph-deploy@vtier-P-node1:~$ ceph-deploy --ceph-conf /etc/ceph/cephdr.conf
mgr cr
wrote:
> On Thu, Oct 4, 2018 at 10:27 AM Vikas Rana wrote:
> >
> > On the Primary site, we have OSDs running on 192.168.4.x addresses.
> >
> > Similarly, on the Secondary site, we have OSDs running on 192.168.4.x
> addresses. 192.168.3.x is the old MON network on both sites.
to 165.x.y. Now primary and secondary can see each other.
Do the OSD daemons from primary and secondary have to talk to each other? We
have the same non-routed networks for the OSDs.
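The OSDs of the two clusters never talk to each other directly; it is the
rbd-mirror daemon, acting as a client of both clusters, that has to reach the
remote cluster's MONs and OSDs from wherever it runs. A quick reachability check
from the rbd-mirror host (a sketch; the remote cluster's conf and keyring must be
present locally, and "ceph" as the remote cluster name is an assumption):

ceph --cluster ceph -s       # MON reachability to the remote cluster
rbd --cluster ceph ls nfs    # listing images exercises the remote OSDs as well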
Thanks,
-Vikas
On Thu, Oct 4, 2018 at 10:13 AM Jason Dillaman wrote:
> On Thu, Oct 4, 2018 at 10:10 AM Vika
why it's trying to connect to the 192.x address
instead of the 165.x.y address?
I could do ceph -s from both sides and they can see each other. Only the rbd
command is having an issue.
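ceph -s only needs the MONs, but the rbd client also has to reach the OSDs at
whatever addresses the remote cluster advertises in its maps; if the OSDs still
register on the 192.168.x network, that is what a remote client will dial. A way
to see what is being advertised (a sketch; cephdr is illustrative, run it against
whichever peer cluster the failing rbd command points at):

ceph --cluster cephdr mon dump                    # MON addresses
ceph --cluster cephdr osd dump | grep '^osd\.'    # the per-OSD lines include the public addresses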
Thanks,
-Vikas
On Tue, Oct 2, 2018 at 5:14 PM Jason Dillaman wrote:
> On Tue, Oct 2, 2018 at 4:47 PM Vikas Rana
Hi,
We have a 3-node Ceph cluster at the primary site. We created an RBD image, and
the image has about 100TB of data.
Now we installed another 3-node cluster at the secondary site. We want to
replicate the image at the primary site to this new cluster at the secondary site.
As per the documentation, we enabled journ
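For reference, the journal-based setup from the documentation boils down to
roughly this (a sketch; the pool name "nfs" and image name are reused from
elsewhere in the thread, while the peer client name and the primary's default
cluster name "ceph" are assumptions):

rbd --cluster ceph feature enable nfs/dir_research exclusive-lock
rbd --cluster ceph feature enable nfs/dir_research journaling
rbd --cluster ceph mirror pool enable nfs pool            # or "image" to opt images in one by one
rbd --cluster cephdr mirror pool enable nfs pool
rbd --cluster cephdr mirror pool peer add nfs client.admin@ceph
rbd --cluster cephdr mirror pool info nfs                 # the peer should now be listed
# plus an rbd-mirror daemon running against the DR cluster, e.g. the
# "rbd-mirror --cluster=cephdr" quoted earlier in the thread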
ue [1].
> >
> > On Wed, Sep 19, 2018 at 2:49 PM Vikas Rana wrote:
Hi there,
With the default cluster name "ceph" I can map rbd-nbd without any issue.
But for a different cluster name, I'm not able to map an image using rbd-nbd,
and I am getting:
root@vtier-P-node1:/etc/ceph# rbd-nbd --cluster cephdr map test-pool/testvol
rbd-nbd: unknown command: --cluster
I looked at the
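rbd-nbd at this version simply does not recognize --cluster. A possible workaround,
untested here and so only an assumption, is to select the cluster through the
generic Ceph client options instead, either via the CEPH_ARGS environment variable
or by pointing at the conf file directly:

CEPH_ARGS="--cluster cephdr" rbd-nbd map test-pool/testvol
# or
rbd-nbd -c /etc/ceph/cephdr.conf map test-pool/testvol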
Hi There,
We are using an rbd-mapped image as an NFS backend (XFS) and sharing it to NFS
clients.
This setup has been working fine.
Now we need to replicate this image to a second cluster on the campus.
For replication to work, we need the exclusive-lock and journaling features to
be enabled.
If we enable ex
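Both features can be added to an existing image; journaling depends on
exclusive-lock, so that one goes first, and whether a client that already has the
image mapped (krbd in particular) tolerates the new features depends on its
version. A sketch with an illustrative image name:

rbd info nfs/dir_research | grep features
rbd feature enable nfs/dir_research exclusive-lock
rbd feature enable nfs/dir_research journaling
rbd info nfs/dir_research | grep features    # both features should now be listed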