On Fri, Jan 22, 2021 at 3:29 PM Adam Boyhan wrote:
>
> I will have to do some looking into how that is done on Proxmox, but most
> definitely.
Thanks, appreciate it.
I will have to do some looking into how that is done on Proxmox, but most
definitely.
From: "Jason Dillaman"
To: "adamb"
Cc: "ceph-users" , "Matt Wilder"
Sent: Friday, January 22, 2021 3:02:23 PM
Subject: Re: [ceph-users] Re: RBD-Mirror Snapshot Backup Image Uses
Any chance you can attempt to repeat the process on the latest master
or pacific branch clients (no need to upgrade the MONs/OSDs)?
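For reference, a quick way to confirm which client build is actually in play,
while the daemons stay untouched (a rough sketch; it assumes the test host has
a working admin keyring), is:

  rbd --version    # version of the rbd CLI / librbd on the client host
  ceph versions    # versions reported by the running MON/MGR/OSD daemons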
On Fri, Jan 22, 2021 at 2:32 PM Adam Boyhan wrote:
The steps are pretty straightforward (a rough shell sketch follows the list below).
- Create rbd image of 500G on the primary
- Enable rbd-mirror snapshot on the image
- Map the image on the primary
- Format the block device with ext4
- Mount it and write out 200-300G worth of data (I am using rsync with some
local real data we have)
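A rough shell sketch of those steps (pool/image names, mount point, and data
source below are placeholders, and the rbd-mirror peering between the two
clusters is assumed to already be in place):

  rbd create --size 500G rbd/mirror-test
  rbd mirror image enable rbd/mirror-test snapshot
  DEV=$(rbd map rbd/mirror-test)
  mkfs.ext4 "$DEV"
  mkdir -p /mnt/mirror-test && mount "$DEV" /mnt/mirror-test
  rsync -a /srv/sample-data/ /mnt/mirror-test/   # 200-300G of real data in the report above
  rbd mirror image snapshot rbd/mirror-test      # take a mirror snapshot (or wait for the schedule)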
Any chance you could write a small reproducer test script? I can't
repeat what you are seeing, and we do have test cases that really
hammer random IO on primary images, create snapshots, rinse and repeat --
and they haven't turned up anything yet.
Thanks!
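Not the actual QA job, but the kind of rinse-and-repeat loop being described
looks roughly like this (fio parameters and the path are made up, reusing the
mount point from the sketch above):

  for i in $(seq 1 20); do
      fio --name=stress --filename=/mnt/mirror-test/stress.dat \
          --rw=randwrite --bs=64k --size=50G --direct=1
      rbd mirror image snapshot rbd/mirror-test
  done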
On Fri, Jan 22, 2021 at 1:50 PM Adam Boyhan wrote:
I have been doing a lot of testing.
The size of the RBD image doesn't have any effect.
I run into the issue once I actually write data to the rbd. The more data I
write out, the larger the chance of reproducing the issue.
I seem to hit the issue of missing the filesystem altogether the mos
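One way to spot-check whether the filesystem made it across intact (an
assumption-laden sketch: it presumes the non-primary image can be mapped
read-only on the secondary once the latest mirror snapshot has synced):

  rbd mirror image status rbd/mirror-test       # on the secondary; confirm the last snapshot copied
  DEV=$(rbd map --read-only rbd/mirror-test)
  blkid "$DEV"                                  # should report an ext4 superblock if the data arrived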
I have been trying to create two virtual test clusters to learn about the
RGW multisite setup. So far, I have set up two small Nautilus
(v14.2.16) clusters, designated one of them as the "master zone site", and
followed every step outlined in the doc (
https://docs.ceph.com/en/nautilus/radosgw/m
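For anyone following along, the initial master-zone commands in that doc look
roughly like this (realm/zonegroup/zone names and the endpoint below are just
placeholders, not the exact values used):

  radosgw-admin realm create --rgw-realm=test-realm --default
  radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:8080 --master --default
  radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east --endpoints=http://rgw1:8080 --master --default
  radosgw-admin period update --commit
  # then restart the RGW daemon so the new period takes effect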
Hi Dan,
It is possible that the payload reduction also solved, or at least reduced, a
really bad problem that looks related (beware, that's a long one):
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/FBGIJZNFG445NMYGO73PFNQL2ZB3ZF2Z/#FBGIJZNFG445NMYGO73PFNQL2ZB3ZF2Z
. Since reduc
Hello everyone,
I'm trying to add an OSD node to my current cluster. I created an LVM volume
for this node to use for the OSD.
My current Ceph version is 14.2.6 and it runs on RHEL 7.
However, I got an error when trying to activate the node. I'm confused by the
output. I tried to see what real
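For context, the usual ceph-volume flow for a pre-created LV looks something
like this (volume group and LV names here are placeholders):

  ceph-volume lvm prepare --data ceph-vg/osd-lv
  ceph-volume lvm list                          # shows the OSD id and OSD fsid that were assigned
  ceph-volume lvm activate <osd-id> <osd-fsid>
  # or, as a single step:
  ceph-volume lvm create --data ceph-vg/osd-lv

Comparing the failing step against that sequence (and against the output of
ceph-volume lvm list) usually helps narrow down where activation goes wrong.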
Just to follow up with an anecdote -- I had asked the question because
we had to do a planned failover of one of our MDSs.
The intervention went well and we didn't need to remove the openfiles
table objects.
We stopped the active mds.0, then the standby took over -- the rejoin
step took around 5 minutes
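For the record, a planned failover along those lines usually looks something
like this (the exact commands are an assumption on my part, not necessarily
what was run here):

  ceph fs status                    # confirm a standby daemon is available
  systemctl stop ceph-mds@<host>    # stop the active mds.0 (or: ceph mds fail 0)
  ceph fs status                    # watch the standby go through replay/rejoin and become active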