rbd-nbd unmap $pool/$image
Doesn't it work for you?
--
Mykola Golub
Not sure I understand what problems you have with it?
The new format 'import/export' can also be used for copying all
snapshots.
And there is `rbd migration` [1] which, among other things, may be used
as a "deep move" between pools. Maybe this is what you want?
[1] https://docs.c
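For reference, a minimal sketch of both approaches (pool and image
names are placeholders):

```
# copy an image together with all its snapshots using the v2 export format
rbd export --export-format 2 src_pool/image - |
    rbd import --export-format 2 - dst_pool/image

# or live-migrate ("deep move") the image to another pool
rbd migration prepare src_pool/image dst_pool/image
rbd migration execute dst_pool/image
rbd migration commit dst_pool/image
```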
That would be helpful for fixing the bug. It can be reported right to
the tracker.
What version are you running BTW?
--
Mykola Golub
> Quoting Vikas Rana:
>
> > Hi Friends,
> >
> >
> >
> > We have a very weird issue with rbd-mirror replication. As per the comman
> =455084955], mirror_position=[object_number=396351, tag_tid=4,
> entry_tid=455084955], entries_behind_master=0
> last_update: 2021-02-19 15:36:30
And I suppose, after creating and replaying a snapshot, you still see
files missing on the secondary after mounting it?
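For example, a rough way to verify it (the image spec is a placeholder):

```
# on the primary: create a test snapshot and let it replay
rbd snap create $pool/$image@verify

# on the secondary: wait for entries_behind_master=0
rbd mirror image status $pool/$image

# then map the replayed snapshot read-only and compare the contents
rbd device map --read-only $pool/$image@verify
```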
--
Mykola Golub
to detect the
moment when the issue happens again, and report it to the tracker,
attaching the rbd-mirror logs.
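If needed, more verbose rbd-mirror logs can be enabled with something
like the following (a sketch; the daemon id "a" is a placeholder):

```
ceph config set client.rbd-mirror.a debug_rbd 15
ceph config set client.rbd-mirror.a debug_rbd_mirror 15
```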
--
Mykola Golub
> >> messages to peers in rename_prepare_witness, and waits for
> >> acknowledgements before writing EUpdate events to its journal
> >> - The peer(s) write EPeerUpdate(OP_PREPARE) events to their journals
> >> during prepare, and EPee
"MAX AVAIL" in this example is (N - 1) * 10% + 1 * 50%,
instead of (N - 1) * 90% + 1 * 50%, which is what you would expect for "free".
To make "MAX AVAIL" match "free" exactly, you have to have a perfectly
balanced cluster. Look at `ceph osd df` output to see how well data is balanced.
it helps with the slow
ops (it might make sense to restart the mons if some look like they got
stuck). You can apply the config option on the fly (without restarting
the osds, e.g. with injectargs), but when re-enabling it you will
have to restart the osds to avoid
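For reference, a sketch of applying such an option on the fly (the
option name below is just a placeholder for the one discussed):

```
# apply to all osds without restarting them
ceph tell osd.* injectargs '--some_osd_option=false'
```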
```
qemu-img create -f rbd rbd:my_pool_metadata/my_data 1T
```
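Assuming the goal is an image whose data objects live in a separate
(e.g. erasure-coded) data pool, the same could presumably be done with
the rbd CLI directly; `my_pool_data` here is a hypothetical data pool
name:

```
rbd create --size 1T --data-pool my_pool_data my_pool_metadata/my_data
```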
--
Mykola Golub
Could you please provide the full rbd-nbd log? If it is too large for
an attachment then maybe via some public URL?
--
Mykola Golub
On Tue, May 18, 2021 at 03:04:51PM +0800, Zhi Zhang wrote:
> Hi guys,
>
> We are recently testing rbd-nbd using ceph N version. After map rbd
> ima
On Wed, May 19, 2021 at 11:32:04AM +0800, Zhi Zhang wrote:
> On Wed, May 19, 2021 at 11:19 AM Zhi Zhang
> wrote:
>
> >
> > On Tue, May 18, 2021 at 10:58 PM Mykola Golub
> > wrote:
> > >
> > > Could you please provide the full rbd-nbd log? If it is too large for
> > > an attachment then maybe via some public URL?
erased, i.e. the "bluestore block
device" safety header but not the ending "\n"
(0x0a) byte, which is also part of this header. We advised restoring
the full header (23 bytes) just for safety, but
22 bytes would be enough too.
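For illustration only, a sketch of writing that header back (do not
run blindly; /dev/sdX is a placeholder for the actual OSD block
device):

```
# "bluestore block device" is 22 bytes; the trailing \n makes it 23
printf 'bluestore block device\n' | dd of=/dev/sdX bs=1 count=23 conv=notrunc
```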
--
Mykola Golub
{from_snap}_{to_snap}.log
--log-to-stderr=false
Hope it will not use too much space and you will be able to get a log
for a case when it gets stuck.
Then please provide the log for review somehow. Also, note the time
when you interrupt the hanging
export-diff.
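A sketch of how those options might be combined on an export-diff run
(paths and snapshot names are placeholders):

```
rbd export-diff --from-snap "$from_snap" "$pool/$image@$to_snap" /tmp/diff.bin \
    --debug-rbd 20 \
    --log-file "/var/log/ceph/export-diff-${from_snap}_${to_snap}.log" \
    --log-to-stderr=false
```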
--
Mykola Golub
> _purge_schedule"}]: dispatch
>
> 9/10/23 10:02:24 AM [INF] from='mgr.252911336 ' entity='mgr.ceph-25'
> cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/ceph-25/mirror_snapshot_schedule"}]:
> "not scrubbed in time" messages are piling up.
>
> Is there a way to allow (deep) scrub in this situation?
ceph config set osd osd_scrub_during_recovery true
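And remember to unset it again once recovery finishes:

```
ceph config rm osd osd_scrub_during_recovery
```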
--
Mykola Golub
Could you take a look and approve?
I don't have much experience with krbd tests but the failures indeed
look like known issues, not related to the pacific update.
rbd - LGTM.
Thanks.
--
Mykola Golub
Instead of using `rados` and `ceph osd map`, you could
ssh to every osd node and use e.g. find+stat to get the necessary data for
all files (objects) with $block_name_prefix in their name.
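A rough sketch of that approach (the image id in the pattern is an
example; note that FileStore escapes some characters in on-disk object
file names, so the pattern may need tweaking):

```
# find the image's object prefix, e.g. "rbd_data.1f2a3b4c5d"
rbd info "$pool/$image" | grep block_name_prefix

# then on each osd node (FileStore layout assumed)
find /var/lib/ceph/osd/ceph-*/current -name '*1f2a3b4c5d*' \
    -exec stat -c '%n %s %y' {} \;
```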
--
Mykola Golub
so it would be nice to check this
first.
I also see you have the rbd_support module enabled. It would be good to
have it temporarily disabled during this experiment too.
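I.e. something like:

```
ceph mgr module disable rbd_support
# ... run the experiment ...
ceph mgr module enable rbd_support
```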
--
Mykola Golub