[ceph-users] Re: [Ceph-qa] Using rbd-nbd tool in Ceph development cluster

2020-11-16 Thread Mykola Golub
nbd unmap $pool/$image Doesn't it work for you? -- Mykola Golub ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io
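For context, a minimal map/unmap round trip with rbd-nbd looks roughly like this (a sketch against a live cluster; pool and image names are placeholders):

```
# Map an RBD image to a local /dev/nbdX device (requires the nbd kernel module):
rbd-nbd map mypool/myimage        # prints the device, e.g. /dev/nbd0

# ... use the block device ...

# Unmap by image spec:
rbd-nbd unmap mypool/myimage
# or, with the newer unified CLI:
rbd device unmap mypool/myimage --device-type nbd
```

Unmapping by image spec (rather than by /dev/nbdX) is what the reply above is suggesting.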

[ceph-users] Re: rbd move between pools

2021-02-18 Thread Mykola Golub
ure I understand what problems you have with it? The new format 'import/export' can also be used for copying all snapshots. And there is `rbd migration` [1], which among other things may be used as a "deep move" between pools. Maybe this is what you want? [1] https://docs.c
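The `rbd migration` flow mentioned here is a three-step prepare/execute/commit sequence; a hedged sketch (pool and image names are placeholders, and this assumes clients can be briefly stopped for the prepare step):

```
# Set up the migration; the image becomes usable at the destination immediately:
rbd migration prepare srcpool/myimage dstpool/myimage

# Copy the data in the background (clients may keep using the image):
rbd migration execute dstpool/myimage

# Finalize and remove the source once the copy is done:
rbd migration commit dstpool/myimage
```

`rbd migration abort` can roll the whole thing back before the commit step.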

[ceph-users] Re: Data Missing with RBD-Mirror

2021-02-18 Thread Mykola Golub
e helpful for fixing the bug. It can be reported right to the tracker. What version are you running, BTW? -- Mykola Golub > Zitat von Vikas Rana : > > > Hi Friends, > > > > > > > > We have a very weird issue with rbd-mirror replication. As per the comman

[ceph-users] Re: Data Missing with RBD-Mirror

2021-02-22 Thread Mykola Golub
=455084955], mirror_position=[object_number=396351, tag_tid=4, > entry_tid=455084955], entries_behind_master=0 > last_update: 2021-02-19 15:36:30 And I suppose, after creating and replaying a snapshot, you still see files missing on the secondary after mounting it? -- Mykola Golub

[ceph-users] Re: Data Missing with RBD-Mirror

2021-02-22 Thread Mykola Golub
to detect the moment when the issue happens again, and report it to the tracker, attaching the rbd-mirror logs. -- Mykola Golub

[ceph-users] Re: Question about per MDS journals

2021-02-25 Thread Mykola Golub
>> messages to peers in rename_prepare_witness, and waits for > >> acknowledgements before writing EUpdate events to its journal > >> - The peer(s) write EPeerUpdate(OP_PREPARE) events to their journals > >> during prepare, and EPee

[ceph-users] Re: Erasure coded calculation

2021-02-25 Thread Mykola Golub
"MAX AVAIL" in this example is (N - 1) * 10% + 1 * 50%, instead of (N - 1) * 90% + 1 * 50%, which you would expect for "free". To make "MAX AVAIL" match "free" exactly, you have to have a perfectly balanced cluster. Look at `ceph osd df` output to see how well d
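To illustrate why imbalance shrinks "MAX AVAIL", here is a toy calculation with made-up numbers (a sketch of the general idea, not Ceph's exact internal formula): Ceph extrapolates usable space from the fullest OSD, so one 50%-full OSD caps every OSD at 50% usable, even if the others are nearly empty.

```shell
# 10 hypothetical OSDs of 100 GB each: nine are 10% full, one is 50% full.
N=10; SIZE=100
# Raw free space, summed over all OSDs:
ACTUAL_FREE=$(( 9 * SIZE * 90 / 100 + SIZE * 50 / 100 ))
# MAX AVAIL is driven by the fullest OSD (50% full => 50% counted everywhere):
MAX_AVAIL=$(( N * SIZE * 50 / 100 ))
echo "actual_free=${ACTUAL_FREE}GB max_avail=${MAX_AVAIL}GB"
```

With a perfectly balanced cluster the two numbers converge, which is the point made above.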

[ceph-users] Re: MON slow ops and growing MON store

2021-02-26 Thread Mykola Golub
it helps with the slow ops (it might make sense to restart mons if some look like they are stuck). You can apply the config option on the fly (without restarting the osds, e.g. with injectargs), but when re-enabling it you will have to restart the osds to avoid
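Applying an OSD option on the fly with injectargs looks roughly like this (a sketch; `<option>` stands for whichever config option is being toggled here, since the snippet does not name it):

```
# Push the change to all running OSDs without restarting them:
ceph tell osd.* injectargs '--<option>=false'

# The modern equivalent, persisted in the mon config store:
ceph config set osd <option> false
```

As noted above, re-enabling the option later still requires restarting the OSDs.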

[ceph-users] Re: Erasure-coded Block Device Image Creation With qemu-img - Help

2021-03-17 Thread Mykola Golub
qemu-img create -f rbd rbd:my_pool_metadata/my_data 1T ``` -- Mykola Golub
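The usual pattern for erasure-coded RBD is a replicated metadata pool plus an EC data pool; one common approach is to create the image with `rbd` first and only then hand it to qemu (a sketch; the EC pool name `my_data_ec` is a placeholder, the metadata pool name follows the example above):

```
# Place the image's data objects in the EC pool, metadata in the replicated pool:
rbd create --size 1T --data-pool my_data_ec my_pool_metadata/my_data

# qemu can then use the existing image via the rbd: protocol:
qemu-img info rbd:my_pool_metadata/my_data
```

Alternatively, setting `rbd default data pool` in ceph.conf lets `qemu-img create` itself do the EC placement.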

[ceph-users] Re: rbd-nbd crashes Error: failed to read nbd request header: (33) Numerical argument out of domain

2021-05-18 Thread Mykola Golub
Could you please provide the full rbd-nbd log? If it is too large for an attachment, then maybe via some public URL? -- Mykola Golub On Tue, May 18, 2021 at 03:04:51PM +0800, Zhi Zhang wrote: > Hi guys, > > We are recently testing rbd-nbd using ceph N version. After map rbd > ima

[ceph-users] Re: rbd-nbd crashes Error: failed to read nbd request header: (33) Numerical argument out of domain

2021-05-19 Thread Mykola Golub
On Wed, May 19, 2021 at 11:32:04AM +0800, Zhi Zhang wrote: > On Wed, May 19, 2021 at 11:19 AM Zhi Zhang > wrote: > > > > > On Tue, May 18, 2021 at 10:58 PM Mykola Golub > > wrote: > > > > > > Could you please provide the full rbd-nbd log? If it is t

[ceph-users] Re: Can not activate some OSDs after upgrade (bad crc on label)

2023-12-20 Thread Mykola Golub
ased, i.e. the "bluestore block device" safety header but not the ending "\n" (0x0a) byte, which is also a part of this header. We advised restoring the full header (23 bytes) just for safety, but 22 bytes would be enough too. -- Mykola Golub

[ceph-users] Re: rbd export-diff/import-diff hangs

2023-08-27 Thread Mykola Golub
rom_snap}_{to_snap}.log --log-to-stderr=false Hope it will not use too much space and you will be able to get a log for a case where it gets stuck. Then please provide the log for review somehow. Also, note the time when you interrupt the hanging export-diff. -- Mykola Golub
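A sketch of the kind of invocation being suggested, with debug logging sent to a per-run file (pool, image, snapshot names, and paths are placeholders):

```
rbd export-diff --from-snap snap1 mypool/myimage@snap2 /tmp/diff.bin \
    --debug-rbd 20 \
    --log-file /tmp/rbd-export-diff_snap1_snap2.log \
    --log-to-stderr=false
```

Embedding the snapshot names in the log file name, as above, keeps one log per run so the stuck case is easy to pick out.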

[ceph-users] Re: MGR executes config rm all the time

2023-09-10 Thread Mykola Golub
_purge_schedule"}]: > dispatch > > 9/10/23 10:02:24 AM[INF]from='mgr.252911336 ' entity='mgr.ceph-25' > cmd=[{"prefix":"config > rm","who":"mgr","name":"mgr/rbd_support/ceph-25/mirror_snapshot_schedule"}]:

[ceph-users] Re: backfill_wait preventing deep scrubs

2023-09-21 Thread Mykola Golub
n time" messages are piling up. > > Is there a way to allow (deep) scrub in this situation? ceph config set osd osd_scrub_during_recovery true -- Mykola Golub
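For completeness, the setting can be verified and later reverted once backfill finishes (a sketch using the mon config store):

```
# Allow scrubbing while recovery/backfill is in progress:
ceph config set osd osd_scrub_during_recovery true

# Confirm it took effect:
ceph config get osd osd_scrub_during_recovery

# Return to the default (scrubs deferred during recovery) when done:
ceph config rm osd osd_scrub_during_recovery
```

Leaving it enabled permanently trades some recovery bandwidth for scrub progress, so reverting afterwards is the safer default.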

[ceph-users] Re: 16.2.7 pacific QE validation status, RC1 available for testing

2021-11-30 Thread Mykola Golub
; look and approve? I don't have much experience with krbd tests, but the failures indeed look like known issues, not related to the pacific update. rbd - LGTM. Thanks. -- Mykola Golub

[ceph-users] Re: rbd image usage per osd

2019-08-14 Thread Mykola Golub
ing `rados` and `ceph osd map`, you could ssh to every osd node and use e.g. find + stat to get the necessary data for all files (objects) with $block_name_prefix in their name. -- Mykola Golub
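A sketch of what that per-node search could look like on a FileStore OSD, assuming the block_name_prefix has been read from `rbd info` (the prefix value and paths here are illustrative):

```
# From `rbd info myimage`, e.g.: block_name_prefix: rbd_data.1234567890ab
PREFIX=rbd_data.1234567890ab

# On each OSD node, list all of the image's objects with their sizes:
find /var/lib/ceph/osd/ceph-*/current -name "${PREFIX}*" \
    -exec stat --format '%n %s' {} +
```

This only works for FileStore, where objects are plain files; on BlueStore there is no filesystem to search, and something like `ceph-objectstore-tool` on a stopped OSD would be needed instead.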

[ceph-users] Re: Mgr stability

2019-08-15 Thread Mykola Golub
so it would be nice to check this first. I also see you have the rbd_support module enabled. It would be good to have it temporarily disabled during this experiment too. -- Mykola Golub