[ceph-users] Re: increasing number of (deep) scrubs

2023-12-12 Thread Szabo, Istvan (Agoda)
Hi, you are on Octopus, right? Istvan Szabo Staff Infrastructure Engineer --- Agoda Services Co., Ltd. e: istvan.sz...@agoda.com --- F

[ceph-users] Re: Is there any way to merge an rbd image's full backup and a diff?

2023-12-12 Thread Satoru Takeuchi
Hi Ilya, On Tue, Dec 12, 2023 at 21:23, Ilya Dryomov wrote: > Not at the moment. Mykola has an old work-in-progress PR which extends > "rbd import-diff" command to make this possible [1]. I didn't know about this PR. Thank you very much. I'll evaluate this PR later. > Since you as > a user expected "rbd merge-diff"
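For reference, a minimal sketch of the restore path under discussion, assuming a full export plus incremental diffs; pool, image and file names are placeholders. Stock `rbd merge-diff` only combines two diff files, which is why merging a diff into a full export is not possible without the work-in-progress PR mentioned above.

# Recreate the image from the full export
rbd import full-backup.img mypool/restored-image

# Replay an incremental diff taken earlier with 'rbd export-diff'
rbd import-diff incr-1.diff mypool/restored-image

# Two diffs can be merged offline into one; a full export and a diff cannot (today)
rbd merge-diff incr-1.diff incr-2.diff incr-merged.diff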

[ceph-users] Re: Cephfs too many repaired copies on osds

2023-12-12 Thread zxcs
Also, the OSD frequently reports these ERROR logs, which leads to slow requests on that OSD. How can we stop these logs? > “full object read crc *** != expected ox on :head” > “missing primary copy of ***: will try to read copies on **” Thanks, xz > On Dec 13, 2023 at 01:20, zxcs wrote: > > Hi, Experts, > > w

[ceph-users] Cephfs too many repaired copies on osds

2023-12-12 Thread zxcs
Hi, Experts, we are using CephFS 16.2.* with multiple active MDS daemons, and recently we have seen an OSD report “full object read crc *** != expected ox on :head” and “missing primary copy of ***: will try to read copies on **”. From `ceph -s` we could see OSD_TOO_MANY_REPAIRS: Too many repaired
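For reference, a rough sketch of how this health warning is usually investigated, assuming a Pacific-or-later release; the OSD id and device name are placeholders, and the read errors themselves (often failing media) should be diagnosed before any counter is cleared.

# Show which OSD raised OSD_TOO_MANY_REPAIRS and the repair count
ceph health detail

# On the OSD's host, check the backing device for media errors
smartctl -a /dev/sdX

# Once the cause is understood, the repaired-reads counter can be reset
ceph tell osd.12 clear_shards_repaired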

[ceph-users] Re: mds.0.journaler.pq(ro) _finish_read got error -2

2023-12-12 Thread Eugen Block
Hi Patrick, this was all on version 17.2.7. The mon store had to be rebuilt from OSDs, so the MDS map got lost. After recovering the ceph cluster itself we inspected the journal and it reported missing objects before we continued with the disaster recovery, this was the output (I had post
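For reference, a sketch of the kind of journal inspection described here, assuming the file system is named cephfs and rank 0 is affected; the purge queue is a separate journal from the MDS log and is inspected the same way.

# Check the MDS log journal for damage or missing objects
cephfs-journal-tool --rank=cephfs:0 journal inspect

# Check the purge queue journal as well
cephfs-journal-tool --rank=cephfs:0 --journal=purge_queue journal inspect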

[ceph-users] Re: mds.0.journaler.pq(ro) _finish_read got error -2

2023-12-12 Thread Patrick Donnelly
On Mon, Dec 11, 2023 at 6:38 AM Eugen Block wrote: > > Hi, > > I'm trying to help someone with a broken CephFS. We managed to recover > basic ceph functionality but the CephFS is still inaccessible > (currently read-only). We went through the disaster recovery steps but > to no avail. Here's a sni
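The disaster recovery steps referred to are the documented advanced metadata repair sequence; a heavily abbreviated sketch is below (destructive, assumes the MDS rank is stopped, the journal has been backed up, and the file system is named cephfs; newer releases may require an extra confirmation flag on the reset commands).

# Back up the journal before changing anything
cephfs-journal-tool --rank=cephfs:0 journal export backup.bin

# Recover whatever dentries the journal still holds, then reset it
cephfs-journal-tool --rank=cephfs:0 event recover_dentries summary
cephfs-journal-tool --rank=cephfs:0 journal reset

# Reset the session table and the file system map
cephfs-table-tool all reset session
ceph fs reset cephfs --yes-i-really-mean-it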

[ceph-users] Re: mgr finish mon failed to return metadata for mds

2023-12-12 Thread Eugen Block
Can you restart the primary MDS (not sure which one it currently is, should be visible from the mds daemon log) and see if this resolves at least temporarily? Because after we recovered the cluster and cephfs we did have output in 'ceph fs status' and I can't remember seeing these error mes
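For reference, a sketch of finding and restarting the active MDS in a cephadm-managed cluster; the daemon name below is the one quoted in the related mgr-metadata thread and is used purely as an example (non-cephadm deployments would restart the ceph-mds service on the host instead).

# Rank 0 in the output is the currently active MDS
ceph fs status

# Restart that daemon via the orchestrator
ceph orch daemon restart mds.storage.node01.zjltbu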

[ceph-users] Announcing go-ceph v0.25.0

2023-12-12 Thread Anoop C S
We are happy to announce another release of the go-ceph API library. This is a regular release following our every-two-months release cadence. https://github.com/ceph/go-ceph/releases/tag/v0.25.0 More details are available at the link above. The library includes bindings that aim to play a si

[ceph-users] Re: increasing number of (deep) scrubs

2023-12-12 Thread Frank Schilder
Hi all, if you follow this thread, please see the update in "How to configure something like osd_deep_scrub_min_interval?" (https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/YUHWQCDAKP5MPU6ODTXUSKT7RVPERBJF/). I found out how to tune the scrub machine and I posted a quick update i

[ceph-users] Re: How to configure something like osd_deep_scrub_min_interval?

2023-12-12 Thread Frank Schilder
Hi all, a little gem for Christmas. After going through the OSD code, scratching my head and doing a bit of maths, I seem to have found a way to tune the built-in scrub machine to work perfectly. It's only a few knobs to turn, but they are difficult to find, because the documentation is misleading t
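Frank's actual values and reasoning are in the linked thread; as a rough sketch, these are the scrub-related options usually involved, with illustrative values rather than his recommendations.

# Concurrent scrubs per OSD
ceph config set osd osd_max_scrubs 1

# Shallow scrub scheduling window (seconds)
ceph config set osd osd_scrub_min_interval 86400
ceph config set osd osd_scrub_max_interval 604800

# Deep scrub interval (seconds)
ceph config set osd osd_deep_scrub_interval 604800

# Randomization applied when scheduling scrubs
ceph config set osd osd_scrub_interval_randomize_ratio 0.5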

[ceph-users] Re: Is there any way to merge an rbd image's full backup and a diff?

2023-12-12 Thread Ilya Dryomov
On Tue, Dec 12, 2023 at 1:03 AM Satoru Takeuchi wrote: > > Hi, > > I'm developing an RBD image backup system. In my case, backup data > must be stored for at least two weeks. To meet this requirement, I'd like > to take backups as follows: > > 1. Take a full backup by rbd export first. > 2. Take a di
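For reference, a minimal sketch of the scheme in points 1 and 2, with pool, image, snapshot and file names as placeholders.

# 1. Full backup: snapshot the image and export it
rbd snap create mypool/myimage@base
rbd export mypool/myimage@base full-backup.img

# 2. Incremental backup: new snapshot plus a diff against the previous one
rbd snap create mypool/myimage@day1
rbd export-diff --from-snap base mypool/myimage@day1 incr-1.diff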

[ceph-users] Re: mds.0.journaler.pq(ro) _finish_read got error -2 [solved]

2023-12-12 Thread Malte Stroem
Hi Eugen, thanks a lot for showing the solution. On 12.12.23 08:56, Eugen Block wrote: cephfs-journal-tool --rank=cephfs:0 --journal=purge_queue journal reset Best, Malte

[ceph-users] mgr finish mon failed to return metadata for mds

2023-12-12 Thread Manolis Daramas
The Ceph version we currently use is 17.2.7. We see the following errors in the manager logs: 2 mgr.server handle_open ignoring open from mds.storage.node01.zjltbu v2:10.40.99.11:6800/1327026642; not ready for session (expect reconnect) 0 7faf43715700 1 mgr finish mon failed to return metadata f
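For reference, a sketch of checks that are commonly run for this message; the daemon name is the one quoted above, and whether a mgr failover actually helps here is only a guess.

# Does the monitor have metadata for this MDS at all?
ceph mds metadata storage.node01.zjltbu

# Current MDS ranks and states
ceph fs status

# Fail over to a standby mgr so the active mgr re-fetches daemon metadata
ceph mgr fail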

[ceph-users] Re: Disable signature url in ceph rgw

2023-12-12 Thread Marc Singer
Hi, First, all requests with presigned URLs should be restricted. This is how the request is blocked with the nginx sidecar (it's just a simple parameter in the URL that is forbidden): if ($arg_Signature) { return 403 'Signature parameter forbidden'; } Our bucket policies are created automat