[ceph-users] Re: report librbd bug export-diff

2020-01-03 Thread Jason Dillaman
Awesome, thanks for verifying! On Fri, Jan 3, 2020 at 4:54 AM zheng...@cmss.chinamobile.com wrote: > Thanks Jason, you are right. My code wasn't updated in time; the bug was already fixed in https://tracker.ceph.com/issues/42248. > zheng...@cmss.chinamobile.com
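For anyone who wants to reproduce the check in their own environment: rbd export-diff is built on the same diff mechanism that librbd exposes through its Python bindings, so the changed extents can be walked directly. A minimal sketch, assuming a pool named rbd, an image named test-image, and a snapshot named snap1 (all placeholders):

    import rados, rbd

    extents = []

    def record(offset, length, exists):
        # exists=True -> data present; exists=False -> zeroed/discarded range
        extents.append((offset, length, exists))

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')           # pool name is a placeholder
        try:
            image = rbd.Image(ioctx, 'test-image')  # image name is a placeholder
            try:
                # walk the extents changed since snapshot 'snap1'; this is the
                # same diff that export-diff serializes into its stream
                image.diff_iterate(0, image.size(), 'snap1', record)
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

    for off, length, exists in extents:
        print(off, length, exists)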

[ceph-users] acting_primary is an osd with primary-affinity of 0, which seems wrong

2020-01-03 Thread Wesley Dillingham
While exploring how to speed up the long tail of backfills that results from marking a failing OSD out, I began looking at my PGs to see if I could tune some settings and noticed the following. Scenario: on a 12.2.12 cluster, I am alerted of an inconsistent PG and of SMART failures o
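To confirm whether an OSD with a primary-affinity of 0 really is serving as acting_primary for a PG, the osdmap and the PG mapping can be compared directly. A minimal sketch using the rados Python bindings; the PG id is a placeholder, and the JSON field names are assumed from recent releases' ceph osd dump / ceph pg map output:

    import json
    import rados

    PG_ID = '1.2a'  # placeholder: the PG whose acting_primary looked wrong

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # map the PG to its up/acting sets
        ret, out, err = cluster.mon_command(
            json.dumps({'prefix': 'pg map', 'pgid': PG_ID, 'format': 'json'}), b'')
        pg = json.loads(out)

        # fetch per-OSD primary-affinity from the osdmap
        ret, out, err = cluster.mon_command(
            json.dumps({'prefix': 'osd dump', 'format': 'json'}), b'')
        affinity = {o['osd']: o['primary_affinity'] for o in json.loads(out)['osds']}

        ap = pg['acting_primary']
        print('pg %s acting_primary=osd.%d primary_affinity=%.2f'
              % (PG_ID, ap, affinity[ap]))
    finally:
        cluster.shutdown()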

[ceph-users] rgw multisite rebuild

2020-01-03 Thread Frank R
Hi all, It looks like I have an RGW multisite setup that I need to rebuild to get metadata syncing again (I did some stupid things to break it). Is it possible to remove the slave zone from the zonegroup and then re-add it without destroying the rgw data pool (bucket data)? thx Frank
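A sketch of the remove/re-add sequence being asked about, driven through the radosgw-admin CLI from Python for illustration; the zonegroup and zone names are placeholders, and the assumption here is that zonegroup remove/add only edits period metadata and leaves the zone's RADOS pools (and thus the bucket data) untouched:

    import subprocess

    ZONEGROUP = 'us'       # placeholder zonegroup name
    ZONE = 'us-secondary'  # placeholder (secondary) zone name

    def rgw_admin(*args):
        # thin wrapper around the radosgw-admin CLI
        subprocess.run(['radosgw-admin', *args], check=True)

    # drop the secondary zone from the zonegroup and publish a new period
    rgw_admin('zonegroup', 'remove', '--rgw-zonegroup', ZONEGROUP, '--rgw-zone', ZONE)
    rgw_admin('period', 'update', '--commit')

    # ... repair whatever broke metadata sync, then put the zone back ...

    rgw_admin('zonegroup', 'add', '--rgw-zonegroup', ZONEGROUP, '--rgw-zone', ZONE)
    rgw_admin('period', 'update', '--commit')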

[ceph-users] Re: radosgw - Etags suffixed with #x0e

2020-01-03 Thread Ingo Reimann
Hi Paul, thanks for your advice. I didn't expect these consequences, as there is no hint in the release notes. On Monday we will perform the overall upgrade to Nautilus. Nevertheless, there seems to be a bug in radosgw: if an ETag contains characters below 0x20, they are encoded as e.g. &#x0e;. This
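One way to check whether a gateway is handing out such ETags is to list a bucket and scan the returned ETags for control characters. A minimal sketch with boto3; the endpoint, credentials, and bucket name are placeholders:

    import boto3

    # placeholders: point these at the radosgw endpoint and a test bucket
    s3 = boto3.client('s3',
                      endpoint_url='http://rgw.example.com:7480',
                      aws_access_key_id='ACCESS',
                      aws_secret_access_key='SECRET')

    resp = s3.list_objects_v2(Bucket='test-bucket')
    for obj in resp.get('Contents', []):
        etag = obj['ETag'].strip('"')
        bad = [c for c in etag if ord(c) < 0x20]
        if bad:
            # an ETag should be plain hex MD5 (or "md5-N" for multipart);
            # control characters here indicate the problem in this thread
            print(obj['Key'], [hex(ord(c)) for c in bad])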

[ceph-users] Re: Experience with messenger v2 in Nautilus

2020-01-03 Thread Eneko Lacunza
Hi Stefan, On 2/1/20 at 10:47, Stefan Kooman wrote: I'm wondering how many of you are using messenger v2 in Nautilus after upgrading from a previous release (Luminous / Mimic). Does it work for you? Or why did you not enable it (yet)? Our hyperconverged office cluster (Proxmox) with 5 nodes
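For anyone checking their own cluster: after upgrading, msgr2 has to be switched on explicitly with ceph mon enable-msgr2, and whether the monitors are actually listening on v2 shows up in the mon map. A minimal sketch via the rados Python bindings, assuming the Nautilus mon dump JSON layout (public_addrs/addrvec):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ret, out, err = cluster.mon_command(
            json.dumps({'prefix': 'mon dump', 'format': 'json'}), b'')
        for mon in json.loads(out)['mons']:
            addrs = mon['public_addrs']['addrvec']
            has_v2 = any(a['type'] == 'v2' for a in addrs)
            print(mon['name'], 'msgr2' if has_v2 else 'v1 only',
                  [a['addr'] for a in addrs])
    finally:
        cluster.shutdown()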

[ceph-users] Re: Mimic 13.2.8 deep scrub error: "size 333447168 > 134217728 is too large"

2020-01-03 Thread Robert Sander
Hi, On 02.01.20 22:18, Paul Emmerich wrote: > You've got a ~300 MB object in there. BlueStore's default limit is 128 MB (the option that controls it is osd_max_object_size). I think the scrub warning is new/was backported, so the object is probably older than BlueStore on this cluster; it's on
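To find such oversized objects before a deep scrub flags them, a pool can be walked and each object's size compared against the 128 MiB default of osd_max_object_size. A minimal sketch with the rados Python bindings; the pool name is a placeholder, and listing every object is slow on large pools:

    import rados

    POOL = 'data'              # placeholder pool name
    LIMIT = 128 * 1024 * 1024  # BlueStore default osd_max_object_size

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(POOL)
        try:
            for obj in ioctx.list_objects():
                size, mtime = ioctx.stat(obj.key)
                if size > LIMIT:
                    print('%s: %d bytes exceeds %d' % (obj.key, size, LIMIT))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()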