[ceph-users] Re: 1 Large omap object found

2023-08-01 Thread Mark Johnson
Thanks Eugen, massive help. Working on identifying and cleaning up old/empty buckets now. On Wed, 2023-08-02 at 06:10 +, Eugen Block wrote: Correct, only a deep-scrub will check that threshold. 'ceph config set' is persistent, a daemon restart will preserve the new config value. Quoting
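
A minimal sketch of how old or empty buckets might be identified and removed with radosgw-admin; the bucket name is a placeholder, and a bucket should be confirmed unused before removal:

    # List all buckets known to RGW
    radosgw-admin bucket list

    # Inspect usage; a zero object count in the usage section suggests the bucket is empty
    radosgw-admin bucket stats --bucket=example-old-bucket

    # Remove it once confirmed unused (fails if objects remain unless --purge-objects is added)
    radosgw-admin bucket rm --bucket=example-old-bucket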

[ceph-users] Re: Disk device path changed - cephadm failed to apply osd service

2023-08-01 Thread Eugen Block
Do you really need device paths in your configuration? You could use other criteria like disk sizes, vendors, rotational flag etc. If you really want device paths you'll probably need to ensure they're persistent across reboots via udev rules. Quoting Kilian Ries: Hi, it seems that a
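
As a sketch of what Eugen describes, an OSD service spec can select disks by rotational flag and size instead of device paths; the service_id, host pattern and size bound below are illustrative assumptions, not taken from the thread:

    # osd_spec.yml -- select data devices by attributes instead of /dev paths
    service_type: osd
    service_id: hdd_by_size          # hypothetical name
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        rotational: 1                # spinning disks only
        size: '10TB:'                # 10 TB and larger (example bound)

Applied with 'ceph orch apply -i osd_spec.yml', cephadm matches devices by attribute on each host, so a changed /dev path after a reboot no longer matters.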

[ceph-users] Re: 1 Large omap object found

2023-08-01 Thread Eugen Block
Correct, only a deep-scrub will check that threshold. 'ceph config set' is persistent, a daemon restart will preserve the new config value. Quoting Mark Johnson: Never mind, I think I worked it out. I consulted the Quincy documentation which just said to do this: ceph config set osd osd_

[ceph-users] Re: Upgrading nautilus / centos7 to octopus / ubuntu 20.04. - Suggestions and hints?

2023-08-01 Thread Eugen Block
Hi, from a Ceph perspective it's supported to upgrade from N to P; you can safely skip O. We have done that on several clusters without any issues. You just need to make sure that your upgrade to N was complete. Just a few days ago someone tried to upgrade from O to Q with "require-osd-rele
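
A quick way to confirm that the previous upgrade really completed, which is what the truncated "require-osd-rele..." remark refers to; the release names assume a Nautilus cluster heading to Pacific:

    # All daemons should report the release you think you are on
    ceph versions

    # The flag should match the current release before starting the next upgrade
    ceph osd dump | grep require_osd_release

    # If it still shows an older release, set it explicitly
    ceph osd require-osd-release nautilus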

[ceph-users] Re: 1 Large omap object found

2023-08-01 Thread Mark Johnson
Never mind, I think I worked it out. I consulted the Quincy documentation which just said to do this: ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 200 But when I did that, the health warning didn't clear. I took a guess that maybe I needed to trigger a deep scrub on t
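
Putting the pieces of this thread together, a sketch of the sequence being described. The threshold is an example value (the old default of 2,000,000 mentioned elsewhere in the thread, since the value Mark actually set is truncated in the archive), and the PG id is the one reported later in the thread:

    # Persistently raise the large-omap key threshold for all OSDs
    ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 2000000

    # The warning is only re-evaluated when the affected PG is deep-scrubbed
    ceph pg deep-scrub 5.16

    # Confirm the warning has cleared
    ceph health detail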

[ceph-users] Re: ref v18.2.0 QE Validation status

2023-08-01 Thread Venky Shankar
Hi Yuri, On Tue, Aug 1, 2023 at 10:34 PM Venky Shankar wrote: > > On Tue, Aug 1, 2023 at 5:55 PM Venky Shankar wrote: > > > > On Tue, Aug 1, 2023 at 1:21 AM Yuri Weinstein wrote: > > > > > > Pls see the updated test results and Release Notes PR > > > https://github.com/ceph/ceph/pull/52490 > >

[ceph-users] Re: 1 Large omap object found

2023-08-01 Thread Mark Johnson
Regarding changing this value back to the previous default of 2,000,000, how would I go about doing that? I tried following that SUSE KB article which says to do this: ceph tell 'osd.*' injectargs --osd_deep_scrub_large_omap_object_key_threshold=200 But while that didn't fail as such, it
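
To check whether an injected value versus the persistent one is actually in effect, something along these lines can be used (osd.0 is a placeholder):

    # Persistent value in the cluster configuration database
    ceph config get osd osd_deep_scrub_large_omap_object_key_threshold

    # Value currently active on a running OSD
    ceph tell osd.0 config get osd_deep_scrub_large_omap_object_key_threshold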

[ceph-users] Re: ref v18.2.0 QE Validation status

2023-08-01 Thread Brad Hubbard
On Mon, Jul 31, 2023 at 1:46 AM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/62231#note-1 > > Seeking approvals/reviews for: > > smoke - Laura, Radek > rados - Neha, Radek, Travis, Ernesto, Adam King > rgw - Casey > fs - Venky > orch -

[ceph-users] Re: 1 Large omap object found

2023-08-01 Thread Mark Johnson
Thanks for that. That's pretty much how I was reading it, but the text you provided is a lot more explanatory than what I'd managed to find and makes it a bit clearer. Without going into too much detail, yes, we do have a single user that is used to create multiple buckets, a bucket for each of a multip
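
A quick way to see how many buckets a single RGW user owns, which is what drives the size of that user's unsharded bucket-index metadata object; the uid is a placeholder and jq is assumed to be available:

    # Count the buckets owned by one user
    radosgw-admin bucket list --uid=example-user | jq 'length'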

[ceph-users] Re: Upgrading nautilus / centos7 to octopus / ubuntu 20.04. - Suggestions and hints?

2023-08-01 Thread Bailey Allison
Hi Götz, We’ve done a similar process, going from CentOS 7 Nautilus to Rocky 8/Ubuntu 20.04 on Octopus+. What we do is start on CentOS 7 Nautilus and upgrade to Octopus on CentOS 7 (we’ve built python packages and have them on our repo to satisfy some c

[ceph-users] Re: Upgrading nautilus / centos7 to octopus / ubuntu 20.04. - Suggestions and hints?

2023-08-01 Thread Marc
> > As I’ve read and thought a lot about the migration, and as this is a bigger > project, I was wondering if anyone has done that already and might share > some notes or playbooks, because in all my reading there were some parts > missing or unclear to me. > > I do have some different appro

[ceph-users] Re: Upgrading nautilus / centos7 to octopus / ubuntu 20.04. - Suggestions and hints?

2023-08-01 Thread Boris Behrens
Hi Goetz, I've done the same, and went to Octopus and to Ubuntu. It worked like a charm and with pip, you can get the pecan library working. I think I did it with 'yum -y install python36-six.noarch python36-PyYAML.x86_64' followed by 'pip3 install pecan werkzeug cherrypy'. Worked very well, until we got hit
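
For readability, the commands from the message above, as given (package availability in the CentOS 7 repositories may vary):

    yum -y install python36-six.noarch python36-PyYAML.x86_64
    pip3 install pecan werkzeug cherrypy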

[ceph-users] Upgrading nautilus / centos7 to octopus / ubuntu 20.04. - Suggestions and hints?

2023-08-01 Thread Götz Reinicke
Hi, As I’ve read and thought a lot about the migration, and as this is a bigger project, I was wondering if anyone has done that already and might share some notes or playbooks, because in all my reading there were some parts missing or unclear to me. I do have some different approaches i

[ceph-users] Disk device path changed - cephadm failed to apply osd service

2023-08-01 Thread Kilian Ries
Hi, it seems that after reboot / OS update my disk labels / device paths may have changed. Since then I get an error like this: CEPHADM_APPLY_SPEC_FAIL: Failed to apply 1 service(s): osd.osd-12-22_hdd-2 ### RuntimeError: cephadm exited with an error code: 1, stderr:Non-zero exit code 1
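
A few commands that can help narrow this down by showing the stored spec and the devices cephadm currently sees after the reboot; the service id is the one from the error message above:

    # Dump the OSD service specs as cephadm has them stored
    ceph orch ls osd --export

    # Inventory of devices per host as cephadm sees them now
    ceph orch device ls

    # Full text of the apply failure
    ceph health detail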

[ceph-users] Re: ref v18.2.0 QE Validation status

2023-08-01 Thread Venky Shankar
On Tue, Aug 1, 2023 at 5:55 PM Venky Shankar wrote: > > On Tue, Aug 1, 2023 at 1:21 AM Yuri Weinstein wrote: > > > > Pls see the updated test results and Release Notes PR > > https://github.com/ceph/ceph/pull/52490 > > > > Still seeking approvals: > > smoke - Laura, Radek, Venky > > rados - Radek

[ceph-users] Re: ref v18.2.0 QE Validation status

2023-08-01 Thread Laura Flores
Rados failures are summarized here: https://tracker.ceph.com/projects/rados/wiki/REEF#Reef-v1820 All are known. Will let Radek give the final ack. On Tue, Aug 1, 2023 at 9:05 AM Nizamudeen A wrote: > dashboard approved! failure is unrelated and tracked via > https://tracker.ceph.com/issues/5894

[ceph-users] veeam backup on rgw - error - op->ERRORHANDLER: err_no=-2 new_err_no=-2

2023-08-01 Thread xadhoom76
Hi, we have random errors with rgw during a backup from Veeam. Daemons go into an error state. Where can we find the appropriate logs about it? I just found something related to this: -1788> 2023-07-31T19:51:21.169+ 7f04567d3700 2 req 10656715914266436796 0.0s getting op 0 -1787> 2
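
A sketch of how to get more detailed RGW logging while reproducing the failure, assuming a cephadm deployment; the daemon name is a placeholder, and the debug level should be lowered again afterwards:

    # Find the RGW daemon names
    ceph orch ps | grep rgw

    # Raise debug logging for one RGW instance (placeholder name)
    ceph config set client.rgw.default.host1.abcdef debug_rgw 10

    # With cephadm, the daemon's log ends up in journald on the host running it
    cephadm logs --name rgw.default.host1.abcdef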

[ceph-users] Re: ref v18.2.0 QE Validation status

2023-08-01 Thread Nizamudeen A
dashboard approved! failure is unrelated and tracked via https://tracker.ceph.com/issues/58946 Regards, Nizam On Sun, Jul 30, 2023 at 9:16 PM Yuri Weinstein wrote: > Details of this release are summarized here: > > https://tracker.ceph.com/issues/62231#note-1 > > Seeking approvals/reviews for:

[ceph-users] Re: ref v18.2.0 QE Validation status

2023-08-01 Thread Venky Shankar
On Tue, Aug 1, 2023 at 1:21 AM Yuri Weinstein wrote: > > Pls see the updated test results and Release Notes PR > https://github.com/ceph/ceph/pull/52490 > > Still seeking approvals: > smoke - Laura, Radek, Venky > rados - Radek, Laura, Nizamudeen > fs - Venky > orch - Adam King > powercycle - Brad

[ceph-users] Re: 1 Large omap object found

2023-08-01 Thread Eugen Block
Thanks. Just for reference I'm quoting the SUSE doc [1] you mentioned because it explains what you already summarized: User indices are not sharded, in other words we store all the keys of names of buckets under one object. This can cause large objects to be found. The large object is only
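
If you want to look at the object behind the warning, the cluster log names it and its omap keys can be counted directly; the pool, namespace and object name below are placeholders for what many RGW deployments use for the per-user bucket list:

    # ceph health detail / the cluster log identify the pool and object that tripped the warning
    ceph health detail

    # Count the omap keys (one per bucket name) on that object
    rados -p default.rgw.meta -N users.uid listomapkeys example-user.buckets | wc -l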

[ceph-users] Re: 1 Large omap object found

2023-08-01 Thread Mark Johnson
Here you go. It doesn't format very well, so I'll summarize what I'm seeing: 5.c has 78051 OMAP_BYTES and 398 OMAP_KEYS; 5.16 has 80186950 OMAP_BYTES and 401505 OMAP_KEYS. The remaining 30 PGs have zero of both. However, the BYTES value for each PG is much the same, at around 890 for each.
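
For reference, per-PG omap statistics like these can be pulled with the commands below; the pool name is a placeholder for whichever pool has id 5 here:

    # Map pool id 5 (from the PG ids above) to its pool name
    ceph osd pool ls detail | grep "pool 5 "

    # OMAP_BYTES* and OMAP_KEYS* columns per PG of that pool
    ceph pg ls-by-pool default.rgw.meta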

[ceph-users] Re: MDS nodes blocklisted

2023-08-01 Thread Eugen Block
You could add (debug) logs for starters ;-) There was a thread [1] describing something quite similar, pointing to this bug report [2]. In recent versions it's supposed to be fixed, although I don't see the tracker or PR number in the release notes of either pacific or quincy. Can you verify
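
A sketch of the "add debug logs" suggestion, plus a quick look at the current blocklist; the debug levels are examples and should be turned back down afterwards:

    # Raise MDS logging
    ceph config set mds debug_mds 10
    ceph config set mds debug_ms 1

    # Show current blocklist entries
    ceph osd blocklist ls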