[ceph-users] Re: Octopus on Ubuntu 20.04.6 LTS with kernel 5

2023-05-11 Thread Gerdriaan Mulder
As a data point: we've been running Octopus (solely for CephFS) on Ubuntu 20.04 with 5.4.0(-122) for some time now, with packages from download.ceph.com. On 11/05/2023 07.12, Szabo, Istvan (Agoda) wrote: I can answer my question, even in the official ubuntu repo they are using by default the

[ceph-users] Re: Octopus on Ubuntu 20.04.6 LTS with kernel 5

2023-05-11 Thread Ilya Dryomov
On Thu, May 11, 2023 at 7:13 AM Szabo, Istvan (Agoda) wrote: > I can answer my question, even in the official ubuntu repo they are using by default the octopus version so for sure it works with kernel 5. > https://packages.ubuntu.com/focal/allpackages > -Original Message- > From

[ceph-users] Re: mds dump inode crashes file system

2023-05-11 Thread Frank Schilder
Dear Xiubo, thanks for your reply. > BTW, did you enable the async dirop? Currently this is disabled by default in 4.18.0-486.el8.x86_64. I have never heard about that option until now. How do I check that and how do I disable it if necessary? I'm in meetings pretty much all day and will t
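
[Editorial sketch, not part of the original mail: on kernel CephFS clients the async dirop behaviour is, to my understanding, governed by the wsync/nowsync mount options, so checking and disabling it might look roughly like this; the mount point and monitor host are placeholders.]
    # check whether the mount was created with async dirops (nowsync) enabled
    grep ceph /proc/mounts
    # if "nowsync" appears in the options, remounting with the default wsync
    # option should disable async dirops
    umount /mnt/cephfs
    mount -t ceph <mon-host>:/ /mnt/cephfs -o name=admin,wsync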

[ceph-users] Re: mds dump inode crashes file system

2023-05-11 Thread Frank Schilder
Dear Gregory, > I would start by looking at what xattrs exist and if there's an obvious bad one, deleting it. I can't see any obvious bad ones, and I also can't just delete them; they are required for ACLs. I'm not convinced that one of the xattrs that can be dumped with 'getfattr -d -m ".*"'
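
[Editorial sketch for context: inspecting and removing a single xattr with the standard tools; the file path and attribute name are placeholders.]
    # dump every xattr on the file (run as root to see trusted/security namespaces)
    getfattr -d -m ".*" /path/to/file
    # back up one suspect attribute's value, then remove it (destructive)
    getfattr -n user.some.attr --only-values /path/to/file > attr.bak
    setfattr -x user.some.attr /path/to/file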

[ceph-users] Re: mds dump inode crashes file system

2023-05-11 Thread Frank Schilder
Dear Gregory, sorry, forgot one question: I would like to create an intact copy of the file and move the "broken" one to a different location, hopefully being able to reproduce and debug the issue with the moved one. I need to preserve all info that samba attached to the file. If I do a "cp -p"
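
[Editorial note, not from the original mail: plain "cp -p" preserves mode, ownership and timestamps but not necessarily the extended attributes Samba stores, so something along these lines may be safer; paths are placeholders.]
    # GNU cp: --preserve=all includes xattrs where the filesystem allows it
    cp --preserve=all /cephfs/broken/file /cephfs/quarantine/file
    # or rsync, which copies xattrs (-X) and ACLs (-A) explicitly
    rsync -aAX /cephfs/broken/file /cephfs/quarantine/file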

[ceph-users] Re: Octopus on Ubuntu 20.04.6 LTS with kernel 5

2023-05-11 Thread Anthony D'Atri
As a KRBD client, I believe that 5.4 also introduces better support for RBD features, including fast-diff. > On May 11, 2023, at 3:59 AM, Gerdriaan Mulder wrote: > As a data point: we've been running Octopus (solely for CephFS) on Ubuntu 20.04 with 5.4.0(-122) for some time now, with packag
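
[Editorial illustration, not from the original mail: checking and enabling the feature on an image might look like this; pool and image names are placeholders, and fast-diff depends on exclusive-lock and object-map being enabled.]
    # list the feature bits currently set on an image
    rbd info rbd/myimage
    # enable object-map and fast-diff (assumes exclusive-lock is already on)
    rbd feature enable rbd/myimage object-map fast-diff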

[ceph-users] Re: mds dump inode crashes file system

2023-05-11 Thread Frank Schilder
Dear Xiubo, please see also my previous e-mail about the async dirop config. I have a bit more log output from dmesg on the file server here: https://pastebin.com/9Y0EPgDD . This covers a reboot after the one in my previous e-mail as well as another fail at the end. When I checked around 16:30

[ceph-users] Discussion thread for Known Pacific Performance Regressions

2023-05-11 Thread Mark Nelson
Hi Everyone, This email was originally posted to d...@ceph.io, but Marc mentioned that he thought this would be useful to post on the user list so I'm re-posting here as well. David Orman mentioned in the CLT meeting this morning that there are a number of people on the mailing list asking a

[ceph-users] Re: Radosgw multisite replication issues

2023-05-11 Thread Casey Bodley
On Tue, May 9, 2023 at 3:11 PM Tarrago, Eli (RIS-BCT) wrote: > East and West Clusters have been upgraded to quincy, 17.2.6. > We are still seeing replication failures. Deep diving the logs, I found the following interesting items. > What is the best way to continue to troubleshoot this?
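
[Editorial note for readers following along: the usual starting points for this kind of troubleshooting are the sync status and error-list commands; the bucket name is a placeholder.]
    # overall metadata/data sync state as seen from this zone
    radosgw-admin sync status
    # per-bucket view of replication progress
    radosgw-admin bucket sync status --bucket=mybucket
    # recorded sync errors, which often name the failing object and operation
    radosgw-admin sync error list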

[ceph-users] ceph status is warning with "266 pgs not deep-scrubbed in time"

2023-05-11 Thread Louis Koo
The ceph version is v16.2.10, but I had already disabled the ratio warning with "ceph config set mon mon_warn_pg_not_deep_scrubbed_ratio 0". It shows: [root@smd-node01 deeproute]# ceph config get mon mon_warn_pg_not_deep_scrubbed_ratio 0.00
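
[Editorial sketch, independent of the ratio setting: one can see which PGs raise the warning and kick deep scrubs manually; the pg id below is a placeholder.]
    # list the PGs behind on deep scrubbing
    ceph health detail
    # manually schedule a deep scrub on one of the listed PGs
    ceph pg deep-scrub 2.1f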

[ceph-users] Re: 17.2.6 Dashboard/RGW Signature Mismatch

2023-05-11 Thread Ondřej Kukla
Hello, I have found out that the issue seems to be in this change: https://github.com/ceph/ceph/pull/47207. When I commented out the change and replaced it with the previous value, the dashboard works as expected. Ondrej

[ceph-users] Recovering from OSD with corrupted DB

2023-05-11 Thread Jessica Sol
Hey all, I recently had two OSDs fail. The first one I just removed and expected replication to fix things for me. Replication froze, and I restarted the OSDs after seeing heartbeat failures. That allowed replication to resume, but one OSD's RocksDB became corrupted, showing this error when I try to b
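
[Editorial note, not from the original mail: a common first step is BlueStore's offline consistency check against the stopped OSD; whether repair helps depends entirely on the nature of the corruption. The OSD id is a placeholder.]
    # with the OSD daemon stopped, check the store and its RocksDB
    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-12
    # attempt to repair what fsck reported
    ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-12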

[ceph-users] CFP NOW OPEN: Ceph Days Vancouver Co-located with OpenInfra Summit

2023-05-11 Thread Mike Perez
Hi everyone, I invite you to Ceph Days Vancouver on June 15, co-located with the OpenInfra Summit. Ceph Days are one-day events dedicated to multiple breakout and BoF sessions with a wide range of topics around Ceph. You can receive a nice discount to attend Ceph Days and the OpenInfra Summit i

[ceph-users] multisite synchronization and multipart uploads

2023-05-11 Thread Yixin Jin
Hi folks,

[ceph-users] multisite sync and multipart uploads

2023-05-11 Thread Yixin Jin
Hi guys, With the Quincy release, does anyone know how multisite sync deals with multipart uploads? I mean those part objects of some incomplete multipart uploads. Are those objects also sync-ed over, either with full sync or incremental sync? I did a quick experiment and noticed that these objects a
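
[Editorial sketch for reproducing the experiment: the incomplete multipart uploads on each zone can be enumerated over the standard S3 API and compared; bucket name and endpoints are placeholders.]
    # list multipart uploads that were started but never completed or aborted
    aws s3api list-multipart-uploads --bucket mybucket --endpoint-url http://rgw-east:8080
    aws s3api list-multipart-uploads --bucket mybucket --endpoint-url http://rgw-west:8080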

[ceph-users] Re: multisite sync and multipart uploads

2023-05-11 Thread Casey Bodley
Sync doesn't distinguish between multipart and regular object uploads. Once a multipart upload completes, sync will replicate it as a single object using an S3 GetObject request. Replicating the parts individually would have some benefits. For example, when sync retries are necessary, we might only

[ceph-users] Re: Lua scripting in the rados gateway

2023-05-11 Thread Yuval Lifshitz
The following PRs should address the issues (feel free to review): the lib64 problem on CentOS 8: https://github.com/ceph/ceph/pull/51453 ; missing dependencies in the cephadm image: https://github.com/ceph/ceph-container/pull/2117 ; "script put" documentation: https://github.com/ceph/ceph/pull/51422 On
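
[Editorial sketch for anyone trying this out, assuming the documented radosgw-admin interface; the script file name is a placeholder and the context value follows the Lua scripting docs.]
    # install a script that runs before each request is processed
    radosgw-admin script put --infile=./myscript.lua --context=preRequest
    # show what is currently installed for that context
    radosgw-admin script get --context=preRequest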

[ceph-users] Re: docker restarting lost all managers accidentally

2023-05-11 Thread Ben
Along the path you mentioned, it was fixed by changing the owner of /var/lib/ceph from root to 167:167. The cluster was deployed with a non-root user, and the file permissions are in a bit of a mess. After the change, a systemctl daemon-reload and restart brings it up. For another manager on the bootstrap host, jo
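
[Editorial sketch of the fix described above, for a cephadm deployment; the fsid, hostname and daemon id are placeholders.]
    # cephadm containers run as the ceph user, uid/gid 167
    chown -R 167:167 /var/lib/ceph
    systemctl daemon-reload
    systemctl restart ceph-<fsid>@mgr.<hostname>.<id>.service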

[ceph-users] Re: client isn't responding to mclientcaps(revoke), pending pAsLsXsFsc issued pAsLsXsFsc

2023-05-11 Thread Xiubo Li
On 5/10/23 19:35, Frank Schilder wrote: Hi Xiubo. IMO evicting the corresponding client could also resolve this issue instead of restarting the MDS. Yes, it can get rid of the stuck caps release request, but it will also make any process accessing the file system crash. After a client evicti
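
[Editorial note for completeness: the eviction being discussed is typically done as below; the MDS name and client id are placeholders, and evicted clients are blocklisted, so their mounts usually need to be remounted.]
    # list sessions to find the stuck client's id
    ceph tell mds.<name> client ls
    # evict it
    ceph tell mds.<name> client evict id=<client-id>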

[ceph-users] Re: mds dump inode crashes file system

2023-05-11 Thread Xiubo Li
On 5/11/23 18:26, Frank Schilder wrote: Dear Xiubo, thanks for your reply. BTW, did you enable the async dirop? Currently this is disabled by default in 4.18.0-486.el8.x86_64. I have never heard about that option until now. How do I check that and how do I disable it if necessary? I'm in

[ceph-users] Re: mds dump inode crashes file system

2023-05-11 Thread Xiubo Li
On 5/11/23 20:12, Frank Schilder wrote: Dear Xiubo, please see also my previous e-mail about the async dirop config. I have a bit more log output from dmesg on the file server here: https://pastebin.com/9Y0EPgDD . 1. [Wed May 10 16:03:06 2023] ceph: corrupt snap message from mds1 2. [