[ceph-users] Re: Different behaviors for ceph kernel client in limiting IOPS when data pool enters `nearfull`?

2023-11-16 Thread Xiubo Li
On 11/16/23 22:39, Ilya Dryomov wrote: On Thu, Nov 16, 2023 at 3:21 AM Xiubo Li wrote: Hi Matt, On 11/15/23 02:40, Matt Larson wrote: On CentOS 7 systems with the CephFS kernel client, if the data pool has a `nearfull` status there is a slight reduction in write speeds (possibly 20-50% fewer
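
The nearfull threshold that drives this behavior is cluster-wide. A rough sketch for inspecting it (the ratio value below is an example, not a recommendation):

    ceph health detail                 # reports OSD_NEARFULL / POOL_NEARFULL
    ceph osd dump | grep full_ratio    # shows full, backfillfull and nearfull ratios
    ceph osd set-nearfull-ratio 0.85   # example value only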

[ceph-users] Re: Problem while upgrade 17.2.6 to 17.2.7

2023-11-16 Thread Nizamudeen A
Hi, I think it should be in /var/log/ceph/ceph-mgr..log, probably you can reproduce this error again and hopefully you'll be able to see a python traceback or something related to rgw in the mgr logs. Regards On Thu, Nov 16, 2023 at 7:43 PM Jean-Marc FONTANA wrote: > Hello, > > These are the l
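
A rough way to dig that traceback out (the hostname and fsid below are placeholders; containerized deployments keep logs under /var/log/ceph/<fsid>/):

    grep -B 5 -A 20 Traceback /var/log/ceph/ceph-mgr.*.log
    # on cephadm deployments, the daemon journal is often easier:
    cephadm logs --name mgr.<hostname> | grep -B 5 -A 20 Traceback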

[ceph-users] Does cephfs ensure close-to-open consistency after enabling lazyio?

2023-11-16 Thread Jianjun Zheng
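
For context, LazyIO deliberately relaxes CephFS's usual coherent-caching guarantees in exchange for parallel I/O performance, which is what makes this question interesting. One common way it is switched on is per client, sketched here with the client_force_lazyio option (which forces LazyIO for every file the client opens):

    # /etc/ceph/ceph.conf on the client, then remount with ceph-fuse
    [client]
        client_force_lazyio = true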

[ceph-users] CFP closing soon: Everything Open 2024 (Gladstone, Queensland, Australia, April 16-18)

2023-11-16 Thread Tim Serong
Everything Open (auspiced by Linux Australia) is happening again in 2024. The CFP closes at the end of this weekend (November 19): https://2024.everythingopen.au/programme/proposals/ More details below. Forwarded Message Date: Sun, 15 Oct 2023 09:16:31 +1000 From: Everythi

[ceph-users] Re: Upgrading From RHCS v4 to OSS Ceph

2023-11-16 Thread Torkil Svensgaard
On 13-11-2023 22:35, jarul...@uwyo.edu wrote: Hi everyone, I have a storage cluster running RHCS v4 (old, I know) and am looking to upgrade it soon. I would also like to migrate from RHCS to the open source version of Ceph at some point, as our support contract with RedHat for Ceph is likely g
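
Before planning such a migration it helps to confirm which upstream release the cluster is actually running (RHCS 4 tracks Nautilus, as far as I recall):

    ceph versions   # per-daemon upstream version map
    ceph -s         # confirm HEALTH_OK before any upgrade step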

[ceph-users] really need help: how to save old clients from a hang?

2023-11-16 Thread zxcs
Hi, Experts, we have a CephFS cluster on 16.2.* running with multiple active MDS daemons, and some old machines running Ubuntu 16.04, so we mount those clients using ceph-fuse. After a full MDS process restart, none of these old Ubuntu 16.04 clients can connect to Ceph; `ls -lrth` or `df -hT` hang o
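
A common recovery path for stuck ceph-fuse clients is to evict the stale session and remount. A sketch (eviction discards any dirty client state, and the mount point is a placeholder):

    ceph tell mds.0 session ls                     # find the stuck client's id
    ceph tell mds.0 session evict id=<client-id>
    umount -l /mnt/cephfs && ceph-fuse /mnt/cephfs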

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-16 Thread Travis Nielsen
Rook already ran the tests against Guillaume's change directly, it looks good to us. I don't see a new latest-reef-devel image tag yet, but will plan on rerunning the tests when that tag is updated. Thanks, Travis On Thu, Nov 16, 2023 at 8:27 AM Adam King wrote: > Guillaume ran that patch throu

[ceph-users] Re: Different behaviors for ceph kernel client in limiting IOPS when data pool enters `nearfull`?

2023-11-16 Thread Ilya Dryomov
On Thu, Nov 16, 2023 at 5:26 PM Matt Larson wrote: > > Ilya, > > Thank you for providing these discussion threads on the kernel fixes > where there was a change, and details on how this affects the clients. > > What is the expected behavior in the CephFS client when there are multiple data > pools

[ceph-users] Re: Different behaviors for ceph kernel client in limiting IOPS when data pool enters `nearfull`?

2023-11-16 Thread Matt Larson
Ilya, Thank you for providing these discussion threads on the kernel fixes where there was a change, and details on how this affects the clients. What is the expected behavior in the CephFS client when there are multiple data pools in the CephFS? Does having 'nearfull' in any data pool in the CephFS
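
To answer the "which pool" part empirically, one can check which pools are actually flagged. A sketch:

    ceph fs ls                                  # data pools attached to each filesystem
    ceph health detail | grep -i nearfull       # names the affected pool(s)
    ceph osd pool ls detail | grep -i nearfull  # pools carrying the nearfull flag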

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-16 Thread Adam King
Guillaume ran that patch through the orch suite earlier today before merging. I think we should be okay on that front. The issue it's fixing was also particular to rook iirc, which teuthology doesn't cover. On Thu, Nov 16, 2023 at 10:18 AM Yuri Weinstein wrote: > OK I will start building. > > Tr

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-16 Thread Yuri Weinstein
OK I will start building. Travis, Adam King - any need to rerun any suites? On Thu, Nov 16, 2023 at 7:14 AM Guillaume Abrioux wrote: > > Hi Yuri, > > > > Backport PR [2] for reef has been merged. > > > > Thanks, > > > > [2] https://github.com/ceph/ceph/pull/54514/files > > > > -- > > Guillaume A

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-16 Thread Guillaume Abrioux
Hi Yuri, Backport PR [2] for reef has been merged. Thanks, [2] https://github.com/ceph/ceph/pull/54514/files -- Guillaume Abrioux Software Engineer From: Guillaume Abrioux Date: Wednesday, 15 November 2023 at 21:02 To: Yuri Weinstein , Nizamudeen A , Guillaume Abrioux , Travis Nielsen Cc: A

[ceph-users] Re: Debian 12 support

2023-11-16 Thread kefu chai
On Thu, Nov 16, 2023 at 2:31 AM Gregory Farnum wrote: > There are versioning and dependency issues (both of packages, and compiler > toolchain pieces) which mean that the existing reef releases do not build > on Debian. Our upstream support for Debian has always been inconsistent > i think, to b

[ceph-users] Re: Different behaviors for ceph kernel client in limiting IOPS when data pool enters `nearfull`?

2023-11-16 Thread Ilya Dryomov
On Thu, Nov 16, 2023 at 3:21 AM Xiubo Li wrote: > > Hi Matt, > > On 11/15/23 02:40, Matt Larson wrote: > > On CentOS 7 systems with the CephFS kernel client, if the data pool has a > > `nearfull` status there is a slight reduction in write speeds (possibly > > 20-50% fewer IOPS). > > > > On a simi

[ceph-users] Re: Problem while upgrade 17.2.6 to 17.2.7

2023-11-16 Thread Jean-Marc FONTANA
Hello, These are the last lines of /var/log/ceph/cephadm.log on the active mgr machine after an error occurred. As I don't feel this will be very helpful, would you please tell us where to look? Best regards, JM Fontana 2023-11-16 14:45:08,200 7f341eae8740 DEBUG --
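
With cephadm, the daemon's own log usually lands in journald rather than in cephadm.log; something along these lines may be more telling (fsid and hostname are placeholders):

    journalctl -u ceph-<fsid>@mgr.<hostname>.service --since "1 hour ago"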

[ceph-users] Re: [CEPH] OSD Memory Usage

2023-11-16 Thread Nguyễn Hữu Khôi
Hello. Great information, I will keep it in mind. Thank you :) Nguyen Huu Khoi On Thu, Nov 16, 2023 at 5:51 PM Janne Johansson wrote: > On Thu, 16 Nov 2023 at 08:43, Nguyễn Hữu Khôi wrote: > > > > Hello, > > Yes, I see it does not exceed RSS, but in "ceph orch ps" it is > over > > targe

[ceph-users] Re: How to configure something like osd_deep_scrub_min_interval?

2023-11-16 Thread Frank Schilder
Partially answering my own question. I think it is possible to tweak the existing parameters to achieve what I'm looking for on average. The main reason I want to use the internal scheduler is the high number of PGs on some pools, which I actually intend to increase even further. For such pools
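
For reference, the existing interval knobs being tweaked here are roughly these (values are examples in seconds, not recommendations):

    ceph config set osd osd_scrub_min_interval 86400         # 1 day
    ceph config set osd osd_scrub_max_interval 604800        # 7 days
    ceph config set osd osd_deep_scrub_interval 1209600      # 14 days
    ceph config set osd osd_deep_scrub_randomize_ratio 0.15  # spreads deep scrubs out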

[ceph-users] Re: Problem while upgrade 17.2.6 to 17.2.7

2023-11-16 Thread Nizamudeen A
Hello, can you also add the mgr logs at the time of this error? Regards, On Thu, Nov 16, 2023 at 4:12 PM Jean-Marc FONTANA wrote: > Hello David, > > We tried what you pointed in your message. First, it was set to > > "s3, s3website, swift, swift_auth, admin, sts, iam, subpub" > > We tried to s

[ceph-users] Re: [CEPH] OSD Memory Usage

2023-11-16 Thread Janne Johansson
On Thu, 16 Nov 2023 at 08:43, Nguyễn Hữu Khôi wrote: > > Hello, > Yes, I see it does not exceed RSS, but in "ceph orch ps" it is over > target. Does MEM USE include cache, am I right? > > NAME  HOST  PORTS  STATUS  REFRESHED > AGE  MEM USE  MEM LIM  VER
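
For what it's worth, osd_memory_target is a best-effort target that the OSD cache autotuner aims for, caches included, rather than a hard limit; it can be inspected or adjusted like this (the value is an example):

    ceph config get osd osd_memory_target
    ceph config set osd osd_memory_target 4294967296   # 4 GiB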

[ceph-users] Re: Debian 12 support

2023-11-16 Thread Luke Hall
That's really useful to know, thanks Daniel. On 15/11/2023 19:07, Daniel Baumann wrote: On 11/15/23 19:52, Daniel Baumann wrote: for 18.2.0, there's only one trivial thing needed: https://git.progress-linux.org/packages/graograman-backports-extras/ceph/commit/?id=ed59c69244ec7b81ec08f7a2d1a1f0a

[ceph-users] Re: remove spurious data

2023-11-16 Thread Janne Johansson
On Thu, 16 Nov 2023 at 00:30, Giuliano Maggi wrote: > Hi, > > I’d like to remove some “spurious" data: > > root@nerffs03:/# ceph df > --- RAW STORAGE --- > CLASS  SIZE  AVAIL  USED  RAW USED  %RAW USED > hdd  1.0 PiB  1.0 PiB  47 GiB  47 GiB  0 > TOTAL  1.0 PiB  1.0 PiB  47 GiB
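
To locate where that raw usage actually lives before removing anything, a rough sketch:

    ceph df detail           # per-pool stored vs. raw usage
    rados df                 # per-pool object and byte counts
    ceph osd pool ls detail  # pool ids, replication and flags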

[ceph-users] Re: Problem while upgrade 17.2.6 to 17.2.7

2023-11-16 Thread Jean-Marc FONTANA
Hello David, We tried what you pointed in your message. First, it was set to "s3, s3website, swift, swift_auth, admin, sts, iam, subpub" We tried to set it to "s3, s3website, swift, swift_auth, admin, sts, iam, subpub, notifications" and then to "s3, s3website, swift, swift_auth, admin, sts,
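
Assuming the option being edited here is rgw_enabled_apis (the thread does not name it, so that is an assumption), the change would look roughly like this (the service name is a placeholder):

    ceph config set client.rgw rgw_enabled_apis "s3, s3website, swift, swift_auth, admin, sts, iam, notifications"
    ceph orch restart rgw.<service>   # restart so the gateways pick up the change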

[ceph-users] Re: [CEPH] OSD Memory Usage

2023-11-16 Thread Nguyễn Hữu Khôi
Hello. I will read more about it. Thank you :) Nguyen Huu Khoi On Thu, Nov 16, 2023 at 3:21 PM Zakhar Kirpichenko wrote: > Orch ps seems to show virtual size set instead of resident size set. > > /Z > > On Thu, 16 Nov 2023 at 09:43, Nguyễn Hữu Khôi > wrote: > >> Hello, >> Yes, I see it does

[ceph-users] planning upgrade from pacific to quincy

2023-11-16 Thread Simon Oosthoek
Hi All (apologies if you get this again; I suspect mails from my @science.ru.nl account get dropped by most receiving mail servers, due to the strict DMARC policy (p=reject) in place). After a long while in HEALTH_ERR state (due to an unfound object, which we eventually decided to "forget"), we ar
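
For a cephadm-managed cluster, the quincy upgrade itself is largely the following sketch (package-based installs instead upgrade daemons in mon, mgr, osd, mds order; the point release is an example):

    ceph orch upgrade start --ceph-version 17.2.7
    ceph orch upgrade status   # monitor progress
    ceph versions              # confirm all daemons converged afterwards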

[ceph-users] Re: Ceph Leadership Team Meeting Minutes Nov 15, 2023

2023-11-16 Thread Janne Johansson
> Docs question: https://tracker.ceph.com/issues/11385: Can a member of the > community just raise a PR attempting to standardize commands, without > coordinating with a team? In this case I think I would recommend having both "rm" and "del" do the same thing. I agree that this kind of mixup cau

[ceph-users] Re: [CEPH] OSD Memory Usage

2023-11-16 Thread Zakhar Kirpichenko
Orch ps seems to show the virtual set size instead of the resident set size. /Z On Thu, 16 Nov 2023 at 09:43, Nguyễn Hữu Khôi wrote: > Hello, > Yes, I see it does not exceed RSS, but in "ceph orch ps" it is over > target. Does MEM USE include cache, am I right? > > NAME  HOST
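
A quick way to test that hypothesis on an OSD host is to compare both columns straight from ps (the process selection is illustrative):

    ps -eo pid,vsz,rss,comm | grep ceph-osd   # VSZ vs. RSS, in KiB, per daemon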