[ceph-users] Re: Garbage Collection on Luminous

2020-12-07 Thread Zacharias Turing
Hi Priya, there was a GC-related issue that has been fixed: https://tracker.ceph.com/issues/38454 Luminous backport: https://tracker.ceph.com/issues/38714 PR is here: https://github.com/ceph/ceph/pull/26601 Try updating your cluster to the latest Luminous release.
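
For anyone hitting this, a quick way to see whether the RGW garbage-collection queue is backing up before and after the upgrade (standard radosgw-admin commands; flag availability can vary by release):

    radosgw-admin gc list --include-all | head    # inspect pending GC entries
    radosgw-admin gc process                      # trigger a GC pass manually instead of waiting for the next cycle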

[ceph-users] Re: MDS lost, Filesystem degraded and wont mount

2020-12-07 Thread Patrick Donnelly
On Mon, Dec 7, 2020 at 1:28 PM Janek Bevendorff wrote: > > > > This sounds like there is one or a few clients acquiring too many > > caps. Have you checked this? Are there any messages about the OOM > > killer? What config changes for the MDS have you made? > > Yes, it's individual clients acquiri
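
A rough sketch of the checks being suggested here (the MDS daemon name is a placeholder, and jq is only used to sort the session list):

    # any sign of the kernel OOM killer on the MDS host
    dmesg -T | grep -i 'out of memory'
    # sessions holding the most caps (num_caps is part of the session ls output)
    ceph tell mds.<name> session ls | jq -r '.[] | "\(.num_caps)\t\(.id)"' | sort -rn | head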

[ceph-users] Re: Provide more documentation for MDS performance tuning on large file systems

2020-12-07 Thread Janek Bevendorff
Hi Patrick, I haven't gone through this thread yet but I want to note for those reading that we do now have documentation (thanks for the frequent pokes Janek!) for the recall configurations: https://docs.ceph.com/en/latest/cephfs/cache-configuration/#mds-recall Please let us know if it's missi
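
For reference, the options covered on that page can be inspected with ceph config get; a minimal sketch, assuming the option names documented there:

    for opt in mds_recall_max_caps mds_recall_max_decay_rate \
               mds_recall_max_decay_threshold mds_recall_global_max_decay_threshold \
               mds_recall_warning_threshold mds_recall_warning_decay_rate; do
        echo -n "$opt = "; ceph config get mds "$opt"
    done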

[ceph-users] Re: MDS lost, Filesystem degraded and wont mount

2020-12-07 Thread Janek Bevendorff
This sounds like there is one or a few clients acquiring too many caps. Have you checked this? Are there any messages about the OOM killer? What config changes for the MDS have you made? Yes, it's individual clients acquiring too many caps. I first ran the adjusted recall settings you suggeste

[ceph-users] Re: guest fstrim not showing free space

2020-12-07 Thread Marc Roos
Yes, I use virtio-scsi everywhere (via KVM with discard='unmap'). 'lsblk --discard' also shows discard is supported. VMs with an xfs filesystem seem to behave better.
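
A quick end-to-end check of the discard path, assuming a libvirt guest; domain, pool and image names are placeholders:

    # hypervisor: the disk should carry discard='unmap'
    virsh dumpxml <domain> | grep -B2 -A4 "discard='unmap'"
    # guest: non-zero DISC-GRAN / DISC-MAX means discard reaches the virtual disk
    lsblk --discard
    # guest: trim, then compare usage on the Ceph side
    sudo fstrim -v /
    rbd du <pool>/<image>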

[ceph-users] Re: guest fstrim not showing free space

2020-12-07 Thread John Petrini
I should have started out by asking: how is the RBD mounted? Directly on a host or through a hypervisor like KVM?

[ceph-users] Re: guest fstrim not showing free space

2020-12-07 Thread John Petrini
What driver did you use to mount the volumes? I believe only virtio-scsi supports fstrim commands.

[ceph-users] Re: guest fstrim not showing free space

2020-12-07 Thread Marc Roos
Is there a solution for this? Because I have some more old VMs with hundreds of GBs of free filesystem space.
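
For old images whose guests never trimmed, one option (a sketch, assuming a Nautilus-or-later cluster; pool and image names are placeholders) is to zero the free space inside the guest and then deallocate the zeroed extents from the Ceph side:

    # inside the guest: fill the free space with zeros, then remove the file
    dd if=/dev/zero of=/zerofill bs=1M || true; sync; rm /zerofill
    # on a client with admin access: reclaim extents that now read as all zero
    rbd sparsify <pool>/<image>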

[ceph-users] Re: Provide more documentation for MDS performance tuning on large file systems

2020-12-07 Thread Patrick Donnelly
Hi Dan & Janek, On Sat, Dec 5, 2020 at 6:26 AM Dan van der Ster wrote: > My understanding is that the recall thresholds (see my list below) > should be scaled proportionally. OTOH, I haven't played with the decay > rates (and don't know if there's any significant value to tuning > those). I have
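
A sketch of what "scaling the recall thresholds proportionally" can look like in practice; the numbers below are illustrative only, so check the defaults for your release with ceph config help <option> first and multiply them by the same factor:

    ceph config set mds mds_recall_max_caps 30000
    ceph config set mds mds_recall_max_decay_threshold 131072
    ceph config set mds mds_recall_global_max_decay_threshold 262144
    ceph config set mds mds_recall_warning_threshold 262144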

[ceph-users] Re: MDS lost, Filesystem degraded and wont mount

2020-12-07 Thread Patrick Donnelly
On Sat, Dec 5, 2020 at 5:41 AM Janek Bevendorff wrote: > > On 05/12/2020 09:26, Dan van der Ster wrote: > > Hi Janek, > > > > I'd love to hear your standard maintenance procedures. Are you > > cleaning up those open files outside of "rejoin" OOMs ? > > No, of course not. But those rejoin problems

[ceph-users] Re: Larger number of OSDs, cheroot, cherrypy, limits + containers == broken

2020-12-07 Thread Ken Dreyer
Thanks for bringing this up. We need to update Cheroot in Fedora and EPEL 8. I've opened https://src.fedoraproject.org/rpms/python-cheroot/pull-request/3 to get this into Fedora first. I've published an el8 RPM at https://fedorapeople.org/~ktdreyer/bz1868629/ for early testing. I can bring up a "
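
To see which Cheroot version the mgr is actually running with (the container name is a placeholder; the same check works under docker):

    podman exec <mgr-container> python3 -c 'import cheroot; print(cheroot.__version__)'
    # or, on RPM-based images
    podman exec <mgr-container> rpm -q python3-cheroot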

[ceph-users] Larger number of OSDs, cheroot, cherrypy, limits + containers == broken

2020-12-07 Thread David Orman
Hi, We have a ceph 15.2.7 deployment using cephadm under podman w/ systemd. We've run into what we believe is: https://github.com/ceph/ceph-container/issues/1748 https://tracker.ceph.com/issues/47875 In our case, eventually the mgr container stops emitting output/logging. We are polling with ext
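
A way to confirm the file-descriptor exhaustion described in those trackers (the container name is a placeholder):

    MGR_PID=$(podman inspect --format '{{.State.Pid}}' <mgr-container>)
    grep 'open files' /proc/$MGR_PID/limits      # effective soft/hard FD limit
    ls /proc/$MGR_PID/fd | wc -l                 # FDs the mgr holds right now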

[ceph-users] Re: guest fstrim not showing free space

2020-12-07 Thread Marc Roos
Yes! Indeed an old one, still with ext4.

[ceph-users] Re: guest fstrim not showing free space

2020-12-07 Thread Nathan Fish
Is the VM's / ext4? On Sun., Dec. 6, 2020, 12:57 p.m. Marc Roos wrote: > I have a 74GB vm with 34466MB free space. But when I do fstrim, 'rbd du' still shows 60GB used. > When I fill the 34GB of space with an image, delete it, and do the fstrim again, 'rbd du' still shows 59GB used.

[ceph-users] Ceph in FIPS Validated Environment

2020-12-07 Thread Van Alstyne, Kenneth
All: I recently was tasked with building and implementing Ceph in an environment where FIPS cryptography is strictly enforced. As such, I ran into several issues regarding Ceph's use of low-level cryptographic functions since those are strictly forbidden when OpenSSL is in FIPS mode. The
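
For anyone reproducing this, a quick check of whether the host actually enforces FIPS and whether OpenSSL refuses MD5 there (illustrative):

    cat /proc/sys/crypto/fips_enabled     # 1 = kernel FIPS mode enabled
    echo test | openssl dgst -md5         # rejected by OpenSSL when FIPS is enforced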

[ceph-users] Re: Provide more documentation for MDS performance tuning on large file systems

2020-12-07 Thread Janek Bevendorff
Never mind, when I enable it on a busier directory, I do see new ephemeral pins popping up. Just not on the directories I set it on originally. Let's see how that holds up. On 07/12/2020 13:04, Janek Bevendorff wrote: Thanks. I tried playing around a bit with mds_export_ephemeral_distribute

[ceph-users] Re: Provide more documentation for MDS performance tuning on large file systems

2020-12-07 Thread Janek Bevendorff
Thanks. I tried playing around a bit with mds_export_ephemeral_distributed just now, because it's pretty much the same thing that your script does manually. Unfortunately, it seems to have no effect. I pinned all top-level directories to mds.0 and then enabled ceph.dir.pin.distributed for a f
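
For reference, a sketch of the two pinning mechanisms being compared here (paths and the MDS daemon name are placeholders):

    # the ephemeral policy is gated by a config option in Octopus
    ceph config set mds mds_export_ephemeral_distributed true
    # distribute the immediate children of a directory across ranks
    setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/some/dir
    # versus an explicit pin of a subtree to rank 0
    setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/other/dir
    # should show where subtrees ended up
    ceph tell mds.<daemon-name> get subtrees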

[ceph-users] Re: set rbd metadata 'conf_rbd_qos_bps_limit', make 'mkfs.xfs /dev/nbdX ' blocked

2020-12-07 Thread 912273...@qq.com
Hi Jason: As discussed last time, after setting conf_rbd_qos_bps_limit, the speed of discard is also limited, which can make operations such as mkfs.xfs very slow. Though we can add the -K option to work around this, we can't make sure that other operations or applications never call discard inter
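
For context, the combination being discussed (pool and image names are placeholders):

    # per-image override of the QoS limit via image metadata
    rbd image-meta set <pool>/<image> conf_rbd_qos_bps_limit 104857600   # 100 MiB/s
    # map via rbd-nbd and skip mkfs's initial discard so it is not throttled
    rbd-nbd map <pool>/<image>
    mkfs.xfs -K /dev/nbd0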

[ceph-users] Re: Provide more documentation for MDS performance tuning on large file systems

2020-12-07 Thread Dan van der Ster
On Mon, Dec 7, 2020 at 10:39 AM Janek Bevendorff wrote: > > > > What exactly do you set to 64k? > > We used to set mds_max_caps_per_client to 5, but once we started > > using the tuned caps recall config, we reverted that back to the > > default 1M without issue. > > mds_max_caps_per_client. A

[ceph-users] Re: Provide more documentation for MDS performance tuning on large file systems

2020-12-07 Thread Janek Bevendorff
What exactly do you set to 64k? We used to set mds_max_caps_per_client to 5, but once we started using the tuned caps recall config, we reverted that back to the default 1M without issue. mds_max_caps_per_client. As I mentioned, some clients hit this limit regularly and they aren't entir
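
For reference, the option under discussion:

    # current value (the default is 1048576, i.e. the "1M" mentioned above)
    ceph config get mds mds_max_caps_per_client
    # raise or lower it for all MDS daemons
    ceph config set mds mds_max_caps_per_client 1048576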

[ceph-users] Re: Provide more documentation for MDS performance tuning on large file systems

2020-12-07 Thread Dan van der Ster
On Mon, Dec 7, 2020 at 9:42 AM Janek Bevendorff wrote: > > Thanks, Dan! > > I have played with many thresholds, including the decay rates. It is > indeed very difficult to assess their effects, since our workloads > differ widely depending on what people are working on at the moment. I > would nee

[ceph-users] dashboard 500 internal error when listing buckets

2020-12-07 Thread levin ng
Hi all, I have just done a 15.2.7 installation with 3 mon, 4 osd and 3 rgw. However, after enabling the rgw dashboard, the buckets page pops up a 500 internal error, but users and daemons are listed fine in the dashboard, and radosgw-admin can list everything without problem as well. I've tried to t
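
A common cause of 500s on the RGW pages is the dashboard lacking working RGW API credentials; a sketch of wiring them up on Octopus (the user name is a placeholder, and newer releases expect the keys via -i <file>):

    radosgw-admin user create --uid=dashboard --display-name=Dashboard --system
    ceph dashboard set-rgw-api-access-key <access-key>
    ceph dashboard set-rgw-api-secret-key <secret-key>
    # if RGW uses a self-signed certificate
    ceph dashboard set-rgw-api-ssl-verify False
    # surface the backtrace behind the 500 in the dashboard/mgr log
    ceph dashboard debug enable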

[ceph-users] Re: Provide more documentation for MDS performance tuning on large file systems

2020-12-07 Thread Janek Bevendorff
Thanks, Dan! I have played with many thresholds, including the decay rates. It is indeed very difficult to assess their effects, since our workloads differ widely depending on what people are working on at the moment. I would need to develop a proper benchmarking suite to simulate the differe