Hi Priya,
there was a GC-related issue that has been fixed:
https://tracker.ceph.com/issues/38454
Luminous backport:
https://tracker.ceph.com/issues/38714
PR is here:
https://github.com/ceph/ceph/pull/26601
Try updating your cluster to the latest Luminous release.
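For anyone hitting this, a rough way to see whether stale GC entries are piling up and to confirm daemon versions before/after the upgrade (standard radosgw-admin/ceph CLI; exact output format varies by release):

  # list pending (and not-yet-due) garbage collection entries
  radosgw-admin gc list --include-all | head

  # manually trigger a GC pass (normally runs on its own schedule)
  radosgw-admin gc process --include-all

  # confirm every daemon is on the intended Luminous point release
  ceph versions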
-
On Mon, Dec 7, 2020 at 1:28 PM Janek Bevendorff
wrote:
>
>
> > This sounds like there is one or a few clients acquiring too many
> > caps. Have you checked this? Are there any messages about the OOM
> > killer? What config changes for the MDS have you made?
>
> Yes, it's individual clients acquiri
Hi Patrick,
I haven't gone through this thread yet but I want to note for those
reading that we do now have documentation (thanks for the frequent
pokes Janek!) for the recall configurations:
https://docs.ceph.com/en/latest/cephfs/cache-configuration/#mds-recall
Please let us know if it's missi
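For reference, the knobs described on that page can be applied cluster-wide via the centralized config; the values below are purely illustrative, not recommendations:

  # caps-recall tuning lives in the mds section; adjust to your workload
  ceph config set mds mds_recall_max_caps 30000
  ceph config set mds mds_recall_max_decay_rate 1.5
  ceph config set mds mds_recall_global_max_decay_threshold 131072

  # verify what a running MDS actually picked up
  ceph config show mds.<name> | grep mds_recall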
This sounds like there is one or a few clients acquiring too many
caps. Have you checked this? Are there any messages about the OOM
killer? What config changes for the MDS have you made?
Yes, it's individual clients acquiring too many caps. I first ran the
adjusted recall settings you suggeste
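In case it helps others following along: one way to see which clients hold the most caps is to dump the MDS sessions and sort by num_caps (a rough sketch; the mds target and field names can differ slightly between releases, and jq is assumed to be available):

  # top cap holders on rank 0 of filesystem <fsname>
  ceph tell mds.<fsname>:0 session ls 2>/dev/null \
    | jq -r '.[] | "\(.num_caps)\t\(.client_metadata.hostname // .entity.addr.addr)"' \
    | sort -rn | head -20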
Yes, we use virtio-scsi everywhere (via KVM with discard='unmap'). 'lsblk
--discard' also shows that discard is supported. VMs with an xfs filesystem
seem to behave better.
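For reference, this is roughly how to verify it end to end inside a guest (the libvirt side needs the disk on a virtio-scsi controller with discard='unmap' in the <driver> element; device names below are only examples):

  # inside the guest: non-zero DISC-GRAN/DISC-MAX means discard reaches the disk
  lsblk --discard /dev/sda

  # same information from sysfs
  cat /sys/block/sda/queue/discard_max_bytes

  # actually issue the trim and report how much was released
  fstrim -v /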
-Original Message-
Cc: lordcirth; ceph-users
Subject: Re: [ceph-users] Re: guest fstrim not showing free space
What driver
I should have started out by asking how the RBD is mounted. Directly
on a host or through a hypervisor like KVM?
What driver did you use to mount the volumes? I believe only
virtio-scsi supports fstrim commands.
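A quick way to check which bus/driver a VM's disks use from the hypervisor side (a sketch; the domain name is a placeholder and the exact XML attributes depend on how the guest was defined):

  # look for bus='scsi' plus discard='unmap' on the RBD-backed disks
  virsh dumpxml <domain> | grep -E "driver name=|target dev=|bus=|discard="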
Is there a solution for this? I have some more old VMs with hundreds
of GBs of free filesystem space.
-Original Message-
To: lordcirth
Cc: ceph-users
Subject: [ceph-users] Re: guest fstrim not showing free space
Yes! Indeed an old one, still with ext4.
-Original Message-
Sent: Monday, D
Hi Dan & Janek,
On Sat, Dec 5, 2020 at 6:26 AM Dan van der Ster wrote:
> My understanding is that the recall thresholds (see my list below)
> should be scaled proportionally. OTOH, I haven't played with the decay
> rates (and don't know if there's any significant value to tuning
> those).
I have
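To make the "scaled proportionally" part concrete, the idea is to read the current thresholds and then raise the related ones by the same factor (a sketch only; the placeholder values are not recommendations):

  # read the current values first
  ceph config get mds mds_recall_max_caps
  ceph config get mds mds_recall_max_decay_threshold
  ceph config get mds mds_recall_global_max_decay_threshold

  # then raise them all by the same factor, e.g. doubling whatever was read
  ceph config set mds mds_recall_max_caps <2x current>
  ceph config set mds mds_recall_max_decay_threshold <2x current>
  ceph config set mds mds_recall_global_max_decay_threshold <2x current>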
On Sat, Dec 5, 2020 at 5:41 AM Janek Bevendorff
wrote:
>
> On 05/12/2020 09:26, Dan van der Ster wrote:
> > Hi Janek,
> >
> > I'd love to hear your standard maintenance procedures. Are you
> > cleaning up those open files outside of "rejoin" OOMs ?
>
> No, of course not. But those rejoin problems
Thanks for bringing this up.
We need to update Cheroot in Fedora and EPEL 8. I've opened
https://src.fedoraproject.org/rpms/python-cheroot/pull-request/3 to
get this into Fedora first.
I've published an el8 RPM at
https://fedorapeople.org/~ktdreyer/bz1868629/ for early testing. I can
bring up a "
Hi,
We have a ceph 15.2.7 deployment using cephadm under podman w/ systemd.
We've run into what we believe is:
https://github.com/ceph/ceph-container/issues/1748
https://tracker.ceph.com/issues/47875
In our case, eventually the mgr container stops emitting output/logging. We
are polling with ext
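Until the underlying issue is fixed, the only workaround we know of is to bounce the mgr; something along these lines (unit and daemon names are examples, cephadm generates them per fsid/host):

  # find the active mgr, then fail over to a standby
  ceph mgr dump | grep active_name
  ceph mgr fail <active-mgr-name>

  # or restart the container's systemd unit on the host running the active mgr
  systemctl restart ceph-<fsid>@mgr.<hostname>.<id>.service

  # recent container logs for that mgr daemon
  cephadm logs --name mgr.<hostname>.<id> -- -n 100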
Yes! Indeed an old one, still with ext4.
-Original Message-
Sent: Monday, December 07, 2020 3:58 PM
Cc: ceph-users
Subject: Re: [ceph-users] guest fstrim not showing free space
Is the VM's / ext4?
On Sun., Dec. 6, 2020, 12:57 p.m. Marc Roos,
wrote:
I have a 74GB vm with 34466
Is the VM's / ext4?
On Sun., Dec. 6, 2020, 12:57 p.m. Marc Roos,
wrote:
>
>
> I have a 74GB VM with 34466MB of free space, but after I run fstrim /,
> 'rbd du' still shows 60GB used.
> When I fill the 34GB of space with an image, delete it, and run fstrim
> again, 'rbd du' still shows 59GB used.
>
>
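For completeness, the round trip usually tested looks like this (pool/image names are placeholders; usage only shrinks if the discards actually reach RBD, which is what the virtio-scsi/discard='unmap' discussion elsewhere in this thread is about):

  # inside the guest
  fstrim -v /

  # on a client with access to the pool: per-image space usage
  rbd du <pool>/<image>

  # if the guest cannot issue discards at all, zero the free space in the
  # guest (e.g. with dd, then delete the zero file) and reclaim the zeroed
  # extents from the Ceph side (available in Nautilus and later)
  rbd sparsify <pool>/<image>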
All:
I was recently tasked with building and implementing Ceph in an
environment where FIPS cryptography is strictly enforced. As such, I ran into
several issues regarding Ceph's use of low-level cryptographic functions, since
those are strictly forbidden when OpenSSL is in FIPS mode. The
Never mind, when I enable it on a busier directory, I do see new
ephemeral pins popping up. Just not on the directories I set it on
originally. Let's see how that holds up.
On 07/12/2020 13:04, Janek Bevendorff wrote:
Thanks. I tried playing around a bit with
mds_export_ephemeral_distribute
Thanks. I tried playing around a bit with
mds_export_ephemeral_distributed just now, because it's pretty much the
same thing that your script does manually. Unfortunately, it seems to
have no effect.
I pinned all top-level directories to mds.0 and then enabled
ceph.dir.pin.distributed for a f
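For anyone trying the same thing, the steps are roughly as follows (paths are examples; the vxattrs are set on a CephFS mount, and as noted above the effect may only become visible once the directories see some activity):

  # make sure the policy is enabled on the MDS (off by default in some releases)
  ceph config set mds mds_export_ephemeral_distributed true

  # distribute the immediate children of this directory across all active ranks
  setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/home

  # a conventional static pin of a top-level directory to rank 0, for comparison
  setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/projects

  # read back the policy flag
  getfattr -n ceph.dir.pin.distributed /mnt/cephfs/home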
Hi Jason,
As we discussed last time, after setting conf_rbd_qos_bps_limit the speed of
discard is also limited,
which can make operations such as mkfs.xfs very slow. Though we can add the
-K option to solve this problem,
we can't make sure that other operations or applications never call discard
inter
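For the mkfs case specifically, skipping the initial discard pass avoids the throttle (a sketch; device names are examples, and the ext4 line assumes -E nodiscard behaves analogously):

  # xfs: -K skips discarding the whole device at mkfs time
  mkfs.xfs -K /dev/vdb

  # ext4: the analogous option
  mkfs.ext4 -E nodiscard /dev/vdc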
On Mon, Dec 7, 2020 at 10:39 AM Janek Bevendorff
wrote:
>
>
> > What exactly do you set to 64k?
> > We used to set mds_max_caps_per_client to 5, but once we started
> > using the tuned caps recall config, we reverted that back to the
> > default 1M without issue.
>
> mds_max_caps_per_client. A
What exactly do you set to 64k?
We used to set mds_max_caps_per_client to 5, but once we started
using the tuned caps recall config, we reverted that back to the
default 1M without issue.
mds_max_caps_per_client. As I mentioned, some clients hit this limit
regularly and they aren't entir
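For reference, the limit being discussed is a per-client cap count and can be inspected or changed like this (the second line is only an example matching the 64k mentioned above, not a recommendation):

  # current value (the shipped default is 1048576, i.e. 1M)
  ceph config get mds mds_max_caps_per_client

  # example override
  ceph config set mds mds_max_caps_per_client 65536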
On Mon, Dec 7, 2020 at 9:42 AM Janek Bevendorff
wrote:
>
> Thanks, Dan!
>
> I have played with many thresholds, including the decay rates. It is
> indeed very difficult to assess their effects, since our workloads
> differ widely depending on what people are working on at the moment. I
> would nee
Hi all, I have just done a 15.2.7 installation with 3 mon, 4 osd, and
3 rgw daemons. However, after enabling the rgw dashboard, the buckets page
pops up a 500 internal error, while users and daemons are listed fine in the
dashboard, and radosgw-admin can list everything without problems as well.
I've tried to t
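Hard to say more without a traceback, but a useful first step is turning on the dashboard's debug mode so the 500 page and the mgr log show the actual Python exception (daemon names are examples; the log command assumes a cephadm deployment, otherwise use journalctl on the active mgr host):

  # show the full traceback on HTTP 500 responses
  ceph dashboard debug enable

  # reproduce the error, then check the active mgr's log
  cephadm logs --name mgr.<hostname>.<id> -- -n 200

  # turn debug mode off again afterwards
  ceph dashboard debug disable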
Thanks, Dan!
I have played with many thresholds, including the decay rates. It is
indeed very difficult to assess their effects, since our workloads
differ widely depending on what people are working on at the moment. I
would need to develop a proper benchmarking suite to simulate the
differe