On 11/16/23 22:39, Ilya Dryomov wrote:
On Thu, Nov 16, 2023 at 3:21 AM Xiubo Li wrote:
Hi Matt,
On 11/15/23 02:40, Matt Larson wrote:
On CentOS 7 systems with the CephFS kernel client, if the data pool has a
`nearfull` status, there is a slight reduction in write speeds (possibly
20-50% fewer
Hi,
I think it should be in /var/log/ceph/ceph-mgr..log. You can probably
reproduce this error again, and hopefully you'll be able to see a Python
traceback or something related to RGW in the mgr logs.
Regards
On Thu, Nov 16, 2023 at 7:43 PM Jean-Marc FONTANA
wrote:
> Hello,
>
> These are the l
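A minimal sketch of how that can be captured, assuming the mgr log lives under /var/log/ceph/ and that temporarily raising the mgr debug level is acceptable (the daemon name in the log filename will differ per host):

    # raise mgr verbosity, reproduce the failing rgw/dashboard action, then look for tracebacks
    ceph config set mgr debug_mgr 10
    grep -B2 -A20 'Traceback' /var/log/ceph/ceph-mgr.*.log
    # drop back to the default verbosity afterwards
    ceph config rm mgr debug_mgr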
Everything Open (auspiced by Linux Australia) is happening again in
2024. The CFP closes at the end of this weekend (November 19):
https://2024.everythingopen.au/programme/proposals/
More details below.
Forwarded Message
Date: Sun, 15 Oct 2023 09:16:31 +1000
From: Everythi
On 13-11-2023 22:35, jarul...@uwyo.edu wrote:
Hi everyone,
I have a storage cluster running RHCS v4 (old, I know) and am looking to upgrade
it soon. I would also like to migrate from RHCS to the open source version of
Ceph at some point, as our support contract with RedHat for Ceph is likely g
Hi, Experts,
we have a CephFS cluster on 16.2.* running with multiple active MDS daemons, and we have some
old machines running Ubuntu 16.04, so we mount these clients using ceph-fuse.
After a full MDS process restart, none of these old Ubuntu 16.04 clients can
connect to Ceph; `ls -lrth` or `df -hT` hang o
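A short diagnostic sketch for this situation, assuming a Pacific (16.2.x) cluster; the MDS name and mount point below are placeholders:

    # check whether the stuck fuse clients were evicted and blocklisted after the restart
    ceph health detail
    ceph tell mds.<name> session ls          # look for stale sessions from the 16.04 hosts
    ceph osd blocklist ls
    # an evicted ceph-fuse client usually needs a clean unmount and remount on the affected host
    fusermount -uz /mnt/cephfs
    ceph-fuse /mnt/cephfs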
Rook already ran the tests against Guillaume's change directly; it looks
good to us. I don't see a new latest-reef-devel image tag yet, but I will
plan on rerunning the tests when that tag is updated.
Thanks,
Travis
On Thu, Nov 16, 2023 at 8:27 AM Adam King wrote:
> Guillaume ran that patch throu
On Thu, Nov 16, 2023 at 5:26 PM Matt Larson wrote:
>
> Ilya,
>
> Thank you for providing these discussion threads on the kernel fixes
> where there was a change, and details on how this affects the clients.
>
> What is the expected behavior in the CephFS client when there are multiple data
> pools
Ilya,
Thank you for providing these discussion threads on the kernel fixes
where there was a change, and details on how this affects the clients.
What is the expected behavior in the CephFS client when there are multiple
data pools in the CephFS? Does having 'nearfull' in any data pool in the
CephFS
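For reference, a quick way to see which pool or OSDs are behind a nearfull flag and the thresholds involved; a sketch only, not specific to any particular kernel version:

    # which OSDs/pools are driving the nearfull warning
    ceph health detail
    ceph df detail
    # the cluster-wide ratios the flag is derived from
    ceph osd dump | grep -E 'nearfull_ratio|backfillfull_ratio|full_ratio'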
Guillaume ran that patch through the orch suite earlier today before
merging. I think we should be okay on that front. The issue it's fixing was
also particular to rook iirc, which teuthology doesn't cover.
On Thu, Nov 16, 2023 at 10:18 AM Yuri Weinstein wrote:
> OK I will start building.
>
> Tr
OK I will start building.
Travis, Adam King - any need to rerun any suites?
On Thu, Nov 16, 2023 at 7:14 AM Guillaume Abrioux wrote:
>
> Hi Yuri,
>
> Backport PR [2] for reef has been merged.
>
> Thanks,
>
> [2] https://github.com/ceph/ceph/pull/54514/files
>
> --
> Guillaume A
Hi Yuri,
Backport PR [2] for reef has been merged.
Thanks,
[2] https://github.com/ceph/ceph/pull/54514/files
--
Guillaume Abrioux
Software Engineer
From: Guillaume Abrioux
Date: Wednesday, 15 November 2023 at 21:02
To: Yuri Weinstein , Nizamudeen A ,
Guillaume Abrioux , Travis Nielsen
Cc: A
On Thu, Nov 16, 2023 at 2:31 AM Gregory Farnum wrote:
> There are versioning and dependency issues (both of packages, and compiler
> toolchain pieces) which mean that the existing reef releases do not build
> on Debian. Our upstream support for Debian has always been inconsistent
>
I think, to b
On Thu, Nov 16, 2023 at 3:21 AM Xiubo Li wrote:
>
> Hi Matt,
>
> On 11/15/23 02:40, Matt Larson wrote:
> > On CentOS 7 systems with the CephFS kernel client, if the data pool has a
> > `nearfull` status, there is a slight reduction in write speeds (possibly
> > 20-50% fewer IOPS).
> >
> > On a simi
Hello,
These are the last lines of /var/log/ceph/cephadm.log of the active mgr
machine after an error occurred.
As I don't feel this will be very helpful, would you please tell us
where to look?
Best regards,
JM Fontana
2023-11-16 14:45:08,200 7f341eae8740 DEBUG
--
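A small sketch of how to locate the right log on a cephadm-managed cluster (cephadm.log only records the cephadm tool itself, not the mgr daemon); the mgr daemon name is a placeholder:

    # find the active mgr, then pull that daemon's journal on its host
    ceph mgr stat
    cephadm logs --name mgr.<daemon-name> -- -n 200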
Hello.
Great information, I will keep it in mind.
Thank you :)
Nguyen Huu Khoi
On Thu, Nov 16, 2023 at 5:51 PM Janne Johansson wrote:
> On Thu, 16 Nov 2023 at 08:43, Nguyễn Hữu Khôi wrote:
> >
> > Hello,
> > Yes, I see it does not exceed RSS, but in "ceph orch ps" it is over
> > targe
Partially answering my own question. I think it is possible to tweak the
existing parameters to achieve what I'm looking for on average. The main reason
I want to use the internal scheduler is the high number of PGs on some pools,
which I actually intend to increase even further. For such pools
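As a side note, a sketch of how the current PG counts can be reviewed before raising them; <pool> is a placeholder:

    # current pg_num and the autoscaler's recommendation per pool
    ceph osd pool autoscale-status
    ceph osd pool get <pool> pg_num
    # raise it gradually if needed
    ceph osd pool set <pool> pg_num 1024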
Hello,
can you also add the mgr logs at the time of this error?
Regards,
On Thu, Nov 16, 2023 at 4:12 PM Jean-Marc FONTANA
wrote:
> Hello David,
>
> We tried what you pointed in your message. First, it was set to
>
> "s3, s3website, swift, swift_auth, admin, sts, iam, subpub"
>
> We tried to s
On Thu, 16 Nov 2023 at 08:43, Nguyễn Hữu Khôi wrote:
>
> Hello,
> Yes, I see it does not exceed RSS, but in "ceph orch ps" it is over the
> target. Does MEM USE include cache? Am I right?
>
> NAME  HOST  PORTS  STATUS  REFRESHED  AGE  MEM USE  MEM LIM  VER
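One way to compare the numbers, as a sketch (osd.12 is a placeholder; `ceph daemon` must be run on the host where that OSD lives):

    # the target the OSDs trim their caches to fit under
    ceph config get osd osd_memory_target
    # actual resident vs. virtual memory of the OSD processes on a host
    ps -o pid,rss,vsz,cmd -C ceph-osd
    # breakdown of the OSD's internal allocations, including caches
    ceph daemon osd.12 dump_mempools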
That's really useful to know, thanks Daniel.
On 15/11/2023 19:07, Daniel Baumann wrote:
On 11/15/23 19:52, Daniel Baumann wrote:
for 18.2.0, there's only one trivial thing needed:
https://git.progress-linux.org/packages/graograman-backports-extras/ceph/commit/?id=ed59c69244ec7b81ec08f7a2d1a1f0a
On Thu, 16 Nov 2023 at 00:30, Giuliano Maggi wrote:
> Hi,
>
> I’d like to remove some “spurious” data:
>
> root@nerffs03:/# ceph df
> --- RAW STORAGE ---
> CLASS  SIZE     AVAIL    USED    RAW USED  %RAW USED
> hdd    1.0 PiB  1.0 PiB  47 GiB  47 GiB            0
> TOTAL  1.0 PiB  1.0 PiB  47 GiB
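For what it's worth, a couple of commands that usually show where such a small amount of "used" raw space comes from (BlueStore metadata and WAL overhead per OSD is reported as used even on an empty cluster):

    ceph df detail        # per-pool stored vs. used
    ceph osd df tree      # per-OSD raw use, including the few GiB of BlueStore overhead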
Hello David,
We tried what you pointed in your message. First, it was set to
"s3, s3website, swift, swift_auth, admin, sts, iam, subpub"
We tried to set it to "s3, s3website, swift, swift_auth, admin, sts,
iam, subpub, notifications"
and then to "s3, s3website, swift, swift_auth, admin, sts,
Hello.
I will read more about it.
Thank you :)
Nguyen Huu Khoi
On Thu, Nov 16, 2023 at 3:21 PM Zakhar Kirpichenko wrote:
> Orch ps seems to show the virtual set size instead of the resident set size.
>
> /Z
>
> On Thu, 16 Nov 2023 at 09:43, Nguyễn Hữu Khôi
> wrote:
>
>> Hello,
>> Yes, I see it does
Hi All
(apologies if you get this again; I suspect mails from my @science.ru.nl
account get dropped by most receiving mail servers, due to the strict
DMARC policy (p=reject) in place)
after a long while of being in HEALTH_ERR state (due to an unfound
object, which we eventually decided to "forget"), we ar
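For context, a sketch of the commands involved in that "forget" step; 2.5f is a placeholder PG id:

    ceph health detail                      # names the PG(s) with unfound objects
    ceph pg 2.5f list_unfound               # which objects are unfound
    # last resort: give up on the unfound objects (this is what "forgetting" amounts to)
    ceph pg 2.5f mark_unfound_lost revert   # or: mark_unfound_lost delete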
> Docs question: https://tracker.ceph.com/issues/11385: Can a member of the
> community just raise a PR attempting to standardize commands, without
> coordinating with a team?
In this case I think I would recommend having both "rm" and "del" do
the same thing. I agree that this kind of mixup cau
Orch ps seems to show the virtual set size instead of the resident set size.
/Z
On Thu, 16 Nov 2023 at 09:43, Nguyễn Hữu Khôi
wrote:
> Hello,
> Yes, I see it does not exceed RSS, but in "ceph orch ps" it is over the
> target. Does MEM USE include cache? Am I right?
>
> NAME  HOST