I recently upgraded to Quincy and toggled on the BULK flag on a few pools. As
a result, my cluster has been spending the last several days shuffling data
while growing the pools' PG counts. That in turn has resulted in a steadily
increasing number of PGs being flagged PG_NOT_DEEP_SCRUBBED. And
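For reference, the backlog can be inspected and nudged along with something like the following (the PG id and the osd_max_scrubs value are just examples, not recommendations):
# list the PGs currently flagged as overdue for a deep scrub
ceph health detail | grep 'not deep-scrubbed since'
# temporarily allow more concurrent scrubs per OSD while the backlog drains
ceph config set osd osd_max_scrubs 3
# kick off a deep scrub of a single PG by hand (7.1a is a made-up PG id)
ceph pg deep-scrub 7.1a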
There do exist vfs_ceph and vfs_ceph_snapshots modules for Samba, at least in
theory.
https://www.samba.org/samba/docs/current/man-html/vfs_ceph.8.html
https://www.samba.org/samba/docs/current/man-html/vfs_ceph_snapshots.8.html
However, they don't exist in, for instance, the version of Samba in
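For context, a minimal smb.conf stanza using vfs_ceph would look something like this (the share name, CephFS path, and cephx user are placeholders):
[cephfs]
    path = /shares/projects           ; path inside the CephFS filesystem
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba              ; cephx user the module connects as
    kernel share modes = no
    read only = no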
Great, thank you both for the confirmation!
-Original Message-
From: Xiubo Li
Sent: Friday, October 21, 2022 8:43 AM
To: Rishabh Dave ; Edward R Huyer
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: MDS_CLIENT_LATE_RELEASE after setting up
scheduled CephFS snapshots
On 21/10
I recently set up scheduled snapshots on my CephFS filesystem, and ever since,
the cluster has been intermittently going into HEALTH_WARN with an
MDS_CLIENT_LATE_RELEASE notification.
Specifically:
[WARN] MDS_CLIENT_LATE_RELEASE: 1 clients failing to respond to capability
release
mds.[
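For reference, the sort of commands that identify the offending client session (the MDS name and client id below are placeholders):
# the health detail names the client session holding the caps
ceph health detail
# list sessions on the affected MDS daemon
ceph tell mds.cephfs.host1.abcdef client ls
# as a last resort, evict the stuck session
ceph tell mds.cephfs.host1.abcdef client evict id=1234567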
Sent: Wednesday, May 25, 2022 5:03 PM
To: Edward R Huyer
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Replacing OSD with DB on shared NVMe
In your example, you can log in to the server in question with the OSD and run
"ceph-volume lvm zap --osd-id <osd-id> --destroy", and it will purge
Ok, I'm not sure if I'm missing something or if this is a gap in ceph orch
functionality, or what:
On a given host all the OSDs share a single large NVMe drive for DB/WAL storage
and were set up using a simple ceph orch spec file. I'm replacing some of the
OSDs. After they've been removed wit
Actually, one other question occurred to me: Was your testing environment bare
metal or a cephadm containerized install? It shouldn't matter, and I don't
know that it does matter, but my environment is containerized.
--
Edward Huyer
-Original Message-
From: Edwa
From: Ernesto Puerta [mailto:epuer...@redhat.com]
Sent: Tuesday, January 11, 2022 11:25 AM
To: Edward R Huyer
Cc: ceph-users@ceph.io
S
Ok, I think I've nearly got the dashboard working with SAML/Shibboleth
authentication, except for one thing: If a user authenticates via SAML, but a
corresponding dashboard user hasn't been created, it triggers a loop where the
browser gets redirected to a nonexistent dashboard unauthorized pag
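As I understand it, SSO users still need a matching local dashboard account, so the loop should stop once one exists. A sketch of creating it ahead of time (username, role, and password file are placeholders; the password isn't used for SAML logins):
# create the dashboard account the SAML principal maps to
echo -n 'throwaway-password' > /tmp/dashboard-pw.txt
ceph dashboard ac-user-create jdoe -i /tmp/dashboard-pw.txt administrator
rm /tmp/dashboard-pw.txt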
> -Original Message-
> From: Carlos Mogas da Silva
> Sent: Wednesday, December 8, 2021 1:26 PM
> To: Edward R Huyer ; Marc ;
> ceph-users@ceph.io
> Subject: Re: [ceph-users] Re: Migration from CentOS7/Nautilus to CentOS
> Stream/Pacific
>
> On Wed, 2021-12-08 at 16:06, Marc wrote:
> > >
> > > It isn't possible to upgrade from CentOS 7 to anything... At least not
> > > without massive hacks that may or may not work (and most
> > > likely won't).
> >
> > I meant wipe the os disk, install whatever, install nautilus and put
>
host’s
filesystem were visible inside the container, and how the container’s and
host’s paths differed.
From: Ernesto Puerta
Sent: Tuesday, November 2, 2021 6:38 AM
To: Edward R Huyer ; Sebastian Wagner
Cc: Yury Kirsanov ; ceph-users@ceph.io
Subject: Re: [ceph-users] Re: Doing SAML2 Auth With
: Edward R Huyer
Sent: Wednesday, October 27, 2021 9:31 AM
To: 'Ernesto Puerta'
Cc: Yury Kirsanov ; ceph-users@ceph.io
Subject: RE: [ceph-users] Re: Doing SAML2 Auth With Containerized mgrs
Thank you for the reply. Even if there’s a good reason for the CLI tool to not
send the conte
specific suggestions as to how to approach
this? I'm not familiar enough with the details of the cephadm-deployed containers
specifically, or containers in general, to know where to start.
From: Ernesto Puerta
Sent: Wednesday, October 27, 2021 6:53 AM
To: Edward R Huyer
Cc: Yury Kirsanov ; ceph-users
No worries. It's a pretty specific problem, and the documentation could be
better.
-Original Message-
From: Yury Kirsanov
Sent: Monday, October 25, 2021 12:17 PM
To: Edward R Huyer
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: Doing SAML2 Auth With Containerized mgrs
Hi E
ceph config-key set mgr/cephadm/grafana_crt -i <certificate>.crt
ceph config-key set mgr/cephadm/grafana_key -i <key>.key
ceph orch reconfig grafana
ceph mgr module enable dashboard
Hope this helps!
Regards,
Yury.
On Tue, Oct 26, 2021 at 2:45 AM Edward R Huyer <erh...@rit.edu> wrote:
Continuing my containerized Ceph adventures
I'm trying to set up SAML2 auth for the dashboard (specifically pointing at the
institute's Shibboleth service). The service requires the use of x509
certificates. Following the instructions in the documentation (
https://docs.ceph.com/en/late
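For reference, the command being followed there is ceph dashboard sso setup saml2; with placeholder URLs and file paths it looks roughly like:
# base URL, IdP metadata (file or URL), username attribute, IdP entity id,
# then the SP x509 cert and key
ceph dashboard sso setup saml2 https://ceph-dash.example.edu:8443 \
    /root/idp-metadata.xml uid https://idp.example.edu/idp/shibboleth \
    /root/sp.crt /root/sp.key
# verify what was stored
ceph dashboard sso show saml2
ceph dashboard sso status
# note: with a containerized mgr, host paths and container paths can differ,
# which is the wrinkle discussed in this thread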
Gotcha. Thanks for the input regardless. I suppose I'll continue what I'm
doing, and plan on doing an upgrade via quay.io in the near future.
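(For completeness, the upgrade itself is a one-liner once the orchestrator is pointed at quay.io; the version tag is just an example:)
# start a rolling upgrade to an image pulled from quay.io
ceph orch upgrade start --image quay.io/ceph/ceph:v16.2.6
# watch it progress
ceph orch upgrade status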
-Original Message-
From: Gregory Farnum
Sent: Monday, October 4, 2021 7:14 PM
To: Edward R Huyer
Cc: ceph-users@ceph.io
Subject:
arnum
Sent: Monday, October 4, 2021 2:33 PM
To: Edward R Huyer
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Daemon Version Mismatch (But Not Really?) After
Deleting/Recreating OSDs
On Mon, Oct 4, 2021 at 7:57 AM Edward R Huyer wrote:
>
> Over the summer, I upgraded my cluster from N
Over the summer, I upgraded my cluster from Nautilus to Pacific, and converted
to use cephadm after doing so. Over the past couple weeks, I've been
converting my OSDs to use NVMe drives for db+wal storage. Schedule a node's
worth of OSDs to be removed, wait for that to happen, delete the PVs a
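The removal half of that cycle, roughly (OSD ids, hostname, and device path are placeholders):
# drain and remove a node's worth of OSDs
ceph orch osd rm 10 11 12
ceph orch osd rm status
# once they're gone, wipe the old data devices so they can be redeployed
ceph orch device zap osd-host-01 /dev/sdc --force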
I also just ran into what seems to be the same problem Chris did. Despite all
indicators visible to me saying my NVMe drive is non-rotational (including
/sys/block/nvme0n1/queue/rotational ), the Orchestrator would not touch it
until I specified it by model.
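In spec terms, "specifying it by model" would look something like this (the model string is a placeholder; the real one comes from ceph orch device ls):
service_type: osd
service_id: hdd-osds-nvme-db
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    model: 'EXAMPLE-NVME-MODEL'   # placeholder model string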
-Original Message-
From: Eu
1 10G
vde 08G
So the limit filter works as expected here. If I didn't specify it, I wouldn't
get any OSDs, because ceph-volume can't fit three DBs of size
3 GB onto the 8 GB disk.
Does that help?
Regards,
Eugen
Zitat von Edward R Huyer :
I recently upgraded my existing cluster to Pacific and cephadm, and need to
reconfigure all the (rotational) OSDs to use NVMe drives for db storage. I
think I have a reasonably good idea how that's going to work, but the use of
db_slots and limit in the OSD service specification has me scratch
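For anyone who finds this later: as I understand them, limit caps how many matching devices a filter may consume, and db_slots says how many DB LVs to carve out of each DB device, so the relevant part of the spec ends up roughly like this (hostname and counts are examples):
service_type: osd
service_id: hdd-osds-shared-nvme
placement:
  hosts:
    - osd-host-01
spec:
  data_devices:
    rotational: 1        # every spinning drive becomes an OSD
  db_devices:
    rotational: 0
    limit: 1             # use at most one NVMe per host for DBs
  db_slots: 12           # split that NVMe into 12 DB LVs, one per OSD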