Hi Sake,
I would start by decrementing max_mds by 1:
ceph fs set atlassian-prod max_mds 2
Does mds.1 no longer restart?
Can you share its logs?
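If it helps, this is roughly how I'd check the MDS state and grab the logs, assuming a cephadm deployment (the daemon name below is only a placeholder, take the real one from the orch output):

# current state of the filesystem and its ranks
ceph fs status atlassian-prod
ceph mds stat

# find the MDS daemons and the hosts they run on
ceph orch ps | grep mds

# journal of the damaged daemon, run on the host where it lives
# (name is an example, use the one reported above)
cephadm logs --name mds.atlassian-prod.host1.abcdef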
On Thu, Dec 21, 2023 at 08:11, Sake Ceph wrote:
> Starting a new thread, I forgot the subject in the previous one.
> So our FS is down. I got the following error, what can I do?
Starting a new thread, I forgot the subject in the previous one.
So our FS is down. I got the following error, what can I do?
# ceph health detail
HEALTH_ERR 1 filesystem is degraded; 1 mds daemon damaged
[WRN] FS_DEGRADED: 1 filesystem is degraded
fs atlassian/prod is degraded
[ERR] MDS_DAMAGE: 1 mds daemon damaged
Is it possible to configure Ceph so that STS AssumeRoleWithWebIdentity
works with a Kubernetes service account token?
My goal is that a pod running in a Kubernetes cluster can call
AssumeRoleWithWebIdentity, specifying an IAM role (previously created in
Ceph) and the Kubernetes OIDC service account token.
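Not an authoritative answer, but as far as I understand it the pieces are: enable STS on RGW, register the Kubernetes token issuer as an OIDC provider (via the IAM CreateOpenIDConnectProvider call against RGW), create a role whose trust policy matches the service account, and then the pod exchanges its projected token. A rough sketch, where the endpoint, role ARN and the 16-character STS key are placeholders, not values from a real setup:

# enable STS on the RGW instances
ceph config set client.rgw rgw_s3_auth_use_sts true
ceph config set client.rgw rgw_sts_key 1234567890abcdef

# inside the pod: trade the service account token for temporary credentials
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
aws sts assume-role-with-web-identity \
  --endpoint-url http://rgw.example.com:8000 \
  --role-arn "arn:aws:iam:::role/k8s-pod-role" \
  --role-session-name pod-session \
  --web-identity-token "$TOKEN"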
Hi all,
I need your help! Our FS is degraded.
Health: mds.1 is damaged
ceph tell mds.1 damage ls
Resolve_mds: gid 1 not in mds map
Best regards,
Sake
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
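Not sure this is the whole story, but "ceph tell mds.1 ..." addresses an MDS by daemon name/GID rather than by rank, which would explain the "gid 1 not in mds map" reply. Addressing it as <fsname>:<rank>, or with the full daemon name from "ceph fs status", usually works, for example (filesystem name as in the earlier reply, rank 0 only as an illustration):

ceph tell mds.atlassian-prod:0 damage ls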
Hello ceph-users,
First, sorry for my English...
I found a bug (in 16.2.10 and 18.2.1), but I do not know why.
Test steps:
1. Configure the chronyd service on all hosts.
2. cephadm --image quay.io/ceph/ceph:v16.2.10 bootstrap --dashboard-password-noupdate --mon-ip 10.40.10.200 --cluster-network=10.40.10
Yes, this was a mistake (the 18.2.0 container was rebuilt and took over
the less-specific version tags). We've fixed it and don't expect it to
recur.
On 12/20/2023 6:49 AM, Simon Ironside wrote:
Hi All,
We're deploying a fresh Reef cluster now and noticed that cephadm
bootstrap deploys 18.2
On Tuesday, December 19, 2023 1:02:25 AM EST Eugen Block wrote:
> The option '--image' is to be used for the cephadm command, not the
> bootstrap command. So it should be like this:
>
> cephadm --image bootstrap…
>
> This is also covered by the link I provided (isolated environment).
>
Agreed,
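For the archives, written out in full (the image tag here is only an example):

cephadm --image quay.io/ceph/ceph:v18.2.1 bootstrap --mon-ip <mon-ip>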
(Zac Dover) "make check" fails now even for some docs builds. For example:
https://github.com/ceph/ceph/pull/54970, which is a simple edit of
ReStructured Text in doc/radosgw/compression.rst. Greg Farnum and Dan Mick
have already done preliminary investigation of this matter here:
https://ceph-stor
Hi All,
We're deploying a fresh Reef cluster now and noticed that cephadm
bootstrap deploys 18.2.0 and not 18.2.1. It appears this is because the
v18, v18.2 (and v18.2.0) tags are all pointing to the v18.2.0-20231212
tag since 16th December here:
https://quay.io/repository/ceph/ceph?tab=hist
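In case it helps anyone else checking this: assuming skopeo is available, the digest a floating tag currently points to can be compared against the pinned point-release tag like this:

# digest behind the floating tag
skopeo inspect docker://quay.io/ceph/ceph:v18.2 | grep Digest

# digest behind the pinned tag
skopeo inspect docker://quay.io/ceph/ceph:v18.2.1 | grep Digest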
Thanks, that is most useful to know!
The Ceph docs are very good except when they propagate obsolete
information. For example, using "ceph-deploy" on Octopus (my copy
didn't come with ceph-deploy - it used cephadm).
And, alas, nothing has been written to delineate differences between
containerize
Just to add a bit more information, the 'ceph daemon' command is still
valid, it just has to be issued inside of the containers:
quincy-1:~ # cephadm enter --name osd.0
Inferring fsid 1e6e5cb6-73e8-11ee-b195-fa163ee43e22
[ceph: root@quincy-1 /]# ceph daemon osd.0 config diff | head
{
"diff"
I can't speak to the details of ceph-ansible. I don't use it because, from
what I can see, ceph-ansible requires a lot more symmetry in the server
farm than I have.
It is, however, my understanding that cephadm is the preferred
installation and management option these days and it certainly helped
me t
Hi,
On Tue, Dec 19, 2023 at 9:16 PM Huseyin Cotuk wrote:
> After comparing the first blocks of running and failed OSDs, we found that HW
> crash caused a corruption on the first 23 bytes of block devices.
>
> First few bytes of the block device of a failed OSD contains:
>
> :
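For anyone who wants to do the same comparison: the first bytes of a BlueStore OSD device hold the bluestore label, so something like the following shows whether it is intact (the device path is just an example):

# prints the label as JSON (osd_uuid, size, ...) on a healthy device
ceph-bluestore-tool show-label --dev /dev/ceph-vg/osd-block-0

# raw view of the first 32 bytes, for comparing a good and a bad OSD
dd if=/dev/ceph-vg/osd-block-0 bs=32 count=1 2>/dev/null | hexdump -C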