Hello!
I have a cluster with a datacenter CRUSH map (A+B), 9+9 = 18 servers.
The cluster started on v12.2.0 Luminous four years ago, and over the years
I have upgraded it Luminous > Mimic > v14.2.16 Nautilus.
Now I have a weird issue. When I add a mon, or shut one down for a while and
start it up again, all th
Hello,
We are in the process of bringing new hardware online that will allow us
to get all of the MGRs, MONs, MDSs, etc. off of our OSD nodes and onto
dedicated management nodes. I've created MGRs and MONs on the new
nodes, and I found procedures for disabling the MONs from the OSD nodes.
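For context, the procedure I'm working from boils down to something like this
(the hostname below is just a placeholder, and this assumes a
non-containerized, systemd-managed deployment):
# systemctl stop ceph-mon@old-osd-node
# systemctl disable ceph-mon@old-osd-node
# ceph mon remove old-osd-node
followed by removing the old mon data directory under /var/lib/ceph/mon/ and
dropping the host from mon_host in ceph.conf.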
We just upgraded our cluster from Luminous to Nautilus, and after a few
days one of our MDS servers is getting:
2021-03-28 18:06:32.304 7f57c37ff700 5 mds.beacon.sun-gcs01-mds02 Sending beacon up:standby seq 16
2021-03-28 18:06:32.304 7f57c37ff700 20 mds.beacon.sun-gcs01-mds02 sender thread waiting
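For reference, this is visible with the usual commands, roughly:
# ceph fs status
# ceph mds stat
# ceph fs dump
to see whether the daemon is registered as a standby and whether the
filesystem still has an active rank.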
Hi Robert,
I just checked your email on the ceph-users list.
I will try to look deeper into the question.
For now I have a query related to the upgrade itself:
is it possible for you to send any links/documents that you are following
to upgrade Ceph?
I am trying to upgrade a Ceph cluster using ceph-ansible o
I request the moderators to approve this.
It has been a while and a solution to the issue has not been found yet.
-Lokendra
On Tue, 23 Mar 2021, 10:17 Lokendra Rathour wrote:
> Hi Team,
> I am trying to upgrade my existing Ceph cluster (using ceph-ansible) from
> the current release, Octopus, to Pacific
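> Roughly, the command I am using is along the lines of the standard
> ceph-ansible rolling update playbook (the inventory name here is just a
> placeholder, and this assumes the matching stable branch is checked out and
> the target release is set in group_vars/all.yml):
> # ansible-playbook -i hosts infrastructure-playbooks/rolling_update.yml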
Hello there,
Thank you for your response.
There are no errors in syslog, dmesg, or SMART.
# ceph health detail
HEALTH_WARN Too many repaired reads on 2 OSDs
OSD_TOO_MANY_REPAIRS Too many repaired reads on 2 OSDs
osd.29 had 38 reads repaired
osd.16 had 17 reads repaired
How can I clear thi
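From what I have found so far, newer releases are supposed to have a command
to reset these counters, though I am not sure it exists on my version:
# ceph tell osd.29 clear_shards_repaired
# ceph tell osd.16 clear_shards_repaired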
Thank you Stefan and Josh!
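For the archive, if I have this right, the part each client and OSD host needs
is just the mon_host line in the [global] section of ceph.conf, e.g.
(addresses are placeholders):
[global]
mon_host = 10.0.0.11,10.0.0.12,10.0.0.13,10.0.0.14,10.0.0.15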
Tony
From: Josh Baergen
Sent: March 28, 2021 08:28 PM
To: Tony Liu
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: Do I need to update ceph.conf and restart each OSD after adding more MONs?
As was mentioned in this thread,