Hi Ben,
It looks like you forgot to attach the screenshots.
Regards,
Nizam
On Wed, Jun 21, 2023, 12:23 Ben wrote:
> Hi,
>
> I got many critical alerts in the Ceph dashboard. Meanwhile, the cluster
> shows a HEALTH_OK status.
>
> See attached screenshot for details. My questions are: are they real alerts
Hi,
can you share more details about what exactly you did? How did you
remove the nodes? Hopefully, you waited for the draining to finish? But
if the remaining OSDs are waiting for the removed OSDs, it sounds like
the draining was not finished.
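If the removal went through the orchestrator, the state of the draining can still be checked afterwards; a minimal sketch, assuming cephadm manages the cluster:
# list pending/active OSD removals and whether draining has completed
ceph orch osd rm status
# overall cluster state; draining is done when no PGs are degraded or backfilling
ceph -s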
Quoting Malte Stroem:
Hello,
we removed some nodes from our cluster
Hello Eugen,
thank you. Yesterday I thought: Well, Eugen can help!
Yes, we drained the nodes. It took two weeks to finish the process,
and yes, I think this is the root cause.
We still have the nodes, but when I try to restart one of those OSDs
it still cannot join:
Jun 21 09:46:03 cep
Hi,
Yes, we drained the nodes. It took two weeks to finish the
process, and yes, I think this is the root cause.
We still have the nodes, but when I try to restart one of those
OSDs it still cannot join:
if the nodes were drained successfully (can you confirm that all PGs
were active+clean?)
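A quick way to verify that, as a sketch (not specific to your pools):
# summary of PG states; everything should be active+clean before removing nodes
ceph pg stat
# list any PGs that are not active+clean
ceph pg ls | grep -v 'active+clean'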
Hi Igor,
thank you for your answer!
>first of all Quincy does have a fix for the issue, see
>https://tracker.ceph.com/issues/53466 (and its Quincy counterpart
>https://tracker.ceph.com/issues/58588)
Thank you, I somehow missed that release, good to know!
>SSD or HDD? Standalone or shared DB volume?
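One way to answer this from the cluster itself (a sketch; osd.0 is just an example id and the field names can vary between releases):
# rotational flags: 0 = SSD/NVMe, 1 = HDD
ceph osd metadata 0 | grep -e '"bluestore_bdev_rotational"' -e '"bluefs_db_rotational"'
# whether a dedicated DB device is configured
ceph osd metadata 0 | grep '"bluefs_dedicated_db"'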
I've updated the dc3 site from Octopus to Pacific and the problem is still
there.
I find it very weird that it only happens from one single zonegroup to the
master and not from the other two.
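For narrowing this down, it might help to compare the sync state as seen from the affected zonegroup and from one that works; a small sketch:
# run on the non-syncing site and on a healthy one, then compare the output
radosgw-admin sync status
# list sync errors that have been recorded
radosgw-admin sync error list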
On Wed, Jun 21, 2023 at 01:59, Boris Behrens wrote:
> I recreated the site and the problem still persists
On 20/06/2023 01:16, Work Ceph wrote:
I see, thanks for the feedback guys!
It is interesting that Ceph Manager does not allow us to export iSCSI
blocks without selecting 2 or more iSCSI portals. Therefore, we will
always use at least two, and as a consequence that feature is not
going to be
Hi,
Will that try to be smart and just restart a few at a time to keep things
up and available? Or will it just trigger a restart everywhere
simultaneously?
basically, that's what happens, for example, during an upgrade when
services are restarted. It's designed to be a rolling upgrade
procedure.
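As a concrete illustration, assuming the cephadm orchestrator, restarts can be scoped to a service or a single daemon instead of everything at once (a sketch; the service and daemon names are placeholders):
# restart all daemons of one service through the orchestrator
ceph orch restart <service-name>
# or restart just one daemon
ceph orch daemon restart osd.12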
Hello Eugen,
recovery and rebalancing finished, however now all PGs show missing OSDs.
Everything looks like the PGs are missing OSDs although the draining
finished correctly, as if we had shut down the servers immediately.
But we removed the nodes the way it is described in the documentation.
We just
Hi Ben, also, if some alerts are noisy, we have an option in the dashboard
to silence those alerts.
Can you also provide the list of critical alerts that you see?
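If the alerts come from the bundled Prometheus/Alertmanager stack, silences can also be created outside the dashboard UI; a minimal sketch, assuming amtool can reach your Alertmanager (the URL and the alert name are only examples):
# silence one alert for two hours
amtool silence add alertname="CephOSDNearFull" \
  --alertmanager.url=http://<alertmanager-host>:9093 \
  --duration=2h --comment="known noisy alert"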
On Wed, 21 Jun 2023 at 12:48, Nizamudeen A wrote:
> Hi Ben,
>
> It looks like you forgot to attach the screenshots.
>
> Regards,
> Nizam
>
Yes, I was missing the create step:
ceph osd create uuid id
This works!
Best,
Malte
Am 20.06.23 um 18:42 schrieb Malte Stroem:
Well, things I would do:
- add the keyring to ceph auth
ceph auth add osd.XX osd 'allow *' mon 'allow rwx' -i
/var/lib/ceph/uuid/osd.XX/keyring
- add the OSD to CRUSH
ceph os
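For reference, the whole sequence I would expect here is roughly the following (only a sketch; XX, the OSD uuid, the cluster fsid, weight and hostname are placeholders, and newer releases prefer ceph osd new over ceph osd create):
# recreate the OSD id in the cluster map, reusing the original OSD fsid
ceph osd create <osd-uuid> XX
# import the OSD's keyring into the auth database
ceph auth add osd.XX osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/<cluster-fsid>/osd.XX/keyring
# put the OSD back into the CRUSH map under its host
ceph osd crush add osd.XX <weight> host=<hostname>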
Hi Carsten,
please also note a workaround to bring the OSDs back for e.g. data
recovery - set bluefs_shared_alloc_size to 32768.
This will hopefully allow the OSD to start up and pull data out of it. But I
wouldn't encourage you to use such OSDs long term as fragmentation
might evolve and th
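In practice that could look like this (just a sketch; osd.XX is a placeholder and the OSD needs a restart for the value to take effect):
# apply the workaround only to the affected OSD
ceph config set osd.XX bluefs_shared_alloc_size 32768
# restart it and watch whether it comes up
ceph orch daemon restart osd.XX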
Aaaand another dead end: there is too much metadata involved (bucket and
object ACLs, lifecycle, policy, …) which won’t be possible to migrate perfectly.
Also, lifecycles _might_ be affected if mtimes change.
So, I’m going to try and go back to a single-cluster multi-zone setup. For that
I’m go
I still can’t really grasp what might have happened here. But could
you please clarify which of the down OSDs (or hosts) are supposed to
be down and which you’re trying to bring back online? Obviously osd.40
is one of your attempts. But what about the hosts cephx01 and cephx08?
Are those th
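To get an overview, it may help to list only the down OSDs together with the hosts they sit on; a small sketch:
# show only OSDs that are currently down, grouped by host
ceph osd tree down
# cross-check which hosts the orchestrator knows about and whether any are offline
ceph orch host ls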
Does Quincy automatically switch existing OSDs to 4k, or do you need to deploy a
new OSD to get the 4k size?
Thanks,
Kevin
From: Igor Fedotov
Sent: Wednesday, June 21, 2023 5:56 AM
To: Carsten Grommel; ceph-users@ceph.io
Subject: [ceph-users] Re: Ceph Pacif
A lot of PGs in an inconsistent state occurred.
Most of them were repaired with ceph pg repair all, but in the case of the
3 PGs shown below it does not proceed further and they remain in
failed_repair status.
[root@cephvm1 ~]# ceph health detail
HEALTH_ERR 30 scrub errors; Too many repaired reads on 7 OSDs; Possi
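Before forcing anything further, it is usually worth looking at which objects and shards are actually affected; a sketch with placeholder pool and PG ids:
# list inconsistent PGs per pool
rados list-inconsistent-pg <pool-name>
# per-object error details for one failing PG
rados list-inconsistent-obj <pg-id> --format=json-pretty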
On 6/21/23 11:20, Malte Stroem wrote:
Hello Eugen,
recovery and rebalancing finished, however now all PGs show missing
OSDs.
Everything looks like the PGs are missing OSDs although the draining
finished correctly, as if we had shut down the servers immediately.
But we removed the nodes the way it is described in the documentation.