On 24/04/2024 13:43, Bailey Allison wrote:
> A simple ceph-volume lvm activate should get all of the OSDs back up and
> running once you install the proper packages/restore the ceph config
> file/etc.
What's the equivalent procedure in a cephadm-managed cluster?
Thanks,
Matthew
Oh I'm sorry, Peter, I don't know why I wrote Karl. I apologize.
Quoting Eugen Block:
Hi Karl,
I must admit that I haven't dealt with raw OSDs yet. We've usually been
working with LVM-based clusters (some of the customers used SUSE's
product SES), and in SES there was a recommendation to switch to LVM
before adopting with cephadm. So we usually did a rebuild of all OSDs
beforehand.
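For reference, the per-daemon conversion that "adopting" refers to is the
documented cephadm adoption step; a minimal sketch for a single OSD (the id
is a placeholder):
  cephadm adopt --style legacy --name osd.<id>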
Hi,
If I may, I would try something like this, but I haven't tested it, so
please take it with a grain of salt...
1. I would reinstall the operating system in this case...
Since the root filesystem is accessible but the OS is not bootable, the
most straightforward approach would be to perform a fresh reinstall of the
operating system.
Thanks Eugen and others for the advice. These are not, however, LVM-based
OSDs. I can get a list of what is out there with:
cephadm ceph-volume raw list
and tried
cephadm ceph-volume raw activate
but it tells me I need to manually run activate.
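As an untested sketch, the manual activation for a raw OSD would presumably
be something along these lines (the device path is a placeholder, and the
available flags vary between releases, so check ceph-volume raw activate -h
first):
  cephadm shell                         # container with ceph-volume available
  ceph-volume raw list                  # find the data device and the OSD fsid
  ceph-volume raw activate --device /dev/sdX --no-systemd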
I was able to find the correct data disks with fo
In addition to Nico's response, three years ago I wrote a blog post
[1] about that topic; maybe that can help as well. It might be a bit
outdated; what it definitely doesn't contain is this command from the
docs [2], to be run once the server has been re-added to the host list:
ceph cephadm osd activate <host>
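Spelled out with placeholders for the host name and address, and assuming
cephadm's SSH key is already in place on the reinstalled node, the sequence
would roughly be:
  ceph orch host add <hostname> <ip-address>   # re-add the node to the host list
  ceph cephadm osd activate <hostname>         # scan its disks and recreate the OSD daemons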
Hey Peter,
A simple ceph-volume lvm activate should get all of the OSDs back up and
running once you install the proper packages/restore the ceph config
file/etc.
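As a rough, untested sketch of that sequence on a Debian/Ubuntu-style node
(the package name and the source of the config file are assumptions, adjust
to your setup):
  apt install ceph-osd                            # reinstall the OSD packages
  scp <mon-host>:/etc/ceph/ceph.conf /etc/ceph/   # restore the cluster config
  ceph-volume lvm activate --all                  # discover and start all LVM-based OSDs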
If the node was also a mon/mgr, you can simply re-add those services.
Regards,
Bailey
Hey Peter,
the /var/lib/ceph directories mainly contain "metadata" that, depending
on the Ceph version and OSD setup, can even reside on tmpfs by
default.
Even if the data was on disk, they are easy to recreate:
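A minimal sketch of what that recreation typically amounts to for a plain
LVM OSD (the id/fsid values are placeholders, taken from ceph-volume lvm
list):
  ceph-volume lvm list                          # shows the osd id and osd fsid of each LV
  ceph-volume lvm activate <osd-id> <osd-fsid>  # recreates /var/lib/ceph/osd/ceph-<id> and starts the OSD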